venue | paper_content | prompt | format | review
---|---|---|---|---|
ICLR | Title
Predicting the Generalization Gap in Deep Networks with Margin Distributions
Abstract
Recent research has demonstrated that deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how generalization gap can be predicted from training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of margin distribution, i.e., the set of distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of margin distribution rather than just the margin (closest distance to decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum). Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization.
1 INTRODUCTION
Generalization, the ability of a classifier to perform well on unseen examples, is a desideratum for progress towards real-world deployment of deep neural networks in domains such as autonomous cars and healthcare. Until recently, it was commonly believed that deep networks generalize well to unseen examples. This was based on empirical evidence about performance on held-out dataset. However, new research has started to question this assumption. Adversarial examples cause networks to misclassify even slightly perturbed images at very high rates (Goodfellow et al., 2014; Papernot et al., 2016). In addition, deep networks can overfit to arbitrarily corrupted data (Zhang et al., 2016), and they are sensitive to small geometric transformations (Azulay & Weiss, 2018; Engstrom et al., 2017). These results have led to the important question about how the generalization gap (difference between train and test accuracy) of a deep network can be predicted using the training data and network parameters. Since in all of the above cases, the training loss is usually very small, it is clear that existing losses such as cross-entropy cannot serve that purpose. It has also been shown (e.g. in Zhang et al. (2016)) that regularizers such as weight decay cannot solve this problem either.
Consequently, a number of recent works (Neyshabur et al., 2017b; Kawaguchi et al., 2017; Bartlett et al., 2017; Poggio et al., 2017; Arora et al., 2018) have started to address this question, proposing generalization bounds based on analyses of network complexity or noise stability properties. However, a thorough empirical assessment of these bounds in terms of how accurately they can predict the generalization gap across various practical settings is not yet available.
In this work, we propose a new quantity for predicting generalization gap of a feedforward neural network. Using the notion of margin in support vector machines (Vapnik, 1995) and its extension to deep networks (Elsayed et al., 2018), we develop a measure that shows a strong correlation with generalization gap and significantly outperforms recently developed theoretical bounds on
∗Work done as part of the Google AI Residency. Data and relevant code are at https://github.com/google-research/google-research/tree/master/demogen
generalization2. This is empirically shown by studying a wide range of deep networks trained on the CIFAR-10 and CIFAR-100 datasets. The measure presented in this paper may be useful for constructing new loss functions with better generalization. Besides improvement in the prediction of the generalization gap, our work is distinct from recently developed bounds and margin definitions in a number of ways:
1. These recently developed bounds are typically functions of weight norms (such as the spectral, Frobenius or various mixed norms). Consequently, they cannot capture variations in network topology that are not reflected in the weight norms, e.g. adding residual connections (He et al., 2016) without careful additional engineering based on the topology changes. Furthermore, some of the bounds require specific treatment for nonlinear activations. Our proposed measure can handle any feedforward deep network.
2. Although some of these bounds involve margin, the margin is only defined and measured at the output layer (Bartlett et al., 2017; Neyshabur et al., 2017b). For a deep network, however, margin can be defined at any layer (Elsayed et al., 2018). We show that measuring margin at a single layer does not suffice to capture generalization gap. We argue that it is crucial to use margin information across layers and show that this significantly improves generalization gap prediction.
3. The common definition of margin, as used in the recent bounds e.g. Neyshabur et al. (2017b), or as extended to deep networks, is based on the closest distance of the training points to the decision boundary. However, this notion is brittle and sensitive to outliers. In contrast, we adopt margin distribution (Garg et al., 2002; Langford & Shawe-Taylor, 2002; Zhang & Zhou, 2017; 2018) by looking at the entire distribution of distances. This is shown to have far better prediction power.
4. We argue that the direct extension of margin definition to deep networks (Elsayed et al., 2018), although allowing margin to be defined on all layers of the model, is unable to capture generalization gap without proper normalization. We propose a simple normalization scheme that significantly boosts prediction accuracy.
2In fairness, the theoretical bounds we compare against were designed to be provable upper bounds rather than estimates with low expected error. Nevertheless, since recent developments on characterizing the generalization gap of deep networks are in form of upper bounds, they form a reasonable baseline.
2 RELATED WORK
The recent seminal work of Zhang et al. (2016) has brought into focus the question of how generalization can be measured from training data. They showed that deep networks can easily learn to fit randomly labeled data with extremely high accuracy, but with arbitrarily low generalization capability. This overfitting is not countered by deploying commonly used regularizers. The work of Bartlett et al. (2017) proposes a measure based on the ratio of two quantities: the margin distribution measured at the output layer of the network; and a spectral complexity measure related to the network’s Lipschitz constant. Their normalized margin distribution provides a strong indication of the complexity of the learning task, e.g. the distribution is skewed towards the origin (lower normalized margin) for training with random labels. Neyshabur et al. (2017b;a) also develop bounds based on the product of norms of the weights across layers. Arora et al. (2018) develop bounds based on noise stability properties of networks: more stability implies better generalization. Using these criteria, they are able to derive stronger generalization bounds than previous works.
The margin distribution (specifically, boosting of margins across the training set) has been shown to correspond to generalization properties in the literature on linear models (Schapire et al., 1998): they used this connection to explain the effectiveness of boosting and bagging techniques. Reyzin & Schapire (2006) showed that it was important to control the complexity of a classifier when measuring margin, which calls for some type of normalization. In the linear case (SVM), margin is naturally defined as a function of the norm of the weights (Vapnik, 1995). In the case of deep networks, the true margin is intractable. Recent work (Elsayed et al., 2018) proposed a linearization to approximate the margin, and defined the margin at any layer of the network. Sokolic et al. (2016) provide another approximation to the margin based on the norm of the Jacobian with respect to the input layer. They show that maximizing their approximations to the margin leads to improved generalization. However, their analysis was restricted to margin at the input layer. Poggio et al. (2017) and Liao et al. (2018) propose a normalized cross-entropy measure that correlates well with test loss. Their proposed normalized loss trades off confidence of predictions with stability, which correlates better with test accuracy at the cost of a significantly lower output margin.
3 PREDICTION OF GENERALIZATION GAP
In this section, we introduce our margin-based measure. We first explain the construction scheme for obtaining the margin distribution. We then squeeze the distributional information of the margin to a small number of statistics. Finally, we regress these statistics to the value of the generalization gap. We assess prediction quality by applying the learned regression coefficients to predict the generalization gap of unseen models. We will start with providing a motivation for using the margins at the hidden layers which is supported by our empirical findings. SVM owes a large part of its success to the kernel that allows for inner product in a higher and richer feature space. At its crux, the primal kernel SVM problem is separated into the feature extractor and the classifier on the extracted features. We can separate any feed forward network at any given hidden layer and treat the hidden representation as a feature map. From this view, the layers that precede this hidden layer can be treated as a learned feature extractor and then the layers that come after are naturally the classifier. If the margins at the input layers or the output layers play important roles in generalization of the classifier, it is a natural conjecture that the margins at these hidden representations are also important in generalization. In fact, if we ignore the optimization procedure and focus on a converged network, generalization theories developed on the input such as Lv et al. (2019) can be easily extended to the hidden layers or the extracted features.
3.1 MARGIN APPROXIMATION
First, we establish some notation. Consider a classification problem with n classes. We assume a classifier f consists of non-linear functions fi : X → R, for i = 1, . . . , n that generate a prediction score for classifying the input vector x ∈ X to class i. The predicted label is decided by the class with maximal score, i.e. i∗ = arg maxi fi(x). Define the decision boundary for each class pair (i, j)
as: $D_{(i,j)} \triangleq \{x \mid f_i(x) = f_j(x)\}$ (1)
Under this definition, the lp distance of a point x to the decision boundary D(i,j) can be expressed as the smallest displacement of the point that results in a score tie:
$d_{f,x,(i,j)} \triangleq \min_{\delta} \|\delta\|_p \;\; \text{s.t.} \;\; f_i(x+\delta) = f_j(x+\delta)$ (2)
Unlike an SVM, computing the “exact” distance of a point to the decision boundary (Eq. 2) for a deep network is intractable3. In this work, we adopt the approximation scheme from Elsayed et al. (2018) to capture the distance of a point to the decision boundary. This is a first-order Taylor approximation to the true distance in Eq. 2. Formally, given an input x to a network, denote its representation at the lth layer (the layer activation vector) by $x^l$. For the input layer, let l = 0 and thus $x^0 = x$. Then for p = 2, the distance of the representation vector $x^l$ to the decision boundary for class pair (i, j) is given by the following approximation:
$d_{f,(i,j)}(x^l) = \dfrac{f_i(x^l) - f_j(x^l)}{\|\nabla_{x^l} f_i(x^l) - \nabla_{x^l} f_j(x^l)\|_2}$ (3)
Here $f_i(x^l)$ represents the logit of the network for class i given $x^l$. Note that this distance can be positive or negative, denoting whether the training sample is on the “correct” or “wrong” side of the decision boundary respectively. This distance is well defined for all (i, j) pairs, but in this work we assume that i always refers to the ground truth label and j refers to the second highest or highest class (if the point is misclassified). The training data x induces a distribution of distances at each layer l which, following earlier naming convention (Garg et al., 2002; Langford & Shawe-Taylor, 2002), we refer to as the margin distribution (at layer l). For the margin distribution, we only consider distances with positive sign (we ignore all misclassified training points). This design choice facilitates our empirical analysis when we transform our features (e.g. log transform); further, it has also been suggested that it may be possible to obtain a better generalization bound by only considering the correct examples when the classifier classifies a significant proportion of the training examples correctly, which is usually the case for neural networks (Bartlett, 1998). For completeness, the results with negative margins are included in appendix Sec. 7.
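To make the approximation concrete, below is a minimal NumPy sketch of Eq. 3 for a single sample. It assumes the caller has already computed the layer-l logits and the gradient of each logit with respect to the flattened layer representation; the function name and interface are illustrative and are not the authors' released demogen code.

```python
import numpy as np

def approx_margin(logits, grads, true_class, eps=1e-12):
    """First-order margin approximation (Eq. 3) for one sample.

    logits:     (n_classes,) scores f_i(x^l).
    grads:      (n_classes, d) gradient of each logit w.r.t. the flattened
                layer-l representation x^l.
    true_class: index i of the ground-truth label.
    Returns a signed distance: positive if the sample is on the correct
    side of the (i, j) decision boundary, negative otherwise.
    """
    ranked = np.argsort(logits)[::-1]
    # j is the highest-scoring class other than the ground truth
    j = ranked[0] if ranked[0] != true_class else ranked[1]
    numerator = logits[true_class] - logits[j]
    denominator = np.linalg.norm(grads[true_class] - grads[j]) + eps
    return numerator / denominator
```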
A problem with plain distances and their associated distribution is that they can be trivially boosted without any significant change in the way the classifier separates the classes. For example, consider multiplying weights at a layer by a constant and dividing weights in the following layer by the same constant. In a ReLU network, due to the positive homogeneity property (Liao et al., 2018), this operation does not affect how the network classifies a point, but it changes the distances to the decision boundary4. To offset the scaling effect, we normalize the margin distribution. Consider the margin distribution at some layer l, and let $x^l_k$ be the representation vector for training sample k. We compute the variance of each coordinate of $\{x^l_k\}$ separately, and then sum these individual variances. This quantity is called the total variation of $x^l$. The square root of this quantity relates to the scale of the distribution. That is, if $x^l$ is scaled by a factor, so is the square root of the total variation. Thus, by dividing distances by the square root of the total variation, we can construct a margin distribution invariant to scaling. More concretely, the total variation is computed as:
$\nu(x^l) = \mathrm{tr}\!\left( \frac{1}{n} \sum_{k=1}^{n} (x^l_k - \bar{x}^l)(x^l_k - \bar{x}^l)^T \right), \qquad \bar{x}^l = \frac{1}{n} \sum_{k=1}^{n} x^l_k ,$ (4)
i.e. the trace of the empirical covariance matrix of activations. Using the total variation, the normalized margin is specified by:
$\hat{d}_{f,(i,j)}(x^l_k) = \dfrac{d_{f,(i,j)}(x^l_k)}{\sqrt{\nu(x^l)}}$ (5)
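A minimal sketch of the normalization in Eqs. 4-5 follows; it assumes the layer-l representations have been flattened into a matrix and the unnormalized margins of Eq. 3 are already available, and is our own illustration rather than the authors' code.

```python
import numpy as np

def normalize_margins(margins, reps, eps=1e-12):
    """Scale-invariant margins (Eq. 5).

    margins: (n_samples,) unnormalized margins at layer l (Eq. 3).
    reps:    (n_samples, d) flattened layer-l representations x^l_k.
    """
    # Total variation nu(x^l): the sum of per-coordinate variances,
    # i.e. the trace of the empirical covariance matrix (Eq. 4).
    total_variation = reps.var(axis=0).sum()
    return margins / (np.sqrt(total_variation) + eps)
```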
While the quantity is relatively primitive and easy to compute, Fig. 1 (top) shows that the normalized margin distributions based on Eq. 5 have the desirable effect of becoming heavier tailed and shifting to the right (increasing margin) as generalization gap decreases. We find that this effect holds across a range of networks trained with different hyper-parameters.
3This is because computing the distance of a point to a nonlinear surface is intractable. This is different from SVM where the surface is linear and distance of a point to a hyperplane admits a closed form expression.
4For example, suppose the constant c is greater than one. Then, multiplying the weights of a layer by c magnifies distances computed at the layer by a factor of c.
3.2 SUMMARIZING THE MARGIN DISTRIBUTION
Instead of working directly with the (normalized) margin distribution, it is easier to analyze a compact signature of it. The moments of a distribution are a natural criterion for this purpose. Perhaps the most standard way of doing this is computing the empirical moments from the samples and then taking the nth root of the nth moment. In our experiments, we used the first five moments. However, it is a well-known phenomenon that the estimation of higher order moments based on samples can be unreliable. Therefore, we also consider an alternate way to construct the distribution’s signature. Given a set of distances $D = \{\hat{d}_m\}_{m=1}^{n}$ that constitutes the margin distribution, we use the median Q2, first quartile Q1 and third quartile Q3 of the normalized margin distribution, along with the two fences that indicate variability outside the upper and lower quartiles. There are many variations for fences, but in this work, with $IQR = Q_3 - Q_1$, we define the upper fence to be $\max(\{\hat{d}_m : \hat{d}_m \in D \wedge \hat{d}_m \leq Q_3 + 1.5\,IQR\})$ and the lower fence to be $\min(\{\hat{d}_m : \hat{d}_m \in D \wedge \hat{d}_m \geq Q_1 - 1.5\,IQR\})$ (McGill et al., 1978). These 5 statistics form the quartile description that summarizes the normalized margin distribution at a specific layer, as shown in the box plots of Fig. 1. We will later see that both signature representations are able to predict the generalization gap, with the second signature working slightly better.
A number of prior works such as Bartlett et al. (2017), Neyshabur et al. (2017b), Liu et al. (2016), Sun et al. (2015), Sokolic et al. (2016), and Liang et al. (2017) have focused on analyzing or maximizing the margin at either the input or the output layer of a deep network. Since a deep network has many hidden layers with evolving representations, it is not immediately clear which of the layer margins is of importance for improving generalization. Our experiments reveal that margin distribution from all of the layers of the network contribute to prediction of generalization gap. This is also clear from Fig. 1 (top): comparing the input layer (layer 0) margin distributions between the left and right plots, the input layer distribution shifts slightly left, but the other layer distributions shift the other way. For example, if we use quartile signature, we have 5L components in this vector, where L is the total number of layers in the network. We incorporate dependence on all layers simply by concatenating margin signatures of all layers into a single combined vector θ that we refer to as total signature. Empirically, we found constructing the total signature based on four evenly-spaced layers (input, and 3 hidden layers) sufficiently captures the variation in the distributions and generalization gap, and also makes the signature agnostic to the depth of the network.
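The quartile signature of a single layer and the concatenated total signature across the four chosen layers can be sketched as below. The 1.5·IQR fence convention follows the definition above; the function names are ours and the block is an illustrative assumption about the interface, not the released implementation.

```python
import numpy as np

def quartile_signature(margins):
    """Five statistics of a (positive) normalized margin distribution:
    lower fence, Q1, median, Q3, upper fence (1.5 * IQR fences)."""
    d = np.asarray(margins)
    q1, q2, q3 = np.percentile(d, [25, 50, 75])
    iqr = q3 - q1
    upper_fence = d[d <= q3 + 1.5 * iqr].max()
    lower_fence = d[d >= q1 - 1.5 * iqr].min()
    return np.array([lower_fence, q1, q2, q3, upper_fence])

def total_signature(per_layer_margins):
    """Concatenate per-layer signatures (e.g. input + 3 hidden layers
    gives a 20-dimensional vector theta)."""
    return np.concatenate([quartile_signature(m) for m in per_layer_margins])
```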
3.3 EVALUATION METRICS
Our goal is to predict the generalization gap, i.e. the difference between training and test accuracy at the end of training, based on total signature θ of a trained model. We use the simplest prediction model, i.e. a linear form ĝ = aTφ(θ) + b, where a ∈ Rdim(θ) and b ∈ R are parameters of the predictor, and φ : R→ R is a function applied element-wise to θ. Specifically, we will explore two choices of φ: the identity φ(x) = x and entry-wise log transform φ(x) = log(x), which correspond to additive and multiplicative combination of margin statistics respectively. We do not claim this model is the true relation, but rather it is a simple model for prediction; and our results suggest that it is a surprisingly good approximation.
In order to estimate predictor parameters a, b, we generate a pool of n pretrained models (covering different datasets, architectures, regularization schemes, etc. as explained in Sec. 4) each of which gives one instance of the pair θ, g (g being the generalization gap for that model). We then find a, b by minimizing mean squared error: $(a^*, b^*) = \arg\min_{a,b} \sum_i \left(a^T\phi(\theta_i) + b - g_i\right)^2$, where i indexes the ith model in the pool. The next step is to assess the prediction quality. We consider two metrics for this. The first metric examines quality of predictions on unseen models. For that, we consider a held-out pool of m models, different from those used to estimate (a, b), and compute the value of ĝ on them via $\hat{g} = a^T\phi(\theta) + b$. In order to quantify the discrepancy between predicted gap ĝ and ground truth gap g we use the notion of coefficient of determination (R2) (Glantz et al., 1990):
$R^2 = 1 - \dfrac{\sum_{j=1}^{n}(\hat{g}_j - g_j)^2}{\sum_{j=1}^{n}\left(g_j - \frac{1}{n}\sum_{j=1}^{n} g_j\right)^2}$ (6)
R2 measures what fraction of data variance can be explained by the linear model5 (it ranges from 0 to 1 on training points but can be outside that range on unseen points). To be precise, we use k-fold validation to study how the predictor can perform on held out pool of trained deep networks. We use 90/10 split, fit the linear model with the training pool, and measure R2 on the held out pool. The performance is averaged over the 10 splits. Since R2 is now not measured on the training pool, it does not suffer from high data dimension and can be negative. In all of our experiments, we use k = 10. We provide a subset of residual plots and corresponding univariate F-Test for the experiments in the appendix (Sec. 8). The F-score also indicates how important each individual variable is. The second metric examines how well the model fits based on the provided training pool; it does not require a test pool. To characterize this, we use adjusted R̄2 (Glantz et al., 1990) defined as:
$\bar{R}^2 = 1 - (1 - R^2)\,\dfrac{n-1}{n-\dim(\theta)-1}$ . (7)
The R̄2 can be negative when the data is non-linear. Note that R̄2 is always smaller than R2. Intuitively, R̄2 penalizes the model if the number of features is high relative to the available data points. The closer R̄2 is to 1, the better the model fits. Using R̄2 is a simple yet effective method to test the fitness of linear model and is independent of the scale of the target, making it a more illustrative metric than residuals.
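Both the linear predictor and the two metrics reduce to ordinary least squares plus two ratios of sums of squares. The sketch below is our own minimal version under the assumptions stated in its docstrings (it is not the authors' evaluation pipeline, and omits the k-fold splitting, which can be layered on top by calling these functions per fold).

```python
import numpy as np

def fit_predictor(Theta, g, log_transform=True):
    """Least-squares fit of g ~ a^T phi(theta) + b over a pool of models.
    Theta: (n_models, dim_theta) total signatures; g: (n_models,) gaps."""
    X = np.log(Theta) if log_transform else Theta
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # bias column
    coef, *_ = np.linalg.lstsq(X, g, rcond=None)
    return coef

def r_squared(coef, Theta, g, log_transform=True):
    """Coefficient of determination (Eq. 6); can be negative on unseen pools."""
    X = np.log(Theta) if log_transform else Theta
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    g_hat = X @ coef
    ss_res = np.sum((g_hat - g) ** 2)
    ss_tot = np.sum((g - g.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(r2, n_models, dim_theta):
    """Adjusted R^2 (Eq. 7), penalizing feature count relative to pool size."""
    return 1.0 - (1.0 - r2) * (n_models - 1) / (n_models - dim_theta - 1)
```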
4 EXPERIMENTS
We tested our measure of generalization gap ĝ, along with baseline measures, on a number of deep networks and architectures: nine-layer convolutional networks on CIFAR-10 (10 with input layer), and 32-layer residual networks on both CIFAR-10 and CIFAR-100 datasets. The trained models and relevant TensorFlow (Abadi et al., 2016) code to compute margin distributions are released at https://github.com/google-research/google-research/tree/master/demogen
4.1 CONVOLUTIONAL NEURAL NETWORKS ON CIFAR-10
Using the CIFAR-10 dataset, we train 216 nine-layer convolutional networks with different settings of hyperparameters and training techniques. We apply weight decay and dropout with different strengths; we use networks with and without batch norm and data augmentation; we change the number of hidden units in the hidden layers. Finally, we also include training with and without corrupted labels, as introduced in Zhang et al. (2016); we use a fixed amount of 20% corruption of the true labels. The accuracy on the test set ranges from 60% to 90.5% and the generalization gap ranges from 1% to 35%. In standard settings, creating neural network models with small generalization gap is difficult; in order to create sufficiently diverse generalization behaviors, we limit some models’ capacities by large weight regularization which decreases generalization gap by lowering the training accuracy. All networks are trained by SGD with momentum. Further details are provided in the supplementary material (Sec. 6).
For each trained network, we compute the depth-agnostic signature of the normalized margin distribution (see Sec. 3). This results in a 20-dimensional signature vector. We estimate the parameters of the linear predictor (a, b) with the log transform φ(x) = log(x) and using the 20-dimensional signature vector θ. Fig. 2 (left) shows the resulting scatter plot of the predicted generalization gap ĝ and the true generalization gap g. As it can be seen, it is very close to being linear across the range of generalization gaps, and this is also supported by the R̄2 of the model, which is 0.96 (max is 1).
As a first baseline method, we compare against the work of Bartlett et al. (2017) which provides one of the best generalization bounds currently known for deep networks. This work also constructs a margin distribution for the network, but in a different way. To make a fair comparison, we extract the same signature θ from their margin distribution. Since their margin distribution can only be defined for the output layer, their θ is 5-dimensional for any network. The resulting fit is shown in Fig. 2(right). It is clearly a poorer fit than that of our signature, with a significantly lower R̄2 of 0.72.
For a fairer comparison, we also reduced our signature θ from 20 dimensions to the best performing 4 dimensions (even one dimension less than what we used for Bartlett’s) by dropping 16 components in our θ. This is shown in Fig. 2 (middle) and has a R̄2 of 0.89, which is poorer than our complete θ but still significantly higher than that of Bartlett et al. (2017). In addition, we considered two other baseline comparisons: Sokolic et al. (2016), where margin at input is defined as a function of the Jacobian of output (logits) with respect to input; and Elsayed et al. (2018) where the linearized approximation to margin is derived (for the same layers where we use our normalized margin approximation).
5A simple manipulation shows that the prediction residual $\sum_{j=1}^{m}(\hat{g}_j - g_j)^2 \propto 1 - R^2$, so $R^2$ can be interpreted as a scale invariant alternative to the residual.
To quantify the effect of the normalization, different layers, feature transformation etc., we conduct a number of ablation experiments with the following configurations:
1. linear/log: Use signature transform of φ(x) = x or φ(x) = log(x);
2. sl: Use signature from the single best layer (θ ∈ R5);
3. sf: Use only the single best statistic from the total signature for all the layers (θ ∈ R4, individual layer result can be found in Sec. 7);
4. moment: Use the first 5 moments of the normalized margin distribution as signature instead of quartile statistics, θ ∈ R20 (Sec. 3);
5. spectral: Use signature of spectrally normalized margins from Bartlett et al. (2017) (θ ∈ R5);
6. qrt: Use all the quartile statistics as total signature, θ ∈ R20 (Sec. 3);
7. best4: Use the 4 best statistics from the total signature (θ ∈ R4);
8. Jacobian: Use the Jacobian-based margin defined in Eq (39) of Sokolic et al. (2016) (θ ∈ R5);
9. LM: Use the large margin loss from Elsayed et al. (2018) at the same four layers where the statistics are measured (θ ∈ R4);
10. unnorm: No normalization.
In Table 1, we list the R̄2 from fitting models based on each of these scenarios. We see that both quartile and moment signatures perform similarly, lending support to our thesis that the margin distribution, rather than the smallest or largest margin, is of importance in the context of generalization.
4.2 RESIDUAL NETWORKS ON CIFAR-10
On the CIFAR-10 dataset, we train 216 convolutional networks with residual connections; these networks are 32 layers deep with standard ResNet 32 topology (He et al., 2016). Since it is difficult to train ResNet without activation normalization, we created generalization gap variation with batch
normalization (Ioffe & Szegedy, 2015) and group normalization (Wu & He, 2018). We further use different initial learning rates. The test accuracy ranges from 83% to 93.5% and the generalization gap from 6% to 13.5%. The residual networks were much deeper, so we only chose 4 layers for feature-length compatibility with the shallower convolutional networks. This design choice also facilitates ease of analysis and circumvents the dependency on the depth of the models. Table 1 shows the R̄2. Note that in the presence of residual connections that use convolution instead of identity, and identity blocks that span more than one convolutional layer, it is not immediately clear how to properly apply the bounds of Bartlett et al. (2017) (third from last row) without morphing the topology of the architecture and careful design of reference matrices. As such, we omit them for ResNet. Fig. 3 (left) shows the fit for the ResNet models, with R̄2 = 0.87. Fig. 3 (middle) and Fig. 3 (right) compare the log normalized density plots of a CIFAR-10 ResNet and a CIFAR-10 CNN. The plots show that the ResNet achieves a better margin distribution, correlated with greater test accuracy, even though it was trained without data augmentation.
4.3 RESNET ON CIFAR-100
On the CIFAR-100 dataset, we trained 324 ResNet-32 models with the same variation in hyperparameter settings as for the CIFAR-10 networks, with one additional initial learning rate. The test accuracy ranges from 12% to 73% and the generalization gap from 1% to 75%. Table 1 shows R̄2 for a number of ablation experiments and the full feature set. Fig. 4 (left) shows the fit of predicted and true generalization gaps over the networks (R̄2 = 0.97). Fig. 4 (middle) and Fig. 4 (right) compare a CIFAR-100 residual network and a CIFAR-10 residual network with the same architecture and hyperparameters. Under these settings, the CIFAR-100 network achieves 44% test accuracy, whereas CIFAR-10 achieves 61%. The resulting normalized margin density plots clearly reflect the better generalization achieved by CIFAR-10: the densities at all layers are wider and shifted to the right. Thus, the normalized margin distributions reflect the relative “difficulty” of a particular dataset for a given architecture.
5 DISCUSSION
We have presented a predictor for generalization gap based on margin distribution in deep networks and conducted extensive experiments to assess it. Our results show that our scheme achieves a high adjusted coefficient of determination (a linear regression predicts generalization gap accurately). Specifically, the predictor uses normalized margin distribution across multiple layers of the network. The best predictor uses quartiles of the distribution combined in multiplicative way (additive in log transform). Compared to the strong baseline of spectral complexity normalized output margin (Bartlett et al., 2017), our scheme exhibits much higher predictive power and can be applied to any feedforward network (including ResNets, unlike generalization bounds such as (Bartlett et al., 2017; Neyshabur et al., 2017b; Arora et al., 2018)). We also find that using hidden layers is crucial for the predictive power. Our findings could be a stepping stone for studying new generalization theories and
new loss functions with better generalization properties. We include the results on cross architecture and cross data comparison as well as some final thoughts in Appendix Sec. 9.
ACKNOWLEDGMENTS
We are thankful to Gamaleldin Elsayed (Google), Tomer Koren (Google), Sergey Ioffe (Google), Vighnesh Birodkar (Google), Shraman Ray Chaudhuri (Google), Kevin Regan (Google), Behnam Neyshabur (NYU), and Dylan Foster (Cornell), for discussions and helpful feedback.
6 APPENDIX: EXPERIMENTAL DETAILS
6.1 CNN + CIFAR-10
We use an architecture very similar to Network in Network (Lin et al. (2013)), but we remove all dropout and max pool from the network.
To create generalization gap in this model, we make the following modification to the base architecture:
1. Use channel sizes of 192, 288, and 384 to create different widths
2. Train with and without batch norm at all convolutional layers
3. Apply dropout at layers 3 and 6 with p = 0.0, 0.2, 0.5
4. Apply l2 regularization with λ = 0.0, 0.001, 0.005
5. Train with and without data augmentation with random cropping, flipping and shifting
6. Train each configuration twice
In total this gives us 3× 2× 3× 3× 2× 2 = 216 different network architectures. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128 and an initial learning rate of 0.01. All networks are trained for 380 epochs with 10× learning rate decay every 100 epochs.
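The 216-model count follows directly from the Cartesian product of the options above; a small sketch of the enumeration (variable names are illustrative, not the authors' configuration code):

```python
import itertools

widths       = [192, 288, 384]        # channel sizes
batch_norm   = [True, False]
dropout_p    = [0.0, 0.2, 0.5]
l2_lambda    = [0.0, 0.001, 0.005]
augmentation = [True, False]
repeats      = [1, 2]                 # each configuration trained twice

configs = list(itertools.product(widths, batch_norm, dropout_p,
                                 l2_lambda, augmentation, repeats))
assert len(configs) == 216
```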
6.2 RESNET 32 + CIFAR-10
For these experiments, we use the standard ResNet-32 architecture. We treat each downsampling as the marker of a stage, so there are in total 3 stages in the ResNet-32 architecture. To create generalization gap variation in this model, we make the following modifications to the architecture:
1. Use network widths that are 1×, 2×, 4× wider in number of channels
2. Train with batch norm or group norm (Wu & He, 2018)
3. Train with initial learning rate of 0.01, 0.001
4. Apply l2 regularization with λ = 0.0, 0.02, 0.002
5. Train with and without data augmentation with random cropping, flipping and shifting
6. Train each configuration 3 times
In total this gives us 3× 2× 2× 3× 2× 3 = 216 different network architectures. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128. All networks are trained for 380 epochs with 10× learning rate decay every 100 epochs.
6.3 RESNET 32 + CIFAR-100
For these experiments, we use the standard ResNet-32 architecture. We treat each downsampling as the marker of a stage, so there are in total 3 stages in the ResNet-32 architecture. To create generalization gap variation in this model, we make the following modifications to the architecture:
1. Use network widths that are 1×, 2×, 4× wider in number of channels
2. Train with batch norm or group norm (Wu & He, 2018)
3. Train with initial learning rate of 0.1, 0.01, 0.001
4. Apply l2 regularization with λ = 0.0, 0.02, 0.002
5. Train with and without data augmentation with random cropping, flipping and shifting
6. Train each configuration 3 times
In total this gives us 3× 2× 3× 3× 2× 3 = 324 different network architectures. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128. All networks are trained for 380 epochs with 10× learning rate decay every 100 epochs.
7 APPENDIX: MORE REGRESSION RESULTS
7.1 ANALYSIS WITH NEGATIVE MARGINS
The last two rows contain the results of including the negative margins and regressing against both gap (generalization gap) and acc (test accuracy). We see that when negative margins are included, it is in general easier to predict the accuracy of the models than the gap itself. For convenience, we have reproduced Table 1.
7.2 ANALYSIS FOR INDIVIDUAL LAYER’S MARGIN DISTRIBUTIONS
This is a comparison of the predictive power of each individual layer, using only the margin distribution at that layer. These results illustrate the importance of margins in the hidden layers.
8 APPENDIX: FURTHER ANALYSIS OF REGRESSION
8.1 CNN + CIFAR-10 + ALL QUARTILE SIGNATURE
8.2 RESNET 32 + CIFAR-10 + ALL QUARTILE SIGNATURE
8.3 RESNET 32 + CIFAR-100 + ALL QUARTILE SIGNATURE
9 APPENDIX: SOME OBSERVATIONS AND CONJECTURES
Everything here uses the full quartile description.
9.1 CROSS ARCHITECTURE COMPARISON
We perform regression analysis with both the base CNN and ResNet-32 on CIFAR-10. The resulting R̄2 = 0.91 and the k-fold R2 = 0.88. This suggests that the same coefficients work generally well across architectures provided they are trained on the same data. Somehow, the distributions at the 3 locations of the networks are comparable even though the depths are vastly different.
9.2 CROSS DATASET COMPARISON
We perform regression analysis with ResNet-32 on both CIFAR-10 and CIFAR-100. The resulting R̄2 = 0.96 and the k-fold R2 = 0.95. This suggests that the same coefficients work generally well across datasets for the same architecture.
9.3 CROSS EVERYTHING
We join all our experiment data; the resulting R̄2 = 0.93 and the k-fold R2 = 0.93. It is perhaps surprising that a single set of coefficients exists across both datasets and architectures.
9.4 IMPLICATIONS ON GENERALIZATION BOUNDS
We believe that the method developed here can be used as a complement to existing generalization bounds; more sophisticated engineering of the predictor may be used to verify what functional form a generalization bound should take, up to constant factors or exponents; it may also be helpful for developing generalization bounds tighter than the existing ones. | 1. What are the pros and cons of using geometric margin and layer-wise margin distribution in predicting generalization gaps?
2. Can the author provide theoretical verification or convincing intuition for the benefits of using geometric margin defined in the paper?
3. Why does normalization make sense beyond the simple scaling-free reason, and how does it relate to spectral complexity as a normalization factor in Bartlett et al. (2017)?
4. How does the middle layer margin help in predicting generalization gaps?
5. Why does a linear (linear log) relation exist between the statistic and generalization gap?
6. Is it fair to compare the results with Bartlett's work, given that their bounds suggest a different formula for the generalization gap?
7. Can the author provide an explanation for why using the extracted signature from margin distribution and a linear predictor might not make sense in certain cases?
8. How well can the model predict the generalization gap for a nine-layer CNN or a residual CNN?
9. Does the novelty of the paper suffice, considering that the margin definition comes from Elsayed et al. (2018), and a similar linear relationship has been shown in other works such as Bartlett et al. (2017) and Liao et al. (2018)? | Review | Review
The author(s) suggest using geometric margin and layer-wise margin distribution in [Elsayed et al. 2018] for predicting generalization gap.
pros,
a). The authors show large-scale experiments to support their argument.
cons,
a). No theoretical verification (nor convincing intuition) is provided, especially for the following questions,
i) what benefit can be acquired when using geometric margin defined in the paper.
ii) why does normalization make sense beyond the simple scaling-free reason. For example, spectral complexity as a normalization factor in [Bartlett et al. 2017] is proposed from the fact that the Lipschitz constant determines the complexity of network space.
iii) why does the middle layer margin help?
iv) why is there a linear (or log-linear) relation between the statistics and the generalization gap.
Further questions regarding the experiments,
i) I don't think your comparison with Bartlett's work is fair. Their bounds suggest the gap is approximately Prob(0<X<\gamma) + Const/\gamma for a chosen \gamma, where X is the normalized margin distribution. I think using the extracted signature from margin distribution and a linear predictor doesn't make sense here.
ii) If you do regression analysis on a five-layer CNN, can you get a good prediction on a nine-layer CNN (or even a residual CNN)?
Finally, I'm not sure the novelty is strong enough since the margin definition comes from [Elsayed et al. 2018] and the strong linear relationship has been shown in [Bartlett et al. 2017, Liao et al. 2018] though in different settings. |
ICLR | Title
Predicting the Generalization Gap in Deep Networks with Margin Distributions
Abstract
Recent research has demonstrated that deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how generalization gap can be predicted from training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of margin distribution, which are the distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of margin distribution rather than just the margin (closest distance to decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum). Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization.
1 INTRODUCTION
Generalization, the ability of a classifier to perform well on unseen examples, is a desideratum for progress towards real-world deployment of deep neural networks in domains such as autonomous cars and healthcare. Until recently, it was commonly believed that deep networks generalize well to unseen examples. This was based on empirical evidence about performance on held-out dataset. However, new research has started to question this assumption. Adversarial examples cause networks to misclassify even slightly perturbed images at very high rates (Goodfellow et al., 2014; Papernot et al., 2016). In addition, deep networks can overfit to arbitrarily corrupted data (Zhang et al., 2016), and they are sensitive to small geometric transformations (Azulay & Weiss, 2018; Engstrom et al., 2017). These results have led to the important question about how the generalization gap (difference between train and test accuracy) of a deep network can be predicted using the training data and network parameters. Since in all of the above cases, the training loss is usually very small, it is clear that existing losses such as cross-entropy cannot serve that purpose. It has also been shown (e.g. in Zhang et al. (2016)) that regularizers such as weight decay cannot solve this problem either.
Consequently, a number of recent works (Neyshabur et al., 2017b; Kawaguchi et al., 2017; Bartlett et al., 2017; Poggio et al., 2017; Arora et al., 2018) have started to address this question, proposing generalization bounds based on analyses of network complexity or noise stability properties. However, a thorough empirical assessment of these bounds in terms of how accurately they can predict the generalization gap across various practical settings is not yet available.
In this work, we propose a new quantity for predicting generalization gap of a feedforward neural network. Using the notion of margin in support vector machines (Vapnik, 1995) and extension to deep networks (Elsayed et al., 2018), we develop a measure that shows a strong correlation with generalization gap and significantly outperforms recently developed theoretical bounds on
∗Work done as part of the Google AI Residency. Data and relevant code are at https://github.com/google-research/google-research/tree/master/demogen
generalization2. This is empirically shown by studying a wide range of deep networks trained on the CIFAR-10 and CIFAR-100 datasets. The measure presented in this paper may be useful for a constructing new loss functions with better generalization. Besides improvement in the prediction of the generalization gap, our work is distinct from recently developed bounds and margin definitions in a number of ways:
1. These recently developed bounds are typically functions of weight norms (such as the spectral, Frobenius or various mixed norms). Consequently, they cannot capture variations in network topology that are not reflected in the weight norms, e.g. adding residual connections (He et al., 2016) without careful additional engineering based on the topology changes. Furthermore, some of the bounds require specific treatment for nonlinear activations. Our proposed measure can handle any feedforward deep network.
2. Although some of these bounds involve margin, the margin is only defined and measured at the output layer (Bartlett et al., 2017; Neyshabur et al., 2017b). For a deep network, however, margin can be defined at any layer (Elsayed et al., 2018). We show that measuring margin at a single layer does not suffice to capture generalization gap. We argue that it is crucial to use margin information across layers and show that this significantly improves generalization gap prediction.
3. The common definition of margin, as used in the recent bounds e.g. Neyshabur et al. (2017b), or as extended to deep networks, is based on the closest distance of the training points to the decision boundary. However, this notion is brittle and sensitive to outliers. In contrast, we adopt margin distribution (Garg et al., 2002; Langford & Shawe-Taylor, 2002; Zhang & Zhou, 2017; 2018) by looking at the entire distribution of distances. This is shown to have far better prediction power.
4. We argue that the direct extension of margin definition to deep networks (Elsayed et al., 2018), although allowing margin to be defined on all layers of the model, is unable to capture
2In fairness, the theoretical bounds we compare against were designed to be provable upper bounds rather than estimates with low expected error. Nevertheless, since recent developments on characterizing the generalization gap of deep networks are in form of upper bounds, they form a reasonable baseline.
generalization gap without proper normalization. We propose a simple normalization scheme that significantly boosts prediction accuracy.
2 RELATED WORK
The recent seminal work of Zhang et al. (2016) has brought into focus the question of how generalization can be measured from training data. They showed that deep networks can easily learn to fit randomly labeled data with extremely high accuracy, but with arbitrarily low generalization capability. This overfitting is not countered by deploying commonly used regularizers. The work of Bartlett et al. (2017) proposes a measure based on the ratio of two quantities: the margin distribution measured at the output layer of the network; and a spectral complexity measure related to the network’s Lipschitz constant. Their normalized margin distribution provides a strong indication of the complexity of the learning task, e.g. the distribution is skewed towards the origin (lower normalized margin) for training with random labels. Neyshabur et al. (2017b;a) also develop bounds based on the product of norms of the weights across layers. Arora et al. (2018) develop bounds based on noise stability properties of networks: more stability implies better generalization. Using these criteria, they are able to derive stronger generalization bounds than previous works.
The margin distribution (specifically, boosting of margins across the training set) has been shown to correspond to generalization properties in the literature on linear models (Schapire et al., 1998): they used this connection to explain the effectiveness of boosting and bagging techniques. Reyzin & Schapire (2006) showed that it was important to control the complexity of a classifier when measuring margin, which calls for some type of normalization. In the linear case (SVM), margin is naturally defined as a function of norm of the weights Vapnik (1995). In the case of deep networks, true margin is intractable. Recent work (Elsayed et al., 2018) proposed a linearization to approximate the margin, and defined the margin at any layer of the network. (Sokolic et al., 2016) provide another approximation to the margin based on the norm of the Jacobian with respect to the input layer. They show that maximizing their approximations to the margin leads to improved generalization. However, their analysis was restricted to margin at the input layer. Poggio et al. (2017) and Liao et al. (2018) propose a normalized cross-entropy measure that correlates well with test loss. Their proposed normalized loss trades off confidence of predictions with stability, which leads to better correlation with test accuracy, leading to a significant lowering of output margin.
3 PREDICTION OF GENERALIZATION GAP
In this section, we introduce our margin-based measure. We first explain the construction scheme for obtaining the margin distribution. We then squeeze the distributional information of the margin to a small number of statistics. Finally, we regress these statistics to the value of the generalization gap. We assess prediction quality by applying the learned regression coefficients to predict the generalization gap of unseen models. We will start with providing a motivation for using the margins at the hidden layers which is supported by our empirical findings. SVM owes a large part of its success to the kernel that allows for inner product in a higher and richer feature space. At its crux, the primal kernel SVM problem is separated into the feature extractor and the classifier on the extracted features. We can separate any feed forward network at any given hidden layer and treat the hidden representation as a feature map. From this view, the layers that precede this hidden layer can be treated as a learned feature extractor and then the layers that come after are naturally the classifier. If the margins at the input layers or the output layers play important roles in generalization of the classifier, it is a natural conjecture that the margins at these hidden representations are also important in generalization. In fact, if we ignore the optimization procedure and focus on a converged network, generalization theories developed on the input such as Lv et al. (2019) can be easily extended to the hidden layers or the extracted features.
3.1 MARGIN APPROXIMATION
First, we establish some notation. Consider a classification problem with n classes. We assume a classifier f consists of non-linear functions fi : X → R, for i = 1, . . . , n that generate a prediction score for classifying the input vector x ∈ X to class i. The predicted label is decided by the class with maximal score, i.e. i∗ = arg maxi fi(x). Define the decision boundary for each class pair (i, j)
as: D(i,j) , {x | fi(x) = fj(x)} (1)
Under this definition, the lp distance of a point x to the decision boundary D(i,j) can be expressed as the smallest displacement of the point that results in a score tie:
df,x,(i,j) , min δ ‖δ‖p s.t. fi(x+ δ) = fj(x+ δ) (2)
Unlike an SVM, computing the “exact” distance of a point to the decision boundary (Eq. 2) for a deep network is intractable3. In this work, we adopt the approximation scheme from Elsayed et al. (2018) to capture the distance of a point to the decision boundary. This a first-order Taylor approximation to the true distance Eq. 2. Formally, given an input x to a network, denote its representation at the lth layer (the layer activation vector) by xl. For the input layer, let l = 0 and thus x0 = x. Then for p = 2, the distance of the representation vector xl to the decision boundary for class pair (i, j) is given by the following approximation:
df,(i,j)(x l) =
fi(x l)− fj(xl)
‖∇xlfi(xl)−∇xlfj(xl)‖2 (3)
Here fi(xl) represents the output (logit) of the network logit i given xl. Note that this distance can be positive or negative, denoting whether the training sample is on the “correct” or “wrong” side of the decision boundary respectively. This distance is well defined for all (i, j) pairs, but in this work we assume that i always refers to the ground truth label and j refers to the second highest or highest class (if the point is misclassified). The training data x induces a distribution of distances at each layer l which, following earlier naming convention (Garg et al., 2002; Langford & Shawe-Taylor, 2002), we refer to as margin distribution (at layer l). For margin distribution, we only consider distances with positive sign (we ignore all misclassified training points). Such design choice facilitates our empirical analysis when we transform our features (e.g. log transform); further, it has also been suggested that it may be possible to obtain a better generalization bound by only considering the correct examples when the classifier classifies a significant proportion of the training examples correctly, which is usually the case for neural networks (Bartlett, 1998). For completeness, the results with negative margins are included in appendix Sec. 7.
A problem with plain distances and their associated distribution is that they can be trivially boosted without any significant change in the way classifier separates the classes. For example, consider multiplying weights at a layer by a constant and dividing weights in the following layer by the same constant. In a ReLU network, due to positive homogeneity property (Liao et al., 2018), this operation does not affect how the network classifies a point, but it changes the distances to the decision boundary4. To offset the scaling effect, we normalize the margin distribution. Consider margin distribution at some layer l, and let xlk be the representation vector for training sample k. We compute the variance of each coordinate of {xlk} separately, and then sum these individual variances. This quantity is called total variation of xl. The square root of this quantity relates to the scale of the distribution. That is, if xl is scaled by a factor, so is the square root of the total variation. Thus, by dividing distances by the square root of total variation, we can construct a margin distribution invariant to scaling. More concretely, the total variation is computed as:
ν(xl) = tr ( 1 n n∑ k=1 (xlk − x̄l)(xlk − x̄l)T ) , x̄l = 1 n n∑ k=1 xlk , (4)
i.e. the trace of the empirical covariance matrix of activations. Using the total variation, the normalized margin is specified by:
d̂f,(i,j)(x l k) =
df,(i,j)(x l k)√
ν(xl) (5)
While the quantity is relatively primitive and easy to compute, Fig. 1 (top) shows that the normalizedmargin distributions based on Eq. 5 have the desirable effect of becoming heavier tailed and shifting
3This is because computing the distance of a point to a nonlinear surface is intractable. This is different from SVM where the surface is linear and distance of a point to a hyperplane admits a closed form expression.
4For example, suppose the constant c is greater that one. Then, multiplying the weights of a layer by c magnifies distances computed at the layer by a factor of c.
to the right (increasing margin) as generalization gap decreases. We find that this effect holds across a range of networks trained with different hyper-parameters.
3.2 SUMMARIZING THE MARGIN DISTRIBUTION
Instead of working directly with the (normalized) margin distribution, it is easier to analyze a compact signature of that. The moments of a distribution are a natural criterion for this purpose. Perhaps the most standard way of doing this is computing the empirical moments from the samples and then take the nth root of the nth moment. In our experiments, we used the first five moments. However, it is a well-known phenomenon that the estimation of higher order moments based on samples can be unreliable. Therefore, we also consider an alternate way to construct the distribution’s signature. Given a set of distances D = {d̂m}nm=1, which constitute the margin distribution. We use the median Q2, first quartile Q1 and third quartile Q3 of the normalized margin distribution, along with the two fences that indicate variability outside the upper and lower quartiles. There are many variations for fences, but in this work, with IQR = Q3 −Q1, we define the upper fence to be max({d̂m : d̂m ∈ D ∧ d̂m ≤ Q3 + 1.5IQR}) and the lower fence to be min({d̂m : d̂m ∈ D ∧ d̂m ≥ Q1 − 1.5IQR}) (McGill et al., 1978). These 5 statistics form the quartile description that summarizes the normalized margin distribution at a specific layer, as shown in the box plots of Fig. 1. We will later see that both signature representations are able to predict the generalization gap, with the second signature working slightly better.
A number of prior works such as Bartlett et al. (2017), Neyshabur et al. (2017b), Liu et al. (2016), Sun et al. (2015), Sokolic et al. (2016), and Liang et al. (2017) have focused on analyzing or maximizing the margin at either the input or the output layer of a deep network. Since a deep network has many hidden layers with evolving representations, it is not immediately clear which of the layer margins is of importance for improving generalization. Our experiments reveal that margin distribution from all of the layers of the network contribute to prediction of generalization gap. This is also clear from Fig. 1 (top): comparing the input layer (layer 0) margin distributions between the left and right plots, the input layer distribution shifts slightly left, but the other layer distributions shift the other way. For example, if we use quartile signature, we have 5L components in this vector, where L is the total number of layers in the network. We incorporate dependence on all layers simply by concatenating margin signatures of all layers into a single combined vector θ that we refer to as total signature. Empirically, we found constructing the total signature based on four evenly-spaced layers (input, and 3 hidden layers) sufficiently captures the variation in the distributions and generalization gap, and also makes the signature agnostic to the depth of the network.
3.3 EVALUATION METRICS
Our goal is to predict the generalization gap, i.e. the difference between training and test accuracy at the end of training, based on total signature θ of a trained model. We use the simplest prediction model, i.e. a linear form ĝ = aTφ(θ) + b, where a ∈ Rdim(θ) and b ∈ R are parameters of the predictor, and φ : R→ R is a function applied element-wise to θ. Specifically, we will explore two choices of φ: the identity φ(x) = x and entry-wise log transform φ(x) = log(x), which correspond to additive and multiplicative combination of margin statistics respectively. We do not claim this model is the true relation, but rather it is a simple model for prediction; and our results suggest that it is a surprisingly good approximation.
In order to estimate the predictor parameters a, b, we generate a pool of n pretrained models (covering different datasets, architectures, regularization schemes, etc., as explained in Sec. 4), each of which gives one instance of the pair (θ, g) (g being the generalization gap for that model). We then find a, b by minimizing the mean squared error: $(a^*, b^*) = \arg\min_{a,b} \sum_i \big(a^T\phi(\theta_i) + b - g_i\big)^2$, where i indexes the ith model in the pool. The next step is to assess the prediction quality. We consider two metrics for this. The first metric examines the quality of predictions on unseen models. For that, we consider a held-out pool of m models, different from those used to estimate (a, b), and compute the value of ĝ on them via ĝ = a^Tφ(θ) + b. In order to quantify the discrepancy between the predicted gap ĝ and the ground-truth gap g, we use the notion of coefficient of determination (R²) (Glantz et al., 1990):
$$R^2 = 1 - \frac{\sum_{j=1}^{n}(\hat{g}_j - g_j)^2}{\sum_{j=1}^{n}\big(g_j - \frac{1}{n}\sum_{k=1}^{n} g_k\big)^2} \qquad (6)$$
R2 measures what fraction of data variance can be explained by the linear model5 (it ranges from 0 to 1 on training points but can be outside that range on unseen points). To be precise, we use k-fold validation to study how the predictor can perform on held out pool of trained deep networks. We use 90/10 split, fit the linear model with the training pool, and measure R2 on the held out pool. The performance is averaged over the 10 splits. Since R2 is now not measured on the training pool, it does not suffer from high data dimension and can be negative. In all of our experiments, we use k = 10. We provide a subset of residual plots and corresponding univariate F-Test for the experiments in the appendix (Sec. 8). The F-score also indicates how important each individual variable is. The second metric examines how well the model fits based on the provided training pool; it does not require a test pool. To characterize this, we use adjusted R̄2 (Glantz et al., 1990) defined as:
$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - \dim(\theta) - 1} \qquad (7)$$
The R̄2 can be negative when the data is non-linear. Note that R̄2 is always smaller than R2. Intuitively, R̄2 penalizes the model if the number of features is high relative to the available data points. The closer R̄2 is to 1, the better the model fits. Using R̄2 is a simple yet effective method to test the fitness of linear model and is independent of the scale of the target, making it a more illustrative metric than residuals.
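As an illustration of the fitting and evaluation procedure above, the sketch below performs an ordinary least-squares fit of the linear predictor on log-transformed signatures and computes R² and the adjusted R̄²; the synthetic pool and variable names are placeholders, not the actual experimental data.

```python
import numpy as np

def fit_predictor(theta, g):
    """Least-squares fit of g ~ a^T log(theta) + b over a pool of models."""
    X = np.hstack([np.log(theta), np.ones((theta.shape[0], 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, g, rcond=None)
    return coef  # last entry is b, the rest is a

def r_squared(g_true, g_pred):
    ss_res = np.sum((g_pred - g_true) ** 2)
    ss_tot = np.sum((g_true - g_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(r2, n, dim_theta):
    return 1.0 - (1.0 - r2) * (n - 1) / (n - dim_theta - 1)

# Synthetic pool of 200 models with 20-dimensional positive signatures.
rng = np.random.default_rng(0)
theta = rng.uniform(0.5, 3.0, size=(200, 20))
g = (np.log(theta) @ rng.normal(size=20)) * 0.01 + 0.1 + rng.normal(scale=0.01, size=200)

coef = fit_predictor(theta, g)
g_hat = np.hstack([np.log(theta), np.ones((200, 1))]) @ coef
r2 = r_squared(g, g_hat)
r2_adj = adjusted_r_squared(r2, n=200, dim_theta=20)
```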
4 EXPERIMENTS
We tested our measure of generalization gap ĝ, along with baseline measures, on a number of deep networks and architectures: nine-layer convolutional networks on CIFAR-10 (10 with input layer), and 32-layer residual networks on both the CIFAR-10 and CIFAR-100 datasets. The trained models and relevant TensorFlow (Abadi et al., 2016) code to compute margin distributions are released at https://github.com/google-research/google-research/tree/master/demogen
4.1 CONVOLUTIONAL NEURAL NETWORKS ON CIFAR-10
Using the CIFAR-10 dataset, we train 216 nine-layer convolutional networks with different settings of hyperparameters and training techniques. We apply weight decay and dropout with different strengths; we use networks with and without batch norm and data augmentation; we change the number of hidden units in the hidden layers. Finally, we also include training with and without corrupted labels, as introduced in Zhang et al. (2016); we use a fixed amount of 20% corruption of the true labels. The accuracy on the test set ranges from 60% to 90.5% and the generalization gap ranges from 1% to 35%. In standard settings, creating neural network models with small generalization gap is difficult; in order to create sufficiently diverse generalization behaviors, we limit some models’ capacities by large weight regularization which decreases generalization gap by lowering the training accuracy. All networks are trained by SGD with momentum. Further details are provided in the supplementary material (Sec. 6).
For each trained network, we compute the depth-agnostic signature of the normalized margin distribution (see Sec. 3). This results in a 20-dimensional signature vector. We estimate the parameters of the linear predictor (a, b) with the log transform φ(x) = log(x) and using the 20-dimensional signature vector θ. Fig. 2 (left) shows the resulting scatter plot of the predicted generalization gap ĝ and the true generalization gap g. As it can be seen, it is very close to being linear across the range of generalization gaps, and this is also supported by the R̄2 of the model, which is 0.96 (max is 1).
As a first baseline method, we compare against the work of Bartlett et al. (2017) which provides one of the best generalization bounds currently known for deep networks. This work also constructs a margin distribution for the network, but in a different way. To make a fair comparison, we extract the same signature θ from their margin distribution. Since their margin distribution can only be defined for the output layer, their θ is 5-dimensional for any network. The resulting fit is shown in Fig. 2(right). It is clearly a poorer fit than that of our signature, with a significantly lower R̄2 of 0.72.
For a fairer comparison, we also reduced our signature θ from 20 dimensions to the best performing 4 dimensions (even one dimension fewer than what we used for Bartlett's) by dropping 16 components of our θ. This is shown in Fig. 2 (middle) and has an R̄2 of 0.89, which is poorer than our complete θ but still significantly higher than that of Bartlett et al. (2017). In addition, we considered two other baseline comparisons: Sokolic et al. (2016), where the margin at the input is defined as a function of the Jacobian of the output (logits) with respect to the input; and Elsayed et al. (2018), where a linearized approximation to the margin is derived (for the same layers where we use our normalized margin approximation).
Footnote 5: A simple manipulation shows that the prediction residual $\sum_{j=1}^{m}(\hat{g}_j - g_j)^2 \propto 1 - R^2$, so $R^2$ can be interpreted as a scale-invariant alternative to the residual.
To quantify the effect of the normalization, different layers, feature transformation etc., we conduct a number of ablation experiments with the following configuration: 1. linear/log: Use signature transform of φ(x) = x or φ(x) = log(x); 2. sl: Use signature from the single best layer (θ ∈ R5); 3. sf: Use only the single best statistic from the total signature for all the layers (θ ∈ R4, individual layer result can be found in Sec. 7); 4. moment: Use the first 5 moments of the normalized margin distribution as signature instead of quartile statistics θ ∈ R20 (Sec. 3); 5. spectral: Use signature of spectrally normalized margins from Bartlett et al. (2017) (θ ∈ R5); 6. qrt: Use all the quartile statistics as total signature θ ∈ R20 (Sec. 3); 7. best4: Use the 4 best statistics from the total signature (θ ∈ R4); 8. Jacobian: Use the Jacobian-based margin defined in Eq (39) of Sokolic et al. (2016) (θ ∈ R5); 9. LM: Use the large margin loss from Elsayed et al. (2018) at the same four layers where the statistics are measured (θ ∈ R4); 10. unnorm indicates no normalization. In Table 1, we list the R̄2 from fitting models based on each of these scenarios. We see that, both quartile and moment signatures perform similarly, lending support to our thesis that the margin distribution, rather than the smallest or largest margin, is of importance in the context of generalization.
4.2 RESIDUAL NETWORKS ON CIFAR-10
On the CIFAR-10 dataset, we train 216 convolutional networks with residual connections; these networks are 32 layers deep with the standard ResNet 32 topology (He et al., 2016). Since it is difficult to train ResNet without activation normalization, we created generalization gap variation with batch normalization (Ioffe & Szegedy, 2015) and group normalization (Wu & He, 2018). We further use different initial learning rates. The accuracy on the test set ranges from 83% to 93.5% and the generalization gap from 6% to 13.5%. The residual networks are much deeper, and so we only chose 4 layers for feature-length compatibility with the shallower convolutional networks. This design choice also facilitates ease of analysis and circumvents the dependency on the depth of the models. Table 1 shows the R̄2. Note that in the presence of residual connections that use convolution instead of identity, and identity blocks that span more than one convolutional layer, it is not immediately clear how to properly apply the bounds of Bartlett et al. (2017) (third from last row) without morphing the topology of the architecture and careful design of reference matrices. As such, we omit them for ResNet. Fig. 3 (left) shows the fit for the ResNet models, with R̄2 = 0.87. Fig. 3 (middle) and Fig. 3 (right) compare the log normalized density plots of a CIFAR-10 ResNet and a CIFAR-10 CNN. The plots show that the ResNet achieves a better margin distribution, correlated with greater test accuracy, even though it was trained without data augmentation.
4.3 RESNET ON CIFAR-100
On the CIFAR-100 dataset, we trained 324 ResNet-32 models with the same variation in hyperparameter settings as for the CIFAR-10 networks, with one additional initial learning rate. The accuracy on the test set ranges from 12% to 73% and the generalization gap ranges from 1% to 75%. Table 1 shows R̄2 for a number of ablation experiments and the full feature set. Fig. 4 (left) shows the fit of predicted and true generalization gaps over the networks (R̄2 = 0.97). Fig. 4 (middle) and Fig. 4 (right) compare a CIFAR-100 residual network and a CIFAR-10 residual network with the same architecture and hyperparameters. Under these settings, the CIFAR-100 network achieves 44% test accuracy, whereas the CIFAR-10 network achieves 61%. The resulting normalized margin density plots clearly reflect the better generalization achieved by CIFAR-10: the densities at all layers are wider and shifted to the right. Thus, the normalized margin distributions reflect the relative "difficulty" of a particular dataset for a given architecture.
5 DISCUSSION
We have presented a predictor for generalization gap based on margin distribution in deep networks and conducted extensive experiments to assess it. Our results show that our scheme achieves a high adjusted coefficient of determination (a linear regression predicts generalization gap accurately). Specifically, the predictor uses normalized margin distribution across multiple layers of the network. The best predictor uses quartiles of the distribution combined in multiplicative way (additive in log transform). Compared to the strong baseline of spectral complexity normalized output margin (Bartlett et al., 2017), our scheme exhibits much higher predictive power and can be applied to any feedforward network (including ResNets, unlike generalization bounds such as (Bartlett et al., 2017; Neyshabur et al., 2017b; Arora et al., 2018)). We also find that using hidden layers is crucial for the predictive power. Our findings could be a stepping stone for studying new generalization theories and
new loss functions with better generalization properties. We include the results on cross architecture and cross data comparison as well as some final thoughts in Appendix Sec. 9.
ACKNOWLEDGMENTS
We are thankful to Gamaleldin Elsayed (Google), Tomer Koren (Google), Sergey Ioffe (Google), Vighnesh Birodkar (Google), Shraman Ray Chaudhuri (Google), Kevin Regan (Google), Behnam Neyshabur (NYU), and Dylan Foster (Cornell), for discussions and helpful feedbacks.
6 APPENDIX: EXPERIMENTAL DETAILS
6.1 CNN + CIFAR-10
We use an architecture very similar to Network in Network (Lin et al., 2013), but we remove all dropout and max pooling from the network.
To create generalization gap in this model, we make the following modification to the base architecture:
1. Use channel sizes of 192, 288, and 384 to create different widths
2. Train with and without batch norm at all convolutional layers
3. Apply dropout at layer 3 and 6 with p = 0.0, 0.2, 0.5
4. Apply l2 regularization with λ = 0.0, 0.001, 0.005
5. Train with and without data augmentation with random cropping, flipping and shifting
6. Train each configuration twice
In total this gives us 3× 2× 3× 3× 2× 2 = 216 different network architectures. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128 and an initial learning rate of 0.01. All networks are trained for 380 epochs with 10× learning rate decay every 100 epochs.
6.2 RESNET 32 + CIFAR-10
For these experiments, we use the standard ResNet 32 architecture. We consider each downsampling as the marker of a stage, so there are in total 3 stages in the ResNet 32 architecture. To create generalization gap in this model, we make the following modifications to the architecture:
1. Use network widths that are 1×, 2×, 4× wider in number of channels.
2. Train with batch norm or group norm (Wu & He, 2018)
3. Train with initial learning rate of 0.01, 0.001
4. Apply l2 regularization with λ = 0.0, 0.02, 0.002
5. Train with and without data augmentation with random cropping, flipping and shifting
6. Train each configuration 3 times
In total this gives us 3× 2× 2× 3× 2× 3 = 216 different network architectures. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128. All networks are trained for 380 epochs with 10× learning rate decay every 100 epochs.
6.3 RESNET 32 + CIFAR-100
For these experiments, we use the standard ResNet 32 architecture. We consider each downsampling as the marker of a stage, so there are in total 3 stages in the ResNet 32 architecture. To create generalization gap in this model, we make the following modifications to the architecture:
1. Use network widths that are 1×, 2×, 4× wider in number of channels.
2. Train with batch norm or group norm (Wu & He, 2018)
3. Train with initial learning rate of 0.1, 0.01, 0.001
4. Apply l2 regularization with λ = 0.0, 0.02, 0.002
5. Train with and without data augmentation with random cropping, flipping and shifting
6. Train each configuration 3 times
In total this gives us 3× 2× 3× 3× 2× 3 = 324 different network architectures. The models are trained with SGD with momentum (α = 0.9) at a minibatch size of 128. All networks are trained for 380 epochs with 10× learning rate decay every 100 epochs.
7 APPENDIX: MORE REGRESSION RESULTS
7.1 ANALYSIS WITH NEGATIVE MARGINS
The last two rows contain the results of including the negative margins, regressed against both gap (generalization gap) and acc (test accuracy). We see that when negative margins are included, it is in general easier to predict the accuracy of the models than the gap itself. For convenience, we have reproduced Table 1.
7.2 ANALYSIS FOR INDIVIDUAL LAYER’S MARGIN DISTRIBUTIONS
This is a comparison of different individual layers' predictive power, using only the margin distribution at that layer. These results illustrate the importance of margins in the hidden layers.
8 APPENDIX: FURTHER ANALYSIS OF REGRESSION
8.1 CNN + CIFAR-10 + ALL QUARTILE SIGNATURE
8.2 RESNET 32 + CIFAR-10 + ALL QUARTILE SIGNATURE
8.3 RESNET 32 + CIFAR-100 + ALL QUARTILE SIGNATURE
9 APPENDIX: SOME OBSERVATIONS AND CONJECTURES
Everything here uses the full quartile description.
9.1 CROSS ARCHITECTURE COMPARISON
We perform regression analysis with both the base CNN and ResNet32 on CIFAR-10. The resulting R̄2 = 0.91 and the k-fold R2 = 0.88. This suggests that the same coefficients work generally well across architectures provided they are trained on the same data. Somehow, the distributions at the 3 locations of the networks are comparable even though the depths are vastly different.
9.2 CROSS DATASET COMPARISON
We perform regression analysis with ResNet32 on both CIFAR-10 and CIFAR-100. The resulting R̄2 = 0.96 and the k-fold R2 = 0.95. This suggests that the same coefficients work generally well across datasets for the same architecture.
9.3 CROSS EVERYTHING
We join all our experiment data; the resulting R̄2 = 0.93 and the k-fold R2 = 0.93. It is perhaps surprising that a single set of coefficients exists across both datasets and architectures.
9.4 IMPLICATIONS ON GENERALIZATION BOUNDS
We believe that the method developed here can be used as a complement to existing generalization bounds; more sophisticated engineering of the predictor may be used to verify what functional form a generalization bound should take, up to constant factors or exponents; it may also be helpful for developing generalization bounds tighter than the existing ones. | 1. How does the reviewer assess the contribution and novelty of the paper's proposal?
2. What are the strengths of the paper regarding its empirical evidence and connections to generalization gaps?
3. What are the weaknesses of the paper regarding its notation, clarity, and assumptions?
4. How does the reviewer evaluate the significance of the proposed margin statistics and their prescriptive insights towards understanding generalization in deep neural networks? | Review | Review
This paper does not even try to propose yet another "vacuous" generalization bound, but instead empirically and convincingly shows an interesting connection between the proposed margin statistics and the generalization gap, which could well be used to provide some "prescriptive" insights (per Sanjeev Arora) towards understanding generalization in deep neural nets.
I have no major complaints but for a few questions regarding clarifications,
1. From Eq.(5), such distances are defined for only one out of the many possible pairs of labels. So when forming the so-called "margin signature", how exactly do you compose it from all such pair-wise distances? Do you pool all the distances together before computing the statistics, or do you aggregate individual statistics from pair-wise distances? And how do you select which pairs to include or exclude? Are you assuming "i" is always the ground-truth label class for $x_k$ here?
2. In Eq.(3), the way you define the distance (that flipping i and j would change the sign of the distance) is implying that {i, j} should not be viewed as an unordered pair, in which case a better notation might be (i, j) (i.e. replacing sets "{}" with tuples "()" to signal that order matters).
And why do you "only consider distances with positive sign"? I can understand doing this when neither i nor j corresponds to the ground-truth label of x, because you really can't tell which score should be higher. But when i happens to be the ground-truth label, wouldn't a positive distance and a negative distance be meaningfully different, and therefore wouldn't it only be beneficial to include both of them in the margin samples?
And a minor typo: In Eq.(4), $\bar{x}_k$ should have been $\bar{x}^l$? |
ICLR | Title
Modeling Variable Space with Residual Tensor Networks for Multivariate Time Series
Abstract
Multivariate time series involve a series of valuable applications in the real world, the basic premise of which is that multiple variables are interdependent. However, the relationship between variables in the latent space is dynamic and complex, and as the time window increases, the size of the space also increases exponentially. For fully exploiting the dependencies in the variable space, we propose Modeling Variable Space with Residual Tensor Networks (MVSRTN) for multivariate time series. In this framework, we derive the mathematical representation of the variable space, and then use a tensor network based on the idea of low-rank approximation to model the variable space. The tensor components are shared to ensure the translation invariance of the network. In order to improve the ability to model long-term sequences, we propose an N-order residual connection approach and couple it to the space-approximated tensor network. Moreover, the series-variable encoder is designed to improve the quality of the variable space, and we use the skip-connection layer to achieve the dissemination of information such as scale. Experimental results verify the effectiveness of our proposed method on four multivariate time series forecasting benchmark datasets.
1 INTRODUCTION
Multivariate time series are ubiquitous in the real world, such as energy collection and consumption (Rangapuram et al., 2018), traffic occupancy (Lai et al., 2018), financial market trends (Shih et al., 2019), and aerospace data (Zhao et al., 2020). They mainly consist of sensor data sampled on multiple variables at different timestamps. In order to accurately forecast future time series, it is necessary to learn the laws and the dynamic interaction information between variables that exist in the series history (Wu et al., 2020; Xu et al., 2020; Cheng et al., 2020).
For illustrating the dependencies in multivariate time series, we focus on examples through a case study. In Figure 1(a), a certain variable itself has periodic laws, and the dependency relationship in this respect can be well captured by modeling the short-term and long-term patterns of the sequence. As shown in Figure 1(b), there are dynamic dependency relations between the variables. The "arrow" indicates the trend of data changes. In 2011, the "Switzerland" variable changed significantly earlier than the others; in 2013, the trends of the three variables occurred almost simultaneously.
In the literature, there have been some approaches to capture interactions in series history from different perspectives. Research utilizing traditional deep learning components such as CNNs, RNNs, and attention mechanisms is more inclined to learn long-term and short-term sequence states, and fails to make full use of the entanglement between variables in the latent space (Lai et al., 2018; Shih et al., 2019; Cheng et al., 2020). New mathematical tools such as graph neural networks and tensor calculation methods have inspired research on the spatial relations in multivariate time series. Wu et al. (2020) use a graph neural network to extract one-way relationships between variable pairs in an end-to-end framework. The TLASM (Xu et al., 2020) adopts tensor calculation to model the trend in the multivariate time series, and combines the core tensor components into the LSTM.
Nevertheless, most of the above-mentioned schemes for explicitly modeling spatio-temporal dependencies tend to build low-dimensional variable spaces, but this is still incomplete for the variable space shown in Figure 1(c). In other words, in the high-dimensional variable space evolved over time, the ideal state for multivariate time series is that the model can capture the effective relationship between variables as fully as possible. To address the above challenges, we first define
the high-dimensional variable space mathematically through the tensor product operator. Then, the proposed MVSRTN model models the variable space of multivariate time series.
In the proposed scheme, the N-Order Residual Tensor Network is the core component. The tensor network is based on the idea of low-rank approximation, which refers to decomposing and truncating the extremely large dependent space through tensor decomposition methods (Ma et al., 2019; Zhang et al., 2019) within a certain error. For modeling the exponential variable space, we first adopt the Tensor-Train (Oseledets, 2011) algorithm, which satisfies the demands of processing different timestamps, to decompose the variable space representation, and then obtain the space-approximated tensor network by tensor contraction. However, for long-term sequences, the continuous multiplication in the tensor contraction calculation would result in a series of gradient problems. Inspired by ordinary differential equations, we extend and introduce an N-order Euler discretization method, of which the first-order approach can be thought of as a residual connection. Finally, the space-approximated tensor network and the proposed N-order residual connection are coupled to model the effective interdependence in the variable space.
In addition, the Series-Variable Encoder aims at improving the quality of the variable space by reintegrating the time series. In this component, the combination of a 1D CNN and self-attention mechanisms at the series level and variable level effectively enriches the dependency information mapped to the latent space. For making MVSRTN more robust, we use the linear Skip-Connection Layer to directly propagate the input scale and coding scale information to the Output Layer. Our contributions are three-fold:
• We represent the high-dimensional variable space, and through theoretical analysis, we demonstrate the space-approximated tensor network has the ability to model the latent variable space.
• We propose the MVSRTN. In this architecture, a designed encoder combines long-term and short-term interactive information to optimize the variable space, a tensor network coupled by N-order residual connection captures the dynamically changing dependencies, and an additional layer propagates the scale features.
• We perform extensive experiments on four benchmark datasets and the results demonstrate the effectiveness of MVSRTN in evaluation metrics.
2 RELATED WORK
Multivariate Time Series In the real world, a large number of multivariate time series tasks are derived from the fields of finance, economy, and transportation. The purpose of this type of task is to capture or predict the trend of a set of variables that change dynamically over time. In the research history of multivariate time series, researchers have reached a basic consensus that multiple variables are interdependent. After a period of development, statistical machine learning models such as Vector Auto-Regressive (Box et al., 2015), Gaussian Process (Frigola, 2015) algorithm can linearly model multivariate time series data, but the complexity of the model expands dramatically with the increase of time or variables. In recent years, with the rise of deep learning, neural networks implicitly capture the dependence of data in a non-linear mode. LSTNet (Lai et al., 2018) uses CNN and RNN to extract the local-term dependent mode of the time series, and extracts the long-term
mode by a recurrent-skip layer. Based on the attention mechanism and the LSTNet model, the TPA-LSTM (Shih et al., 2019) model can capture long-term dependencies across multiple time steps. However, algorithms based on neural networks only explicitly model the interaction in the time dimension, and fail to fully analyze the relationships in the latent variable space. The MTGNN (Wu et al., 2020) model based on graph neural network technology explicitly extracts the relationship between variables and achieves the SOTA effect. However, only the one-way relationship between variable pairs is processed, which is not enough for the dynamic and complex variable space. Therefore, we consider using the idea of approximation to model the variable space.
Tensor Network for Dependent Space The tensor network is a powerful mathematical modeling framework originated from many-body physics, which learns the effective associations in the exponential Hilbert space through the idea of low-rank approximation (Stoudenmire & Schwab, 2016). Based on the training approach of deep learning, Tensor networks just need to approximate the dependent space by training the decomposed parameter space. In the field of natural language, the TSLM (Zhang et al., 2019) model proves that the theoretical model obtained by tensor decomposition of the high-dimensional semantic space has a stronger ability to capture semantic dependence than the RNNs network. The u-MPS (Zauner-Stauber et al., 2018) tensor network based on Tensor-Train decomposition (Oseledets, 2011) is also effective for probabilistic sequence modeling. Tensor networks have natural advantages in the field of modeling high-dimensional dependent spaces (Miller et al., 2021). However, due to the gradient problem, the existing tensor network modeling schemes are still difficult to deal with real-world sequence data with long periods.
Euler Discretization and Residual Connection The residual connection method can solve the problem of the gradient disappearance of the deep network (He et al., 2016). When the gradient is saturated, it can maintain the stability of the network according to the principle of identity mapping. Based on the theory of differential dynamics, the residual connection can be considered as a discretization application of the first-order Euler solver (Chen et al., 2018). Higher-order solvers (for example, the second-order Runge-Kutta) reduce the approximation error caused by the identity mapping (Zhu et al., 2019; Li et al., 2021), and also have better ability to process continuously sampled data (Demeester, 2020).
3 PROBLEM STATEMENT
In this paper, we denote the multivariate time series task as follows. Formally, given the time series signal of T points of time X = {y1, y2, · · · , yT}>, where yt = {vt,1, vt,2, · · · , vt,D}> ∈ RD, T is the number of time steps and D is the number of variables. We leverage X ∈ RT×D as the time series history to forecast the series at the next time stamp yT+h, where h ≥ 0 is the desirable horizon ahead of the current time stamp. It is worth mentioning that the range of the prediction task h varies on different tasks.
The main purpose of this paper is to model the effective variable dependency in latent space. Hence, we formulate the variable space as shown in Definition 1. Definition 1. The tensor product operator is adopted to fully interact with the variable series yt indexed by different timestamps. Taking y1 and y2 as an example, the interactive function is:
$$\mathcal{T}_{y_1,y_2} = y_1 \otimes y_2 = \begin{bmatrix} v_{1,1}v_{2,1} & \cdots & v_{1,D}v_{2,1} \\ \vdots & \ddots & \vdots \\ v_{1,1}v_{2,D} & \cdots & v_{1,D}v_{2,D} \end{bmatrix} \qquad (1)$$
where $\mathcal{T}_{y_1,y_2} \in \mathbb{R}^{D \times D}$, and we define the whole variable space $\mathcal{T} \in \mathbb{R}^{D^T}$ as:
$$\mathcal{T}_{y_1,\cdots,y_T} = y_1 \otimes y_2 \otimes \cdots \otimes y_T \qquad (2)$$
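As a concrete illustration of Definition 1, the variable space can be formed by repeated tensor (outer) products in numpy; this is only feasible for tiny T and D, since the space has D^T entries, which is exactly what motivates the approximation in Section 4.2. The sizes below are toy values.

```python
import numpy as np

D, T = 3, 4                          # toy sizes; the true space has D**T entries
ys = [np.random.randn(D) for _ in range(T)]

# T_{y1,...,yT} = y1 (x) y2 (x) ... (x) yT : a tensor with T indices of size D.
space = ys[0]
for y in ys[1:]:
    space = np.tensordot(space, y, axes=0)   # tensor (outer) product

print(space.shape)                   # (D,) * T, i.e. D**T entries in total
```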
In the model architecture, we will detail how to model the proposed variable space.
4 MODEL ARCHITECTURE
In this section, we introduce the proposed MVSRTN architecture as shown in Figure 2. The Series-Variable Encoder is utilized to optimize the original series data. The N-Order Residual Tensor Network approximates the variable space, captures the dynamic dependence between variables over time, and ensures the ability to model long-term sequences. We leverage the Skip-Connection Layer components to memorize coding and scale information. Finally, the loss function exploited by MVSRTN is mentioned.
4.1 SERIES-VARIABLE ENCODER
The Series-variable Encoder is the first part of MVSRTN, in this component, 1D CNN layer is first exploited to extract short-term patterns in the time dimension as well as local dependencies between variables:
ct = GeLU(WC ∗ yt + bC) (3)
where ∗ represents convolution operation, WC ∈ RkD×d and bC ∈ Rd are all the weight parameters, d is the number of filters, k is the kernel size, and GeLU is an activation function (Hendrycks & Gimpel, 2016). After the above operations and zero-padding, the new sequence representation C = {c1, · · · , cT }> ∈ RT×d can be obtained. In addition, the proposed model gains the representation that enriches the global interaction information by self-attention mechanism at series level and variable level, denoted as G ∈ RT×d,
$$G_S = \mathrm{Softmax}\Big(\mathrm{Tril}\Big(\frac{Q_S K_S^\top}{\sqrt{d}}\Big)\Big) V_S, \qquad G_V = \mathrm{Softmax}\Big(\frac{Q_V K_V^\top}{\sqrt{T}}\Big) V_V, \qquad G = \mathrm{ReLU}(\mathrm{Concat}(G_S; G_V^\top) W_G + b_G) \qquad (4)$$
where $G_S$ is the output of series-level self-attention, $Q_S = C W_S^Q$, $K_S = C W_S^K$, $V_S = C W_S^V$, and $W_S^Q, W_S^K, W_S^V \in \mathbb{R}^{d \times d}$ are all weight parameters. $G_V$ is from variable-level self-attention, $Q_V = C^\top W_V^Q$, $K_V = C^\top W_V^K$, $V_V = C^\top W_V^V$, and $W_V^Q, W_V^K, W_V^V \in \mathbb{R}^{T \times T}$. Moreover, ReLU is an activation function (Nair & Hinton, 2010), Softmax denotes normalization, Tril refers to the lower triangular matrix, and $W_G \in \mathbb{R}^{2d \times d}$. A gate structure is leveraged to fuse information:
X ′ = Sigmoid(α)×C + Sigmoid(β)×G (5)
where α and β are all scale parameters, and Sigmoid is a function (Leshno et al., 1993) commonly used as a gate. Finally, we achieve the encoded high-quality multivariate time series, and then map it to the latent variable space.
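A rough numpy sketch of the series-level and variable-level self-attention of Eq. (4) and the gated fusion of Eq. (5); all weights are random placeholders and the preceding 1D convolution is omitted, so this only illustrates the shapes and data flow, not the trained encoder.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

T, d = 8, 16
C = np.random.randn(T, d)                       # output of the 1D CNN layer
Wq = Wk = Wv = np.random.randn(d, d) * 0.1      # placeholder series-level projections

# Series-level attention with a causal (lower-triangular) mask.
scores = (C @ Wq) @ (C @ Wk).T / np.sqrt(d)
mask = np.tril(np.ones((T, T)))
G_s = softmax(np.where(mask > 0, scores, -1e9)) @ (C @ Wv)      # (T, d)

# Variable-level attention operates on C^T, i.e. over the d feature rows.
Ct = C.T
Wq_v = Wk_v = Wv_v = np.random.randn(T, T) * 0.1
scores_v = (Ct @ Wq_v) @ (Ct @ Wk_v).T / np.sqrt(T)
G_v = softmax(scores_v) @ (Ct @ Wv_v)                           # (d, T)

# Concatenate, project, apply ReLU, and gate with the CNN features.
Wg = np.random.randn(2 * d, d) * 0.1
G = np.maximum(np.concatenate([G_s, G_v.T], axis=-1) @ Wg, 0.0)
alpha, beta = 0.5, 0.5                          # scale parameters (trainable in the model)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
X_enc = sigmoid(alpha) * C + sigmoid(beta) * G  # encoded series X'
```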
4.2 N-ORDER RESIDUAL TENSOR NETWORK
In this section, we first give a theoretical analysis of the approximate variable space of the tensor network, and secondly, introduce the algorithm flow of the component.
In the theoretical analysis part, we build the encoded series X ′ = {x1, · · · ,xT }> ∈ RT×d into a variable space. Compared to the Definition 1, based on the time series history X ′, we represent the final variable space as follows:
Tx1,··· ,xT = x1 ⊗ x2 ⊗ · · · ⊗ xT (6)
In order to learn the effective interactive information in the variable space $\mathcal{T}$, we decompose its representation into a tensor network $\mathcal{A}$ that is contracted from T parameter tensors $\mathcal{W}^{(t)} \in \mathbb{R}^{r_{t-1} \times r_t \times d}$:
$$\mathcal{A} = \mathcal{W}^{(1)} \bullet \cdots \bullet \mathcal{W}^{(T)} = \sum_{h_1=1}^{r_1}\sum_{h_2=1}^{r_2}\cdots\sum_{h_{T-1}=1}^{r_{T-1}} \mathcal{W}^{h_0 h_1}_{x_1}\,\mathcal{W}^{h_1 h_2}_{x_2}\cdots\,\mathcal{W}^{h_{T-1} h_T}_{x_T} \qquad (7)$$
where $x_t \in \mathbb{R}^d$ and $h_t \in \mathbb{R}^{r_t}$ refer to the indexes of $\mathcal{W}^{(t)}$, and the dimension of the index $h_t$ is the truncated rank of the tensor. This set of values $[r_t]$ determines the degree to which $\mathcal{A}$ approximates the variable space $\mathcal{T}$. The operator $\bullet$ denotes tensor contraction, which can be understood as a dot product between tensors. As shown in Eq.(7), tensor contraction is essentially a summation over indexes with the same dimension, which eliminates those indexes.
It is worth noting that $\mathcal{A}$ and $\mathcal{T}$ can be considered equal when a certain error is allowed. The above theory is stated in Claim 1, which illustrates that the decomposed parameter space $\mathcal{A}$ has the ability to model the variable space. Claim 1. The tensor network $\mathcal{A}$ is obtained by tensor-train decomposition of the variable space $\mathcal{T}$. When the ranks of the parameter tensors in Eq.(7) are not truncated, $\mathcal{A} = \mathcal{T}$, and their complexities are both equal to $O(d^T)$; when the following inequality is satisfied, for a prescribed accuracy $\epsilon$, $\mathcal{A}$ can be considered an optimal approximation of the space $\mathcal{T}$:
$$\|\mathcal{T} - \mathcal{A}\|_F \le \epsilon\,\|\mathcal{T}\|_F \qquad (8)$$
The above is the theoretical discussion of modeling variable space through the idea of low-rank approximation. From a practical point of view, updating the tensor network through automatic differentiation is an effective way to reduce its loss with the variable space. Next, we give the specific modeling process of the N-Order Residual Tensor Network component.
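Before turning to that process, the following toy numpy sketch makes the contraction of Eq. (7) and the error of Eq. (8) explicit: it contracts a chain of TT cores into a full tensor and measures the relative Frobenius error against a stand-in target. The boundary bonds are closed with a trace here for simplicity, whereas Eq. (7) carries explicit boundary indices; all dimensions are toy values.

```python
import numpy as np

d, T, r = 3, 4, 2                                    # toy feature dim, length, bond dim
cores = [np.random.randn(r, r, d) * 0.5 for _ in range(T)]

def contract_tt(cores):
    """Contract TT cores of shape (r_left, r_right, d) into a full tensor of shape (d,)*T."""
    out = cores[0]                                   # axes: (left bond, right bond, d1)
    for W in cores[1:]:
        # Sum over the shared bond index between the running tensor and the next core.
        out = np.tensordot(out, W, axes=([1], [0]))  # (..., new right bond, new d) appended
        out = np.moveaxis(out, -2, 1)                # keep the bond index in position 1
    # Close the two boundary bonds with a trace (the model uses boundary vectors instead).
    return np.trace(out, axis1=0, axis2=1)

A = contract_tt(cores)                               # shape (d, d, d, d)
target = np.random.randn(*([d] * T))                 # stand-in for the variable space T
rel_err = np.linalg.norm(A - target) / np.linalg.norm(target)   # quantity bounded in Eq. (8)
```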
For ensuring the probability normalization of the tensor network input (Stoudenmire & Schwab, 2016), L2 normalization is first performed on each sequence vector $x_t$ in $X'$:
φ(xt) = L2(xt) (9)
As shown in Figure 3(a), the normalized series φ(x_t) is contracted with the network to forecast the future distribution of variables. For avoiding confusion about the position of different timestamps, the network is modeled step by step in the direction of time, and the parameter tensors are shared (Zauner-Stauber et al., 2018), namely $\mathcal{W}^{(1)} = \cdots = \mathcal{W}^{(T)} = \mathcal{W} \in \mathbb{R}^{r \times r \times d}$:
$$h_t = \mathcal{A}(\phi(x_t), h_{t-1}; \mathcal{W}) = h_{t-1} \bullet \mathcal{W} \bullet \phi(x_t), \qquad t \in [1, \cdots, T] \qquad (10)$$
where $h_t \in \mathbb{R}^r$, $h_0$ is an all-ones vector, $\bullet$ denotes tensor contraction, and $r$ is set as a hyperparameter, the bond dimension (which is adopted to control the ability of the tensor network to approximate the variable space; its upper limit is $d^{T/2}$).
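A minimal sketch of the step-by-step contraction of Eq. (10) with a single shared core, written with einsum; the L2-normalized inputs and the all-ones initial vector follow the description above, and all tensors are random placeholders.

```python
import numpy as np

d, r, T = 16, 32, 24
W = np.random.randn(r, r, d) * 0.05        # shared core with indices (h_{t-1}, h_t, x_t)
X = np.random.randn(T, d)                  # encoded series X'

h = np.ones(r)                             # h_0 is the all-ones vector
for x_t in X:
    phi = x_t / np.linalg.norm(x_t)        # L2 normalization of the input vector
    # h_t[j] = sum_{i,k} h_{t-1}[i] * W[i, j, k] * phi[k]
    h = np.einsum('i,ijk,k->j', h, W, phi)
```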
For long-term sequences, only the space-approximated tensor network will result in gradient problems. In response to the issue, we propose the N-order residual connection approach. First, compared with the tensor network structure mentioned above, we add the residual term when modeling the series information of the current time stamp (as shown in Figure 3(b)):
$$h_t = \mathcal{H}(\phi(x_t), h_{t-1}; \mathcal{W}) = h_{t-1} + \mathcal{A}(\phi(x_t), h_{t-1}; \mathcal{W}), \qquad t \in [1, \ldots, T] \qquad (11)$$
Inspired by the numerical solution of ordinary differential equations, its high-order solvers have lower truncation error and higher convergence stability, we propose a tensor network combined with the N-order residual connection as shown in Figure 3(c):
$$h_t = \mathcal{H}(\phi(x_t), h_{t-1}; \mathcal{W}) = h_{t-1} + \sum_{j=1}^{N} a_j z_j, \qquad t \in [1, \ldots, T] \qquad (12)$$
$$z_j = \begin{cases} \mathcal{A}(\phi(x_t), h_{t-1}; \mathcal{W}) & j = 1 \\ \mathcal{A}(\phi(x_t) + b_j,\; h_{t-1} + b_j z_{j-1}; \mathcal{W}) & 2 \le j \le N \end{cases}$$
where $a_j$ and $b_j$ are all fixed coefficients. Please see Appendix A for the coefficient values, calculated from the Taylor expansion of the different-order residual connections.
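Building on the previous sketch, the N-order residual update of Eq. (12) can be written as below. The stage coefficients a_j, b_j are illustrative placeholders (the values actually used, derived via Taylor expansion, are listed in Appendix A), and the higher stages shift the hidden state by b_j * z_{j-1} as in the equation above.

```python
import numpy as np

def tn_step(phi, h, W):
    """One tensor-network contraction A(phi, h; W)."""
    return np.einsum('i,ijk,k->j', h, W, phi)

def residual_tn_step(phi, h, W, a, b):
    """N-order residual update: h <- h + sum_j a_j * z_j (Eq. 12)."""
    z = tn_step(phi, h, W)
    acc = a[0] * z
    for j in range(1, len(a)):
        z = tn_step(phi + b[j], h + b[j] * z, W)
        acc = acc + a[j] * z
    return h + acc

d, r = 16, 32
W = np.random.randn(r, r, d) * 0.05
h = np.ones(r)
phi = np.random.randn(d)
phi /= np.linalg.norm(phi)
# Placeholder second-order coefficients (illustrative, not the Appendix A values).
h_next = residual_tn_step(phi, h, W, a=[0.5, 0.5], b=[0.0, 1.0])
```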
$$x^U = h_T W_U + b_U \qquad (13)$$
Finally, a linear component maps hT ∈ Rr to xU ∈ RD, where WU ∈ Rr×D and bU are all parameter weights.
4.3 SKIP CONNECTION LAYER
The layer is added after the input layer and the encoder layer respectively, and both are directly connected to the Output Layer. For multivariate time series, the numerous nonlinear structures in the model make the output scale insensitive to the input scale (Lai et al., 2018), resulting in a significant reduction in the prediction accuracy of the model. In addition, in order to memorize the encoder states, we also adopt the layer to directly propagate the encoding information to the output.
xL = LWL + bL (14)
the known input data is X = {y1,y2, · · · ,yT }> ∈ RT×D, and the selected series is L = {yT−l+1, · · · ,yT } ∈ RD×l, WL ∈ Rl×1 and bL ∈ R1 are all weight parameters.
xM = (MWM + bM )WO + bO (15)
the selected series is M = {xT−m+1, · · · ,xT } ∈ Rd×m, where the encoder series is X ′ = {x1,x2, · · · ,xT }> ∈ RT×d, and weight parameters are WM ∈ Rm×1, bM ∈ R1, WO ∈ Rd×D and bO ∈ RD.
4.4 OUTPUT LAYER AND LOSS FUNCTION
Finally, the prediction result at time stamp s = T + h of the network is Y ′s ∈ RD:
Y ′s = x U + xL + xM (16)
Moreover, we utilize the validation set to decide which of the two loss functions is better for our model. We leverage Mean Absolute Error (MAE, also denoted as the L1 loss function) and Mean Square
Error (MSE, also denoted as L2 loss function) to update the network parameters. MSE is simple to calculate, which is the default loss function of many prediction tasks, and MAE has better robustness for outliers.
$$\mathrm{MSE} = \sum_{s \in \Omega_{\mathrm{train}}} \sum_{i=1}^{D} (Y_{s,i} - Y'_{s,i})^2, \qquad \mathrm{MAE} = \sum_{s \in \Omega_{\mathrm{train}}} \sum_{i=1}^{D} |Y_{s,i} - Y'_{s,i}| \qquad (17)$$
where Ωtrain is a set of time stamps for model training.
5 EXPERIMENTS
In the section, we perform extensive experiments on four multivariate time series forecasting benchmark datasets between MVSRTN and seven baselines. Moreover, in order to evaluate the effectiveness of the proposed model, we conduct ablation experiments and order experiments.
5.1 DATASETS AND BASELINES
Our experiments are based on three datasets which are publicly available:
• Traffic 1: This dataset is composed of 48 months of hourly data from the California Department of Transportation during 2015 and 2016. The data describes the road occupancy rates (between 0 and 1) measured by different sensors on San Francisco Bay Area freeways.
• Exchange-Rate: A dataset that consists of the daily exchange rates of eight countries including Australia, Britain, Canada, Switzerland, China, Japan, New Zealand and Singapore, ranging from 1990 to 2016.
• Solar-Energy 2: This dataset consists of the solar power production records in the year of 2006 and these records are sampled every 10 minutes from 137 PV plants in Alabama State.
• Electricity 3: This dataset from the UCI Machine Learning Repository is composed of the electricity consumption in kWh which was recorded every 15 minutes from 2012 to 2014 for 321 clients.
We compare with a number of forecasting methods. AR: An auto-regressive model for time series forecasting. GP (Frigola, 2015; Roberts et al., 2012): A Gaussian Process time series model for prediction. LSTNet-skip (Lai et al., 2018): A LSTNet model (A deep neural network that combines convolutional neural networks and recurrent neural networks, and could catch the long-term and short-term patterns of time series) with skip-RNN layer. LSTNet-Attn (Lai et al., 2018): A LSTNet model with temporal attention layer. TPA-LSTM (Shih et al., 2019): An attention-recurrent neural network for multivariate time series forecasting. MTGNN (Wu et al., 2020): A graph neural network model for the forecasting of multivariate time series. MTGNN-sampling (Wu et al., 2020): MTGNN model trained on a sampled subset of a graph in each iteration.
All methods are evaluated with two metrics: root relative squared error (RSE) and empirical correlation coefficient (CORR). A lower value is better for RSE while a higher value is better for CORR. In addition, the details of the datasets and metrics are in Appendix B.
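For reference, the two metrics (defined precisely in Appendix B) can be computed as in the following numpy sketch, assuming Y_true and Y_pred are arrays of shape (num_test_steps, D); the correlation is implemented in the conventional per-variable form.

```python
import numpy as np

def rse(Y_true, Y_pred):
    """Root relative squared error over the whole test set."""
    num = np.sqrt(np.sum((Y_true - Y_pred) ** 2))
    den = np.sqrt(np.sum((Y_true - Y_true.mean()) ** 2))
    return num / den

def corr(Y_true, Y_pred):
    """Empirical correlation coefficient, averaged over the D variables."""
    yt = Y_true - Y_true.mean(axis=0, keepdims=True)
    yp = Y_pred - Y_pred.mean(axis=0, keepdims=True)
    c = (yt * yp).sum(axis=0) / np.sqrt((yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0))
    return c.mean()
```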
5.2 TRAINING DETAILS
For all tasks, we implement our model with Pytorch-1.20, and train them on a NVIDIA GeForce RTX 2080 Ti GPU. For Traffic and Solar datasets, the size of the series history window T is 168, for Solar-Energy it is 144, and for Exchange-rate it is 120. All the weight parameters are initialized with Xavier (Glorot & Bengio, 2010) and normalized by weight normalization (Salimans &
1http://pems.dot.ca.gov 2http://www.nrel.gov/grid/solar-power-data.html 3https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
Kingma, 2016). As for the learning method, we use the Adam optimizer (Kingma & Ba, 2014) and an exponentially decaying learning rate with a linear warm-up. The initial learning rate is set from 0.0005 to 0.001 and the batch size is 128. We perform dropout after each layer, except the input and output ones, and the rate is usually set to 0.1 or 0.2. The 1D CNN component has 50 filters, and the filter size is set from 3 to 6. In the tensor network, the bond dimension r is tuned in the range from 50 to 100, and the order of the residual connection is chosen from {1, 2, 4}. For the chosen sizes l and m in the Skip-Connection Layer, l is 24 for all datasets, and m is set to the history window size T for each dataset.
5.3 MAIN RESULTS
Table 1 shows the prediction performances of all the models on all datasets, with the metrics RSE and CORR. We set horizon = {6, 24}, respectively, which means the horizons are set to 6 or 24 days over the Exchange-Rate dataset, to 6 or 24 hours over the Electricity and Traffic tasks, and to 60 or 240 minutes over the Solar-Energy data. It is worth noting that the larger the horizon, the harder the prediction task. The best result for each metric of different datasets is highlighted in bold face in the table.
Compared with the MTGNN model (state-of-the-art method), our proposed model achieves competitive results. In particular, for the Exchange-Rate dataset, the results of MVSRTN on the four (horizon, metric) pairs all outperform the MTGNN model. For the Solar-Energy data when horizon=24, MVSRTN lowers RSE by 2.46%. In general, apart from the Traffic dataset, our method has a greater advantage when the horizon is 24. The reasons are that the graph modeling method of MTGNN is more suitable for traffic data, and that when the series data is difficult to predict (horizon=24), our model plays the role of modeling the variable space.
5.4 ABLATION EXPERIMENTS
To demonstrate the effectiveness of each model structure on Solar-Energy, Traffic, Electricity and Exchange-Rate data, we compare MVSRTN with 4 variants as follows:
• MVSRTN-nA: We remove the self-attention at series and variable level component such that there is a lack of global interactive information at series level and variable level.
As shown in Figure 4, reported results are the average of 10 runs. For all the datasets, removing the residual tensor networks and removing the skip-connection layer both cause significant performance degradation. In particular, for the CORR metric of the Exchange-Rate dataset, the MVSRTN-nS variant basically loses the ability to accurately forecast, which means the exchange rate data is more sensitive to scale information. Moreover, solar energy and electricity data are more obviously affected by residual tensor network components.
5.5 ORDER ANALYSIS
In order to verify the effects of different orders of the residual tensor network, as shown in Figure 5, we design comparative experiments over the orders of 1, 2, 4 on the Solar-Energy, Traffic, Electricity and Exchange-Rate datasets. The experimental results show that the second-order residual tensor network (RTN) achieves the best results on both RSE and CORR metrics for the solar energy data and the exchange rate data. For the traffic data, the effect of the fourth-order RTN is better than that of the first-order and second-order. In addition, the result of the fourth-order RTN is similar to the second-order RTN on the Electricity dataset. This verifies the effectiveness of the proposed higher-order residual connection.
6 CONCLUSION
In this paper, we propose a novel framework (MVSRTN) for fully exploiting the dependencies in the variable space in the task of multivariate time series forecasting. We exploit a tensor network based on the idea of low-rank approximation to model the variable space, and shared tensor components to ensure the translation invariance of the network. Additionally, we propose an N-order residual connection approach and couple it to the space-approximated tensor network to improve the ability to model long-term sequences. Moreover, a series-variable encoder is designed to improve the quality of the variable space, and we leverage the skip-connection layer to achieve the dissemination of information such as scale. According to the comparison with 7 baselines, we show the efficiency of the MVSRTN framework, and experimental results show the effect of modeling the variable space for multivariate time series forecasting.
A THE COEFFICIENT VALUES OF RESIDUAL TENSOR NETWORK
Table 2 shows the coefficients used in Eq. (12) of Section 4.2.
B DATASETS AND METRICS
In Table 3, we summarize statistics of benchmark datasets. Moreover, two conventional evaluation metrics exploited by us are defined as follows:
Root Relative Squared Error (RSE):
$$\mathrm{RSE} = \frac{\sqrt{\sum_{s \in \Omega_{\mathrm{test}},\, i} (Y_{s,i} - Y'_{s,i})^2}}{\sqrt{\sum_{s \in \Omega_{\mathrm{test}},\, i} (Y_{s,i} - \mathrm{mean}(Y))^2}} \qquad (18)$$
Empirical Correlation Coefficient (CORR):
$$\mathrm{CORR} = \frac{1}{n}\sum_{i=1}^{n} \frac{\sum_{s}\big(Y_{s,i} - \mathrm{mean}(Y_i)\big)\big(Y'_{s,i} - \mathrm{mean}(Y'_i)\big)}{\sqrt{\sum_{s}\big(Y_{s,i} - \mathrm{mean}(Y_i)\big)^2 \big(Y'_{s,i} - \mathrm{mean}(Y'_i)\big)^2}} \qquad (19)$$
where Ωtest is a set of time stamps for model test, and lower value is better for RSE while higher value is better for CORR. | 1. What is the main contribution of the paper regarding multivariate time series forecasting?
2. What are the strengths and weaknesses of the proposed model MVSRTN?
3. How does the reviewer assess the novelty of the paper's contributions?
4. What are some concerns regarding the performance of MVSRTN?
5. Are there any questions about the paper's content, such as typos or missing information? | Summary Of The Paper
Review | Summary Of The Paper
For multivariate time series forecasting, this paper proposes to use a tensor network to model the variable space and improves the quality of the variable space by designing the series-variable encoder. Under the variable space, this paper also proposes an N-order residual connection approach and the skip-connection layer for processing long-term data. The proposed model MVSRTN achieves good results on some datasets. However, this paper has not explored MVSRTN thoroughly, and the results are not very competitive. In addition, there are many typos in the paper.
Review
Pros:
The proposed model is feasible in practice.
Empirical results are not bad.
Cons:
The main contribution (i.e., forming a variable space and decomposing it through tensor train) is not really novel. There have been plenty of works solving dimensional curses through tensor decomposition such as [1,2,3].
In Table 1, there are no clear advantages compared to previous methods. MVSRTN slightly outperforms MTGNN-sampling only on Solar-Energy. On Exchange-Rate, MVSRTN is slightly better. However, on Traffic, MTGNN-sampling is better at 24, and on Electricity, MTGNN-sampling exceeds MVSRTN. Therefore, this causes concerns about the performance of MVSRTN. And if the N-order residual connection approach is indeed efficient for long-term sequences, why does MVSRTN fail on Traffic at 24, while it is better at 6?
There are lots of related works about Multivariate Time Series (https://github.com/Alro10/deep-learning-time-series). And more works may need to be considered such as informer[4].
There is a lack of investigation of the Series-Variable Encoder; for example, which self-attention component is more important? The series-level one or the variable-level one?
Punctuation needs to be written at the end of equations.
There is a missing arrow above $b_2$ in Figure 3(c).
I find too many typos in the paper:
"In literature, there have been some approaches to capture interactions in series history from different perspectives. research utilizing" -> "Research utilizing";
"Tril refer to lower triangular" -> "Tril refers to lower triangular"
"we leverage Mean Absolute Error" -> "We leverage Mean Absolute Error"
etc.
I sincerely hope that the authors can write the paper carefully!!
Additional Questions:
Why use the 1-D CNN at first?
Are the scale parameters $\alpha$ and $\beta$ trainable? How to set them in experiments?
How to split train, validation, test set?
[1] Yu Liu, Quanming Yao, Yong Li. Generalizing Tensor Decomposition for N-ary Relational Knowledge Bases. WWW 2020.
[2] Yinchong Yang, Denis Krompass, Volker Tresp. Tensor-Train Recurrent Neural Networks for Video Classification. ICML 2017.
[3] Yu Pan, Jing Xu, Maolin Wang, Jinmian Ye, Fei Wang, Kun Bai, Zenglin Xu. Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition. AAAI 2019.
[4] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. AAAI 2021. |
ICLR | Title
Modeling Variable Space with Residual Tensor Networks for Multivariate Time Series
Abstract
Multivariate time series involve a series of valuable applications in the real world, the basic premise of which is that multiple variables are interdependent. However, the relationship between variables in the latent space is dynamic and complex, and as the time window increases, the size of the space also increases exponentially. For fully exploiting the dependencies in the variable space, we propose Modeling Variable Space with Residual Tensor Networks (MVSRTN) for multivariate time series. In this framework, we derive the mathematical representation of the variable space, and then use a tensor network based on the idea of low-rank approximation to model the variable space. The tensor components are shared to ensure the translation invariance of the network. In order to improve the ability to model long-term sequences, we propose an N-order residual connection approach and couple it to the space-approximated tensor network. Moreover, the series-variable encoder is designed to improve the quality of the variable space, and we use the skip-connection layer to achieve the dissemination of information such as scale. Experimental results verify the effectiveness of our proposed method on four multivariate time series forecasting benchmark datasets.
1 INTRODUCTION
Multivariate time series are ubiquitous in the real world, such as energy collection and consumption (Rangapuram et al., 2018), traffic occupancy (Lai et al., 2018), financial market trends (Shih et al., 2019), and aerospace data (Zhao et al., 2020). They mainly consist of sensor data sampled on multiple variables at different timestamps. In order to accurately forecast future time series, it is necessary to learn the laws and the dynamic interaction information between variables that exist in the series history (Wu et al., 2020; Xu et al., 2020; Cheng et al., 2020).
For illustrating the dependencies in multivariate time series, we focus on examples through a case study. In Figure 1(a), a certain variable itself has periodic laws, and the dependency relationship in this respect can be well captured by modeling the short-term and long-term patterns of the sequence. As shown in Figure 1(b), there are dynamic dependency relations between the variables. The "arrow" indicates the trend of data changes. In 2011, the "Switzerland" variable changed significantly earlier than the others; in 2013, the trends of the three variables occurred almost simultaneously.
In the literature, there have been some approaches to capture interactions in series history from different perspectives. Research utilizing traditional deep learning components such as CNNs, RNNs, and attention mechanisms is more inclined to learn long-term and short-term sequence states, and fails to make full use of the entanglement between variables in the latent space (Lai et al., 2018; Shih et al., 2019; Cheng et al., 2020). New mathematical tools such as graph neural networks and tensor calculation methods have inspired research on the spatial relations in multivariate time series. Wu et al. (2020) use a graph neural network to extract one-way relationships between variable pairs in an end-to-end framework. The TLASM (Xu et al., 2020) adopts tensor calculation to model the trend in the multivariate time series, and combines the core tensor components into the LSTM.
Nevertheless, most of the above-mentioned schemes for explicitly modeling spatio-temporal dependencies tend to build low-dimensional variable spaces, but this is still incomplete for the variable space shown in Figure 1(c). In other words, in the high-dimensional variable space evolved over time, the ideal state for multivariate time series is that the model can capture the effective relationship between variables as fully as possible. To address the above challenges, we first define
the high-dimensional variable space mathematically through the tensor product operator. Then, the proposed MVSRTN model models the variable space of multivariate time series.
In the proposed scheme, the N-Order Residual Tensor Network is the core component. The tensor network is based on the idea of low-rank approximation, which refers to decomposing and truncating the extremely large dependent space through tensor decomposition methods (Ma et al., 2019; Zhang et al., 2019) within a certain error. For modeling the exponential variable space, we first adopt the Tensor-Train (Oseledets, 2011) algorithm, which satisfies the demands of processing different timestamps, to decompose the variable space representation, and then obtain the space-approximated tensor network by tensor contraction. However, for long-term sequences, the continuous multiplication in the tensor contraction calculation would result in a series of gradient problems. Inspired by ordinary differential equations, we extend and introduce an N-order Euler discretization method, of which the first-order approach can be thought of as a residual connection. Finally, the space-approximated tensor network and the proposed N-order residual connection are coupled to model the effective interdependence in the variable space.
In addition, the Series-Variable Encoder aims at improving the quality of the variable space by reintegrating the time series. In this component, the combination of a 1D CNN and self-attention mechanisms at the series level and variable level effectively enriches the dependency information mapped to the latent space. For making MVSRTN more robust, we use the linear Skip-Connection Layer to directly propagate the input scale and coding scale information to the Output Layer. Our contributions are three-fold:
• We represent the high-dimensional variable space, and through theoretical analysis, we demonstrate the space-approximated tensor network has the ability to model the latent variable space.
• We propose the MVSRTN. In this architecture, a designed encoder combines long-term and short-term interactive information to optimize the variable space, a tensor network coupled by N-order residual connection captures the dynamically changing dependencies, and an additional layer propagates the scale features.
• We perform extensive experiments on four benchmark datasets and the results demonstrate the effectiveness of MVSRTN in evaluation metrics.
2 RELATED WORK
Multivariate Time Series In the real world, a large number of multivariate time series tasks are derived from the fields of finance, economy, and transportation. The purpose of this type of task is to capture or predict the trend of a set of variables that change dynamically over time. In the research history of multivariate time series, researchers have reached a basic consensus that multiple variables are interdependent. After a period of development, statistical machine learning models such as Vector Auto-Regressive (Box et al., 2015), Gaussian Process (Frigola, 2015) algorithm can linearly model multivariate time series data, but the complexity of the model expands dramatically with the increase of time or variables. In recent years, with the rise of deep learning, neural networks implicitly capture the dependence of data in a non-linear mode. LSTNet (Lai et al., 2018) uses CNN and RNN to extract the local-term dependent mode of the time series, and extracts the long-term
mode by a recurrent-skip layer. Based on the attention mechanism and the LSTNet model, the TPA-LSTM (Shih et al., 2019) model can capture long-term dependencies across multiple time steps. However, algorithms based on neural networks only explicitly model the interaction in the time dimension, and fail to fully analyze the relationships in the latent variable space. The MTGNN (Wu et al., 2020) model based on graph neural network technology explicitly extracts the relationship between variables and achieves the SOTA effect. However, only the one-way relationship between variable pairs is processed, which is not enough for the dynamic and complex variable space. Therefore, we consider using the idea of approximation to model the variable space.
Tensor Network for Dependent Space The tensor network is a powerful mathematical modeling framework originating from many-body physics; it learns the effective associations in an exponentially large Hilbert space through the idea of low-rank approximation (Stoudenmire & Schwab, 2016). With deep-learning-style training, tensor networks only need to approximate the dependent space by training the decomposed parameter space. In natural language processing, the TSLM (Zhang et al., 2019) model shows that the theoretical model obtained by tensor decomposition of the high-dimensional semantic space has a stronger ability to capture semantic dependence than RNN-based networks. The u-MPS (Zauner-Stauber et al., 2018) tensor network, based on Tensor-Train decomposition (Oseledets, 2011), is also effective for probabilistic sequence modeling. Tensor networks therefore have natural advantages for modeling high-dimensional dependent spaces (Miller et al., 2021). However, due to gradient problems, existing tensor network schemes still struggle with real-world sequence data over long periods.
Euler Discretization and Residual Connection The residual connection alleviates the vanishing-gradient problem of deep networks (He et al., 2016): when gradients saturate, it maintains the stability of the network through the identity mapping. From the viewpoint of differential dynamics, a residual connection can be regarded as a discretization of a first-order Euler solver (Chen et al., 2018). Higher-order solvers (for example, the second-order Runge-Kutta method) reduce the approximation error incurred by the identity mapping (Zhu et al., 2019; Li et al., 2021) and are also better suited to continuously sampled data (Demeester, 2020).
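To make the Euler/residual correspondence concrete, the toy sketch below (illustrative, not taken from any of the cited works) shows that a residual update equals an explicit Euler step with unit step size, and that a second-order (midpoint) step simply adds one internal stage:

```python
import torch

def f(y):
    # an arbitrary smooth vector field standing in for a learned layer
    return torch.tanh(y)

def residual_step(y):
    # standard residual connection: y <- y + f(y)
    return y + f(y)

def euler_step(y, dt=1.0):
    # explicit first-order Euler step of dy/dt = f(y)
    return y + dt * f(y)

def midpoint_step(y, dt=1.0):
    # second-order (midpoint) solver: one extra internal stage
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    return y + dt * k2

y = torch.randn(4)
assert torch.allclose(residual_step(y), euler_step(y))  # identical when dt = 1
```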
3 PROBLEM STATEMENT
In this paper, we formulate the multivariate time series task as follows. Formally, we are given the time series signal of T time points X = {y_1, y_2, · · · , y_T}^⊤, where y_t = {v_{t,1}, v_{t,2}, · · · , v_{t,D}}^⊤ ∈ R^D, T is the number of time steps, and D is the number of variables. We leverage X ∈ R^{T×D} as the series history to forecast the series at the next time stamp y_{T+h}, where h ≥ 0 is the desired horizon ahead of the current time stamp. The range of the prediction horizon h varies across tasks.
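To make the setup concrete, the sketch below builds (history, target) pairs from a multivariate series for a given horizon; the window length, horizon, and array names are illustrative choices, not values from the paper:

```python
import torch

def make_windows(series: torch.Tensor, T: int, h: int):
    """series: (num_steps, D). Returns histories (N, T, D) and targets (N, D)."""
    histories, targets = [], []
    for start in range(series.shape[0] - T - h + 1):
        histories.append(series[start:start + T])      # X in R^{T x D}
        targets.append(series[start + T + h - 1])      # y_{T+h}
    return torch.stack(histories), torch.stack(targets)

data = torch.randn(1000, 8)                            # e.g. 8 variables
X_hist, y_next = make_windows(data, T=168, h=24)
print(X_hist.shape, y_next.shape)                      # (809, 168, 8) and (809, 8)
```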
The main purpose of this paper is to model the effective variable dependencies in the latent space. Hence, we formulate the variable space as shown in Definition 1. Definition 1. The tensor product operator is adopted so that the variable vectors y_t indexed by different timestamps fully interact. Taking y_1 and y_2 as an example, the interaction function is:
T_{y_1, y_2} = y_1 ⊗ y_2 =
\begin{bmatrix}
v_{1,1} v_{2,1} & \cdots & v_{1,D} v_{2,1} \\
\vdots & \ddots & \vdots \\
v_{1,1} v_{2,D} & \cdots & v_{1,D} v_{2,D}
\end{bmatrix}   (1)

where T_{y_1, y_2} ∈ R^{D×D}, and we define the whole variable space T ∈ R^{D^T} as:

T_{y_1, ··· , y_T} = y_1 ⊗ y_2 ⊗ · · · ⊗ y_T   (2)
In the model architecture, we will detail how to model the proposed variable space.
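As a quick illustration of Definition 1 (a sketch, not part of the paper), the variable space can be formed by chained outer products, and its size D^T grows exponentially with the window length:

```python
import torch

D, T = 3, 4
ys = [torch.randn(D) for _ in range(T)]   # y_1, ..., y_T

# pairwise case of Eq. (1): entry (i, j) equals v_{1,j} * v_{2,i}
pair = torch.outer(ys[1], ys[0])
print(pair.shape)                          # torch.Size([3, 3])

# full variable space of Eq. (2): an order-T tensor with D**T entries
space = ys[0]
for y in ys[1:]:
    space = space.unsqueeze(-1) * y        # outer product along a new axis
print(space.shape, space.numel())          # torch.Size([3, 3, 3, 3]) 81
```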
4 MODEL ARCHITECTURE
In this section, we introduce the proposed MVSRTN architecture, shown in Figure 2. The Series-Variable Encoder is used to refine the original series data. The N-Order Residual Tensor Network approximates the variable space, captures the dynamic dependencies between variables over time, and preserves the ability to model long-term sequences. We leverage Skip-Connection Layer components to memorize encoding and scale information. Finally, the loss functions exploited by MVSRTN are described.
4.1 SERIES-VARIABLE ENCODER
The Series-Variable Encoder is the first part of MVSRTN. In this component, a 1D CNN layer is first exploited to extract short-term patterns in the time dimension as well as local dependencies between variables:
c_t = GeLU(W_C ∗ y_t + b_C)   (3)
where ∗ denotes the convolution operation, W_C ∈ R^{kD×d} and b_C ∈ R^d are weight parameters, d is the number of filters, k is the kernel size, and GeLU is an activation function (Hendrycks & Gimpel, 2016). After these operations and zero-padding, the new sequence representation C = {c_1, · · · , c_T}^⊤ ∈ R^{T×d} is obtained. In addition, the model obtains a representation that enriches the global interaction information through self-attention at the series level and the variable level, denoted G ∈ R^{T×d}:
G_S = Softmax(Tril(Q_S K_S^⊤ / √d)) V_S
G_V = Softmax(Q_V K_V^⊤ / √T) V_V
G = ReLU(Concat(G_S; G_V^⊤) W_G + b_G)   (4)
where G_S is the output of series-level self-attention, with Q_S = C W_S^Q, K_S = C W_S^K, V_S = C W_S^V, and weight parameters W_S^Q, W_S^K, W_S^V ∈ R^{d×d}. G_V is the output of variable-level self-attention, with Q_V = C^⊤ W_V^Q, K_V = C^⊤ W_V^K, V_V = C^⊤ W_V^V, and W_V^Q, W_V^K, W_V^V ∈ R^{T×T}. Moreover, ReLU is an activation function (Nair & Hinton, 2010), Softmax denotes normalization, Tril denotes the lower-triangular (causal) mask, and W_G ∈ R^{2d×d}. A gate structure is leveraged to fuse the information:
X′ = Sigmoid(α) × C + Sigmoid(β) × G   (5)
where α and β are scalar gating parameters, and Sigmoid is a function (Leshno et al., 1993) commonly used as a gate. Finally, we obtain the encoded, higher-quality multivariate time series and then map it to the latent variable space.
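A compact sketch of this encoder follows (a simplified reading of Eqs. (3)-(5); the single-head attention, padding choice, and tying of the variable-level projections to a fixed window length T are illustrative assumptions, not the paper's exact implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeriesVariableEncoder(nn.Module):
    """Sketch of Eqs. (3)-(5): 1D CNN + series/variable self-attention + gate."""
    def __init__(self, D: int, d: int, k: int, T: int):
        super().__init__()
        self.conv = nn.Conv1d(D, d, kernel_size=k, padding=k // 2)   # Eq. (3)
        self.q_s = nn.Linear(d, d, bias=False)   # series-level projections
        self.k_s = nn.Linear(d, d, bias=False)
        self.v_s = nn.Linear(d, d, bias=False)
        self.q_v = nn.Linear(T, T, bias=False)   # variable-level projections
        self.k_v = nn.Linear(T, T, bias=False)
        self.v_v = nn.Linear(T, T, bias=False)
        self.w_g = nn.Linear(2 * d, d)
        self.alpha = nn.Parameter(torch.zeros(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, y):                                             # y: (batch, T, D)
        c = F.gelu(self.conv(y.transpose(1, 2))).transpose(1, 2)[:, :y.size(1)]
        T, d = c.size(1), c.size(2)
        # series-level attention with a lower-triangular (causal) mask, Eq. (4)
        att_s = self.q_s(c) @ self.k_s(c).transpose(1, 2) / d ** 0.5
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=c.device), 1)
        g_s = torch.softmax(att_s.masked_fill(mask, float("-inf")), -1) @ self.v_s(c)
        # variable-level attention on C^T, Eq. (4)
        cv = c.transpose(1, 2)                                        # (batch, d, T)
        att_v = self.q_v(cv) @ self.k_v(cv).transpose(1, 2) / T ** 0.5
        g_v = torch.softmax(att_v, -1) @ self.v_v(cv)                 # (batch, d, T)
        g = F.relu(self.w_g(torch.cat([g_s, g_v.transpose(1, 2)], -1)))
        return torch.sigmoid(self.alpha) * c + torch.sigmoid(self.beta) * g   # Eq. (5)
```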
4.2 N-ORDER RESIDUAL TENSOR NETWORK
In this section, we first give a theoretical analysis of how the tensor network approximates the variable space, and then describe the algorithmic flow of the component.
For the theoretical analysis, we build the encoded series X′ = {x_1, · · · , x_T}^⊤ ∈ R^{T×d} into a variable space. Analogously to Definition 1, based on the time series history X′, we represent the final variable space as follows:
T_{x_1, ··· , x_T} = x_1 ⊗ x_2 ⊗ · · · ⊗ x_T   (6)
In order to learn the effective interaction information in the variable space T, we decompose its representation into a tensor network A that is contracted from T parameter tensors W^{(t)} ∈ R^{r_{t−1}×r_t×d}:
A = W^{(1)} • W^{(2)} • · · · • W^{(T)}
  = \sum_{h_1=1}^{r_1} \sum_{h_2=1}^{r_2} · · · \sum_{h_{T−1}=1}^{r_{T−1}} W^{h_0 h_1}_{x_1} W^{h_1 h_2}_{x_2} · · · W^{h_{T−1} h_T}_{x_T}   (7)
where x_t ∈ R^d and h_t ∈ R^{r_t} refer to the indices of W^{(t)}, and the dimension of the index h_t is the truncated rank of the tensor. The set of values [r_t] determines the degree to which A approximates the variable space T. The operator • denotes tensor contraction, which can be understood as a dot product between tensors. As shown in Eq. (7), tensor contraction is essentially a summation over indices with the same dimension that eliminates those indices.
It is worth noting that A and T are equal up to the allowed error. This is stated in Claim 1, which shows that the decomposed parameter space A has the ability to model the variable space. Claim 1. The tensor network A is obtained by tensor-train decomposition of the variable space T. When the ranks of the parameter tensors in Eq. (7) are not truncated, A = T and both have complexity O(d^T); when the following inequality is satisfied for a prescribed accuracy ε, A can be regarded as the optimal approximation of the space T:
‖T − A‖_F ≤ ε ‖T‖_F   (8)
The above is the theoretical justification for modeling the variable space through the idea of low-rank approximation. From a practical point of view, updating the tensor network through automatic differentiation is an effective way to reduce its discrepancy from the variable space. Next, we give the specific modeling process of the N-Order Residual Tensor Network component.
To ensure probability normalization of the tensor network input (Stoudenmire & Schwab, 2016), L2 normalization is first applied to each sequence vector x_t in X′:

φ(x_t) = L2(x_t)   (9)
As shown in Figure 3(a), the normalized series φ(x_t) is contracted with the network to forecast the future distribution of the variables. To avoid confusing the positions of different timestamps, the network is evaluated step by step along the time direction, and the parameter tensors are shared (Zauner-Stauber et al., 2018), namely W^{(1)} = · · · = W^{(T)} = W ∈ R^{r×r×d}:
h_t = A(φ(x_t), h_{t−1}; W) = h_{t−1} • W • φ(x_t),   t ∈ [1, · · · , T]   (10)
where h_t ∈ R^r, h_0 is an all-ones vector, • denotes tensor contraction, and r is a hyperparameter called the bond dimension (which controls the tensor network's ability to approximate the variable space; its upper limit is d^{T/2}).
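A minimal sketch of the recursion in Eq. (10); the einsum index string spells out the contraction of the previous state with the shared core and the normalized input (the sizes of r, d, and T here are placeholders):

```python
import torch

r, d, T = 8, 16, 24
W = torch.randn(r, r, d) * 0.05                                 # shared core, W in R^{r x r x d}
x = torch.nn.functional.normalize(torch.randn(T, d), dim=-1)    # phi(x_t), Eq. (9)

h = torch.ones(r)                                               # h_0: all-ones vector
for t in range(T):
    # h_t = h_{t-1} . W . phi(x_t): contract over the previous bond index and the input index
    h = torch.einsum("i,ijk,k->j", h, W, x[t])
print(h.shape)                                                   # torch.Size([8])
```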
For long-term sequences, the space-approximated tensor network alone suffers from gradient problems. To address this issue, we propose the N-order residual connection. First, compared with the tensor network structure described above, we add a residual term when modeling the series information at the current time stamp (as shown in Figure 3(b)):
h_t = H(φ(x_t), h_{t−1}; W) = h_{t−1} + A(φ(x_t), h_{t−1}; W),   t ∈ [1, . . . , T]   (11)
Inspired by the numerical solution of ordinary differential equations, whose higher-order solvers have lower truncation error and better convergence stability, we propose a tensor network combined with the N-order residual connection, as shown in Figure 3(c):
h_t = H(φ(x_t), h_{t−1}; W) = h_{t−1} + \sum_{j=1}^{N} a_j z_j,   t ∈ [1, . . . , T]   (12)

z_j = A(φ(x_t), h_{t−1}; W),   j = 1
z_j = A(φ(x_t) + b_j, h_{t−1} + b_j z_{j−1}; W),   2 ≤ j ≤ N
where a_j and b_j are fixed coefficients. See Appendix A for the coefficient values of the different-order residual connections, calculated by Taylor expansion.
x^U = h_T W_U + b_U   (13)
Finally, a linear component maps h_T ∈ R^r to x^U ∈ R^D, where W_U ∈ R^{r×D} and b_U are parameter weights.
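A sketch of the whole component (Eqs. (9)-(13)) with an N-order residual update; the stage coefficients a_j and b_j below are placeholders, since their actual values come from the Taylor expansion in Appendix A:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualTensorNetwork(nn.Module):
    """Sketch: shared-core tensor network with an N-order residual connection."""
    def __init__(self, d: int, r: int, D: int, order: int = 2):
        super().__init__()
        self.W = nn.Parameter(torch.randn(r, r, d) * 0.05)   # shared core, Eq. (10)
        self.out = nn.Linear(r, D)                            # Eq. (13)
        self.order = order
        # placeholder coefficients; the paper derives a_j, b_j via Taylor expansion
        self.a = [1.0 / order] * order
        self.b = [0.5] * order

    def step(self, x_t, h_prev):
        # A(phi(x_t), h_{t-1}; W) = h_{t-1} . W . phi(x_t), Eq. (10)
        return torch.einsum("bi,ijk,bk->bj", h_prev, self.W, x_t)

    def forward(self, x):                          # x: (batch, T, d), encoded series
        x = F.normalize(x, dim=-1)                 # Eq. (9)
        h = x.new_ones(x.size(0), self.W.size(0))  # h_0: all-ones
        for t in range(x.size(1)):
            z, update = None, 0.0
            for j in range(self.order):            # stages of Eq. (12)
                if j == 0:
                    z = self.step(x[:, t], h)
                else:
                    z = self.step(x[:, t] + self.b[j], h + self.b[j] * z)
                update = update + self.a[j] * z
            h = h + update
        return self.out(h)                         # x^U in R^D
```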
4.3 SKIP CONNECTION LAYER
This layer is added after the input layer and the encoder layer, respectively, and is directly connected to the Output Layer. For multivariate time series, the many nonlinear structures in the model make the output scale insensitive to the input scale (Lai et al., 2018), which significantly reduces the prediction accuracy of the model. In addition, in order to memorize the encoder states, we also adopt this layer to propagate the encoding information directly to the output.
x^L = L W_L + b_L   (14)
where the known input data is X = {y_1, y_2, · · · , y_T}^⊤ ∈ R^{T×D}, the selected series is L = {y_{T−l+1}, · · · , y_T} ∈ R^{D×l}, and W_L ∈ R^{l×1} and b_L ∈ R^1 are weight parameters.
x^M = (M W_M + b_M) W_O + b_O   (15)
where the selected series is M = {x_{T−m+1}, · · · , x_T} ∈ R^{d×m} with the encoder series X′ = {x_1, x_2, · · · , x_T}^⊤ ∈ R^{T×d}, and the weight parameters are W_M ∈ R^{m×1}, b_M ∈ R^1, W_O ∈ R^{d×D}, and b_O ∈ R^D.
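A sketch of the two skip paths of Eqs. (14)-(15) (shapes follow the text; the window sizes l and m are hyperparameters):

```python
import torch
import torch.nn as nn

class SkipConnectionLayer(nn.Module):
    """Sketch: linear paths from raw inputs and encoder outputs to the Output Layer."""
    def __init__(self, D: int, d: int, l: int, m: int):
        super().__init__()
        self.w_l = nn.Linear(l, 1)    # Eq. (14), applied per input variable
        self.w_m = nn.Linear(m, 1)    # Eq. (15), first projection
        self.w_o = nn.Linear(d, D)    # Eq. (15), second projection
        self.l, self.m = l, m

    def forward(self, y, x_enc):      # y: (batch, T, D), x_enc: (batch, T, d)
        L = y[:, -self.l:].transpose(1, 2)         # (batch, D, l): last l raw steps
        x_l = self.w_l(L).squeeze(-1)              # (batch, D)
        M = x_enc[:, -self.m:].transpose(1, 2)     # (batch, d, m): last m encoded steps
        x_m = self.w_o(self.w_m(M).squeeze(-1))    # (batch, D)
        return x_l + x_m

# the Output Layer then adds the tensor-network output x^U to these two terms (Section 4.4)
```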
4.4 OUTPUT LAYER AND LOSS FUNCTION
Finally, the prediction of the network at time stamp s = T + h is Y′_s ∈ R^D:

Y′_s = x^U + x^L + x^M   (16)
Moreover, we use the validation set to decide which of two loss functions is better for our model: the Mean Absolute Error (MAE, also denoted as the L1 loss) and the Mean Squared Error (MSE, also denoted as the L2 loss) are used to update the network parameters. MSE is simple to compute and is the default loss function of many prediction tasks, while MAE is more robust to outliers.
MSE = \sum_{s ∈ Ω_train} \sum_{i=1}^{D} (Y_{s,i} − Y′_{s,i})^2
MAE = \sum_{s ∈ Ω_train} \sum_{i=1}^{D} |Y_{s,i} − Y′_{s,i}|   (17)
where Ω_train is the set of time stamps used for model training.
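A short sketch of the two training objectives in Eq. (17), summed over forecast time stamps and variables:

```python
import torch

def mse_loss(y_true, y_pred):
    # y_true, y_pred: (num_train_stamps, D); sum over stamps and variables, Eq. (17)
    return ((y_true - y_pred) ** 2).sum()

def mae_loss(y_true, y_pred):
    return (y_true - y_pred).abs().sum()

y_true, y_pred = torch.randn(32, 8), torch.randn(32, 8)
print(mse_loss(y_true, y_pred).item(), mae_loss(y_true, y_pred).item())
```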
5 EXPERIMENTS
In this section, we perform extensive experiments comparing MVSRTN with seven baselines on four multivariate time series forecasting benchmark datasets. Moreover, in order to evaluate the effectiveness of the proposed model, we conduct ablation experiments and order experiments.
5.1 DATASETS AND BASELINES
Our experiments are based on four publicly available datasets:

• Traffic 1: This dataset is composed of 48 months of hourly data from the California Department of Transportation during 2015 and 2016. The data describes the road occupancy rates (between 0 and 1) measured by different sensors on San Francisco Bay Area freeways.
• Exchange-Rate: A dataset that consists of the daily exchange rates of eight countries, including Australia, Britain, Canada, Switzerland, China, Japan, New Zealand and Singapore, ranging from 1990 to 2016.
• Solar-Energy 2: This dataset consists of the solar power production records in the year 2006, sampled every 10 minutes from 137 PV plants in Alabama.
• Electricity 3: This dataset from the UCI Machine Learning Repository is composed of the electricity consumption in kWh recorded every 15 minutes from 2012 to 2014 for 321 clients.

1 http://pems.dot.ca.gov
2 http://www.nrel.gov/grid/solar-power-data.html
3 https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
We compare with the following forecasting methods. AR: an auto-regressive model for time series forecasting. GP (Frigola, 2015; Roberts et al., 2012): a Gaussian Process time series model for prediction. LSTNet-skip (Lai et al., 2018): an LSTNet model (a deep neural network that combines convolutional and recurrent neural networks to capture the long-term and short-term patterns of time series) with a skip-RNN layer. LSTNet-Attn (Lai et al., 2018): an LSTNet model with a temporal attention layer. TPA-LSTM (Shih et al., 2019): an attention-recurrent neural network for multivariate time series forecasting. MTGNN (Wu et al., 2020): a graph neural network model for multivariate time series forecasting. MTGNN-sampling (Wu et al., 2020): the MTGNN model trained on a sampled subset of the graph in each iteration.
All methods are evaluated with two metrics: root relative squared error (RSE) and empirical correlation coefficient (CORR). Lower values are better for RSE, while higher values are better for CORR. Details of the datasets and metrics are given in Appendix B.
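A sketch of the two evaluation metrics as defined in Appendix B (Eqs. (18)-(19)):

```python
import torch

def rse(y_true, y_pred):
    # y_true, y_pred: (num_test_stamps, D)
    num = torch.sqrt(((y_true - y_pred) ** 2).sum())
    den = torch.sqrt(((y_true - y_true.mean()) ** 2).sum())
    return num / den

def corr(y_true, y_pred):
    # empirical correlation coefficient, averaged over the D variables
    yt = y_true - y_true.mean(dim=0, keepdim=True)
    yp = y_pred - y_pred.mean(dim=0, keepdim=True)
    num = (yt * yp).sum(dim=0)
    den = torch.sqrt((yt ** 2).sum(dim=0) * (yp ** 2).sum(dim=0))
    return (num / den).mean()

y_true, y_pred = torch.randn(500, 8), torch.randn(500, 8)
print(rse(y_true, y_pred).item(), corr(y_true, y_pred).item())
```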
5.2 TRAINING DETAILS
For all tasks, we implement our model with PyTorch 1.2.0 and train it on an NVIDIA GeForce RTX 2080 Ti GPU. For the Traffic and Electricity datasets, the size of the series history window T is 168; for Solar-Energy it is 144, and for Exchange-Rate it is 120. All weight parameters are initialized with Xavier initialization (Glorot & Bengio, 2010) and normalized by weight normalization (Salimans & Kingma, 2016). For optimization, we use the Adam optimizer (Kingma & Ba, 2014) and an exponentially decaying learning rate with a linear warm-up. The initial learning rate is set between 0.0005 and 0.001 and the batch size is 128. We apply dropout after each layer except the input and output ones, with a rate of 0.1 or 0.2. The 1D CNN component has 50 filters, with a kernel size between 3 and 6. In the tensor network, the bond dimension r is tuned between 50 and 100, and the order of the residual connection is chosen from {1, 2, 4}. For the window sizes l and m in the Skip-Connection Layer, l is 24 for all datasets and m is set to the history window size T of each dataset.
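A sketch of this optimization setup (Adam with a linear warm-up followed by exponential decay); the warm-up length and decay rate are illustrative values, not reported in the paper:

```python
import torch

model = torch.nn.Linear(16, 8)                      # stand-in for MVSRTN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

warmup_steps, gamma = 500, 0.999                    # assumed values
def lr_lambda(step):
    if step < warmup_steps:
        return (step + 1) / warmup_steps            # linear warm-up
    return gamma ** (step - warmup_steps)           # exponential decay

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(5):                               # skeleton of the training loop
    loss = model(torch.randn(128, 16)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```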
5.3 MAIN RESULTS
Table 1 shows the prediction performance of all models on all datasets, measured by RSE and CORR. We set horizon ∈ {6, 24}, which means the horizon is 6 or 24 days for the Exchange-Rate dataset, 6 or 24 hours for the Electricity and Traffic tasks, and 60 or 240 minutes for the Solar-Energy data. Note that the larger the horizon, the harder the prediction task. The best result for each metric on each dataset is highlighted in bold face in the table.
Compared with the MTGNN model (the state-of-the-art method), our proposed model achieves competitive results. In particular, on the Exchange-Rate dataset, MVSRTN outperforms MTGNN on all four (horizon, metric) pairs. On the Solar-Energy data with horizon = 24, MVSRTN reduces RSE by 2.46%. In general, except on the Traffic dataset, our method has a greater advantage when the horizon is 24. The reasons are that the graph modeling approach of MTGNN is better suited to traffic data, and that when the series is harder to predict (horizon = 24), our model benefits from modeling the latent variable space.
5.4 ABLATION EXPERIMENTS
To demonstrate the effectiveness of each model structure on Solar-Energy, Traffic, Electricity and Exchange-Rate data, we compare MVSRTN with 4 variants as follows:
• MVSRTN-nA: We remove the series-level and variable-level self-attention component, so the model lacks global interaction information at the series level and the variable level.
As shown in Figure 4, reported results are the average of 10 runs. For all the datasets, removing the residual tensor networks and removing the skip-connection layer both cause significant performance degradation. In particular, for the CORR metric of the Exchange-Rate dataset, the MVSRTN-nS variant basically loses the ability to accurately forecast, which means the exchange rate data is more sensitive to scale information. Moreover, solar energy and electricity data are more obviously affected by residual tensor network components.
5.5 ORDER ANALYSIS
In order to verify the effect of different orders of the residual tensor network, as shown in Figure 5, we design comparative experiments over orders 1, 2 and 4 on the Solar-Energy, Traffic, Electricity and Exchange-Rate datasets. The experimental results show that the second-order residual tensor network (RTN) achieves the best results on both RSE and CORR for the solar energy and exchange rate data. For the traffic data, the fourth-order RTN outperforms the first-order and second-order variants. In addition, the fourth-order RTN performs similarly to the second-order RTN on the Electricity dataset. This verifies the effectiveness of the proposed higher-order residual connection.
6 CONCLUSION
In this paper, we propose a novel framework (MVSRTN) for fully exploiting the dependencies in the variable space in the task of multivariate time series forecasting. We exploit a tensor network based on the idea of low-rank approximation to model the variable space, and share the tensor components to ensure the translation invariance of the network. Additionally, we propose an N-order residual connection approach and couple it to the space-approximated tensor network to improve the ability to model long-term sequences. Moreover, a series-variable encoder is designed to improve the quality of the variable space, and we leverage the skip-connection layer to propagate information such as scale. In comparison with 7 baselines, we show the effectiveness of the MVSRTN framework, and the experimental results show the benefit of modeling the variable space for multivariate time series forecasting.
A THE COEFFICIENT VALUES OF RESIDUAL TENSOR NETWORK
Table 2 shows the coefficient values a_j and b_j used in Eq. (12) of Section 4.2.
B DATASETS AND METRICS
In Table 3, we summarize statistics of benchmark datasets. Moreover, two conventional evaluation metrics exploited by us are defined as follows:
Root Relative Squared Error (RSE):
RSE = \sqrt{\sum_{s ∈ Ω_test} (Y_{s,i} − Y′_{s,i})^2} \Big/ \sqrt{\sum_{s ∈ Ω_test} (Y_{s,i} − mean(Y))^2}   (18)
Empirical Correlation Coefficient (CORR):
CORR = \frac{1}{n} \sum_{i=1}^{n} \frac{\sum_{s} (Y_{s,i} − mean(Y_i)) (Y′_{s,i} − mean(Y′_i))}{\sqrt{\sum_{s} (Y_{s,i} − mean(Y_i))^2 (Y′_{s,i} − mean(Y′_i))^2}}   (19)
where Ω_test is the set of time stamps used for model testing; lower values are better for RSE, while higher values are better for CORR.

1. What is the main contribution of the paper in terms of deep learning architecture for time series forecasting?
2. What are the strengths and weaknesses of the proposed model, particularly regarding its novel component, the residual connection in the tensor network?
3. Do you have any concerns regarding the writing style and explanations provided in the paper?
4. How does the reviewer assess the significance of the proposed methodology and its theoretical understanding?
5. What are the limitations of the paper's experiments and evaluations?
Summary Of The Paper
The paper proposes a deep learning architecture for time series forecasting called MVSRTN. The model is composed of 3 blocks with skip-connections in-between them: a 1) "Series-Variable Encoder", 2) an "N-Order Residual Tensor Network", as the authors call it, and 3) the output layer. Out of these, 1) is essentially a 1D CNN combined with (causal and non-causal) self-attention, and 2) seems to be a tensor network built from taking the tensor product of the sequence entries, and then contracting that by a TT-rank-constrained weight tensor. The computation of the tensor network output is formulated as a recursion across time-steps, and motivated by residual networks, an identity mapping with respect to the hidden state at the previous time-stamp is added to the recursive formulation. Then, by the analogy with higher-order solvers for ODEs, a "higher-order residual connection" is introduced. The model is applied to four time series forecasting datasets, where some improvements are achieved, and an ablation study is included to show how the various model blocks contribute to the performance.
Review
Overall, the paper is quite hard to interpret due to its writing style, such as sentence structure, confusing terminology, and convoluted explanations. Most of the model seems to be a straightforward combination of existing building blocks, such as CNN, self-attention, and tensor networks. On the other hand, the proposed novel component, the residual connection in the tensor network, does not seem to be theoretically understood.
Furthermore, the explanation given in Section 4.2 seems to be incorrect. First of all, the tensor in eq.(6) is already a CP rank-1 tensor, and hence, I'm not sure why they would want to approximate this by a TT low-rank tensor, as they say in the explanation between equations (6) and (8). In particular, CP rank-1 implies TT rank-1, so there is nothing to approximate here really. It seems to me what they are actually doing is they are taking a TT-rank-constrained weight tensor, and using that to contract the rank-1 feature tensor from eq.(6), which is a completely different thing than what they claim they do. I was only able to decode that this is what they are actually doing from eq.(10), since the given explanation suggests something else, which makes me question whether the authors properly understand what they are doing. By the line above eq.(10), it seems that this weight tensor is also constrained to be symmetric in the sense that that all of its TT-cores are equal, under which there exist no approximation guarantees for TT-decompositions anyway as far as I know. With this in mind, Claim 1 is completely meaningless regarding the proposed methodology, and its contents seem to be tautological either way.
The residual approach in eq.(11) is given by a straightforward modification of eq.(10), but there is no discussion given about how this translates back to the tensor network structure, or what this means from a theoretical point of view. The higher-order residual connection in eq.(12) is not properly explained, and it is unclear what the coefficients α_j and β_j are supposed to stand for (or the details of how they were acquired). Overall, only little discussion is given about the novel idea in the paper, which is combining the recursive formulation of the tensor network with (higher-order) residual connections.
The strength of the paper for me is that the experiments look fairly promising, where it is shown that some improvements can be achieved over some seemingly popular baselines (such as TPA-LSTM and MTGNN) on the task of time series forecasting, which is arguably an important problem. The ablation study is also relatively interesting, which shows how the various model components contribute to performance. However, especially with highly empirical papers such as this, in my opinion the evaluation would have to be more detailed and it could include more than this one task, or at least more datasets should be included for this one. |
1. What is the main contribution of the paper regarding multivariate time series modeling?
2. What are the strengths of the proposed framework, particularly in combining self-attention encoder and TNs?
3. Do you have any concerns or questions regarding the notation used in the paper, such as in Section 4.3?
4. How does the proposed N-order residual connections in TN improve the performance of traditional TNs?
5. Can you explain why skip connection works for TN and how the depth of TN affects the performance?
6. How does the paper's contribution differ from other works concerning TNs for time series and residual TNs, such as [1], [2], and [3]?
7. Is Claim 1 still valid when the core tensors \mathcal{W} are equal, which is a uniform MPS? If not, is Claim 1 necessary for the paper?
8. What is the significance of the ablation study conducted in Section 5.4 and 5.5?
Summary Of The Paper
This paper proposed the MVSRTN architecture for multivariate time series modeling. The MVSRTN consists of an encoder to extract latent variables and residual tensor network (TN) blocks to capture the interactions in the latent space.
The main contribution of the model is the TN block part. In particular, the authors used tensor-products to fully represent the latent variable space (Eq 2). Then, they proposed an N-order residual TN block to alleviate potential gradient problems of high-order TNs in long-term time series. They conducted experiments on four multivariate time series data for prediction. Moreover, an ablation study shows the effectiveness of the proposed residual TN blocks.
Combining ResNet and TN is an interesting direction. However, I think this paper did not present this problem sufficiently and some notations are confusing.
Review
Strength
- The proposed framework combines a self-attention encoder and TNs to fully capture the complex correlations. The expressive power of TNs has potential in modeling complex data.
- The proposed N-order residual connections in TN improve the performance of traditional TNs.
- The ablation study shows the effectiveness of using TN blocks and skip connections in TNs (Section 5.4 and 5.5).
Weakness
-There are some works concerning TNs for time series and residual TNs. For example, [1] constructed variable spaces using tensor-products. [2] used tensor-products to capture long-term dependencies. [3] discussed residual MPS. The authors did not mention them in the related works.
[1] Toth, C., Bonnier, P., & Oberhauser, H. (2020). Seq2Tens: An efficient representation of sequences by low-rank tensor projections. arXiv preprint arXiv:2006.07027.
[2] Yu, R., Zheng, S., Anandkumar, A., & Yue, Y. (2017). Long-term forecasting using tensor-train RNNs. arXiv.
[3] Meng, Y. M., Zhang, J., Zhang, P., Gao, C., & Ran, S. J. (2020). Residual Matrix Product State for Machine Learning. arXiv preprint arXiv:2012.11841.
-In page 6, the authors claimed that TN suffers from gradient problems, so they proposed the N-order residual TN to address this issue. However, ResNet was proposed to solve the degradation problem. Moreover, the architecture of DNN is quite different from TN. So I think the authors should provide more discussions about why skip connection works for TN and experiments about how the depth of TN affects the performance.
- Claim 1 seems to be trivial. Most tensor decompositions without constraints satisfy Eq 8. Moreover, in Eq 10, the authors assume the core tensors \mathcal{W} are equal, which is a uniform MPS. Does Claim 1 still work for this case? If not, Claim 1 seems redundant for this paper. The notation is a little bit confusing, especially in Section 4.3. Besides, \mathcal{T} in Eq 6 represents the variable space of x, and \mathcal{A} should be the coefficients according to Eq 10. Why does \mathcal{A} equal to \mathcal{T}?
ICLR | Title
Modeling Variable Space with Residual Tensor Networks for Multivariate Time Series
Abstract
Multivariate time series involve a series of valuable applications in the real world, and the basic premise of which is that multiple variables are interdependent. However, the relationship between variables in the latent space is dynamic and complex, and as the time window increases, the size of the space also increases exponentially. For fully exploiting the dependencies in the variable space, we propose Modeling Variable Space with Residual Tensor Networks (MVSRTN) for multivariate time series. In this framework, we derive the mathematical representation of the variable space, and then use a tensor network based on the idea of low-rank approximation to model the variable space. The tensor components are shared to ensure the translation invariance of the network. In order to improve the ability to model long-term sequences, we propose an N-order residual connection approach and couple it to the space-approximated tensor network. Moreover, the seriesvariable encoder is designed to improve the quality of the variable space, and we use the skip-connection layer to achieve the dissemination of information such as scale. Experimental results verify the effectiveness of our proposed method on four multivariate time series forecasting benchmark datasets.
1 INTRODUCTION
Multivariate time series are ubiquitous in the real world, such as energy collection and consumption (Rangapuram et al., 2018), traffic occupancy (Lai et al., 2018), financial market trends (Shih et al., 2019), and aerospace data (Zhao et al., 2020). They mainly consist of sensor data sampled on multiple variables at different timestamps. In order to accurately forecast future time series, it is necessary to learn the laws and the dynamic interaction information between variables that exist in the series history (Wu et al., 2020; Xu et al., 2020; Cheng et al., 2020).
For illustrating the dependencies in the multivariate time series, we focus on examples through case study. In Figure 1(a), a certain variable itself has periodic laws, and the dependency relationship in this respect can be well captured by the short-term and long-term model of the modeling sequence. As shown in Figure 1(b), there are dynamic dependency relations between the variables. The “arrow” indicates the trend of data changes. In 2011, the “Switzerland” variable changed significantly earlier than the other in 2013, the trends of the three variables occurred almost simultaneously.
In the literature, there have been some approaches to capture interactions in the series history from different perspectives. Research utilizing traditional deep learning components such as CNNs, RNNs, and attention mechanisms is more inclined to learn long-term and short-term sequence states, and fails to make full use of the entanglement between variables in the latent space (Lai et al., 2018; Shih et al., 2019; Cheng et al., 2020). New mathematical tools such as graph neural networks and tensor calculation methods have inspired research on the spatial relations in multivariate time series. Wu et al. (2020) use a graph neural network to extract one-way relationships between variable pairs in an end-to-end framework. TLASM (Xu et al., 2020) adopts tensor calculation to model the trend in multivariate time series, and combines the core tensor components into the LSTM.
Nevertheless, most of the above-mentioned schemes for explicitly modeling spatio-temporal dependencies tend to build low-dimensional variable spaces, but this is still incomplete for the variable spaces shown in Figure 1(c). In other words, in the high-dimensional variable space evolving over time, the ideal state for multivariate time series is that the model can capture the effective relationships between variables as fully as possible. To address the above challenges, we first define
the high-dimensional variable space mathematically through the tensor product operator. Then, the proposed MVSRTN model models the variable space of multivariate time series.
In the proposed scheme, the N-Order Residual Tensor Network is the core component. The tensor network is based on the idea of low-rank approximation, which refers to decomposing and truncating the extremely large dependent space through tensor decomposition methods (Ma et al., 2019; Zhang et al., 2019) within a certain error. For modeling the exponential variable space, we first adopt the Tensor-Train (Oseledets, 2011) algorithm, which satisfies the demands of processing different timestamps, to decompose the variable space representation, and then obtain the space-approximated tensor network by tensor contraction. However, for long-term sequences, the continuous multiplication in the tensor contraction calculation would result in a series of gradient problems. Inspired by ordinary differential equations, we extend and introduce an N-order Euler discretization method, whose first-order form can be thought of as a residual connection. Finally, the space-approximated tensor network and the proposed N-order residual connection are coupled to model the effective interdependence in the variable space.
In addition, the Series-Variable Encoder aims at improving the quality of the variable space by reintegrating the time series. In this component, the combination of a 1D CNN and self-attention mechanisms at the series level and variable level effectively enriches the dependency information mapped to the latent space. To make MVSRTN more robust, we use the linear Skip-Connection Layer to directly propagate the input-scale and coding-scale information to the Output Layer. Our contributions are three-fold:
• We represent the high-dimensional variable space, and through theoretical analysis, we demonstrate the space-approximated tensor network has the ability to model the latent variable space.
• We propose the MVSRTN. In this architecture, a designed encoder combines long-term and short-term interactive information to optimize the variable space, a tensor network coupled by N-order residual connection captures the dynamically changing dependencies, and an additional layer propagates the scale features.
• We perform extensive experiments on four benchmark datasets and the results demonstrate the effectiveness of MVSRTN in evaluation metrics.
2 RELATED WORK
Multivariate Time Series In the real world, a large number of multivariate time series tasks are derived from the fields of finance, economy, and transportation. The purpose of this type of task is to capture or predict the trend of a set of variables that change dynamically over time. In the research history of multivariate time series, researchers have reached a basic consensus that multiple variables are interdependent. After a period of development, statistical machine learning models such as Vector Auto-Regression (Box et al., 2015) and Gaussian Processes (Frigola, 2015) can linearly model multivariate time series data, but the complexity of these models expands dramatically with the increase of time or variables. In recent years, with the rise of deep learning, neural networks implicitly capture the dependence of data in a non-linear mode. LSTNet (Lai et al., 2018) uses a CNN and an RNN to extract the short-term local dependency patterns of the time series, and extracts the long-term patterns with a recurrent-skip layer. Based on the attention mechanism and the LSTNet model, the TPA-LSTM (Shih et al., 2019) model can capture long-term dependencies across multiple time steps. However, algorithms based on neural networks only explicitly model the interaction in the time dimension, and fail to fully analyze the relationships in the latent variable space. The MTGNN (Wu et al., 2020) model based on graph neural network technology explicitly extracts the relationships between variables and achieves state-of-the-art results. However, only the one-way relationship between variable pairs is processed, which is not enough for the dynamic and complex variable space. Therefore, we consider using the idea of approximation to model the variable space.
Tensor Network for Dependent Space The tensor network is a powerful mathematical modeling framework originated from many-body physics, which learns the effective associations in the exponential Hilbert space through the idea of low-rank approximation (Stoudenmire & Schwab, 2016). Based on the training approach of deep learning, Tensor networks just need to approximate the dependent space by training the decomposed parameter space. In the field of natural language, the TSLM (Zhang et al., 2019) model proves that the theoretical model obtained by tensor decomposition of the high-dimensional semantic space has a stronger ability to capture semantic dependence than the RNNs network. The u-MPS (Zauner-Stauber et al., 2018) tensor network based on Tensor-Train decomposition (Oseledets, 2011) is also effective for probabilistic sequence modeling. Tensor networks have natural advantages in the field of modeling high-dimensional dependent spaces (Miller et al., 2021). However, due to the gradient problem, the existing tensor network modeling schemes are still difficult to deal with real-world sequence data with long periods.
Euler Discretization and Residual Connection The residual connection method can alleviate the vanishing-gradient problem of deep networks (He et al., 2016). When the gradient is saturated, it can maintain the stability of the network according to the principle of identity mapping. Based on the theory of differential dynamics, the residual connection can be considered as a discretization application of a first-order Euler solver (Chen et al., 2018). Higher-order solvers (for example, the second-order Runge-Kutta) reduce the approximation error caused by the identity mapping (Zhu et al., 2019; Li et al., 2021), and also have a better ability to process continuously sampled data (Demeester, 2020).
3 PROBLEM STATEMENT
In this paper, we denote the multivariate time series task as follows. Formally, we are given the time series signal of $T$ points of time $X = \{y_1, y_2, \cdots, y_T\}^\top$, where $y_t = \{v_{t,1}, v_{t,2}, \cdots, v_{t,D}\}^\top \in \mathbb{R}^D$, $T$ is the number of time steps and $D$ is the number of variables. We leverage $X \in \mathbb{R}^{T \times D}$ as the time series history to forecast the series of the next time stamp $y_{T+h}$, where $h \geq 0$ is the desirable horizon ahead of the current time stamp. It is worth mentioning that the range of the prediction task $h$ varies on different tasks.
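For concreteness, the following minimal NumPy sketch (illustrative only; the helper name and shapes are ours, not taken from the paper) shows how history windows and horizon-$h$ targets can be built from a raw $T \times D$ series:

```python
import numpy as np

def make_windows(series: np.ndarray, window: int, horizon: int):
    """Build (history, target) pairs from a T x D series.

    Each sample pairs `window` consecutive time steps with the single
    observation `horizon` steps after the end of the window.
    """
    T, D = series.shape
    xs, ys = [], []
    for start in range(T - window - horizon + 1):
        xs.append(series[start:start + window])          # (window, D)
        ys.append(series[start + window + horizon - 1])  # (D,)
    return np.stack(xs), np.stack(ys)

# toy usage: 500 steps of an 8-variable series, window 120, horizon 24
series = np.random.randn(500, 8)
X_hist, Y_next = make_windows(series, window=120, horizon=24)
print(X_hist.shape, Y_next.shape)  # (357, 120, 8) (357, 8)
```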
The main purpose of this paper is to model the effective variable dependency in the latent space. Hence, we formulate the variable space as shown in Definition 1.
Definition 1. The tensor product operator is adopted to fully interact with the variable series $y_t$ indexed by different timestamps. Taking $y_1$ and $y_2$ as an example, the interactive function is:
$$\mathcal{T}_{y_1,y_2} = y_1 \otimes y_2 = \begin{pmatrix} v_{1,1}v_{2,1} & \cdots & v_{1,D}v_{2,1} \\ \vdots & \ddots & \vdots \\ v_{1,1}v_{2,D} & \cdots & v_{1,D}v_{2,D} \end{pmatrix} \quad (1)$$
where $\mathcal{T}_{y_1,y_2} \in \mathbb{R}^{D \times D}$, and we define the whole variable space $\mathcal{T} \in \mathbb{R}^{D^T}$ as:
$$\mathcal{T}_{y_1,\cdots,y_T} = y_1 \otimes y_2 \otimes \cdots \otimes y_T \quad (2)$$
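For intuition only, the interaction tensor of Eqs. (1)-(2) can be materialised explicitly for a toy case in PyTorch; the $D^T$ entries are exactly why the model later resorts to a low-rank approximation instead of ever forming this tensor:

```python
import torch

D, T = 4, 3
ys = [torch.randn(D) for _ in range(T)]   # y_1, ..., y_T

# T_{y_1,...,y_T} = y_1 (outer) y_2 (outer) ... (outer) y_T
space = ys[0]
for y in ys[1:]:
    space = torch.einsum('...i,j->...ij', space, y)

print(space.shape)    # torch.Size([4, 4, 4])
print(space.numel())  # 64 = D ** T entries, exponential in T
```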
In the model architecture, we will detail how to model the proposed variable space.
4 MODEL ARCHITECTURE
In this section, we introduce the proposed MVSRTN architecture as shown in Figure 2. The Series-Variable Encoder is utilized to optimize the original series data. The N-Order Residual Tensor Network approximates the variable space, captures the dynamic dependence between variables over time, and ensures the ability to model long-term sequences. We leverage the Skip-Connection Layer components to memorize coding and scale information. Finally, the loss function exploited by MVSRTN is described.
4.1 SERIES-VARIABLE ENCODER
The Series-Variable Encoder is the first part of MVSRTN. In this component, a 1D CNN layer is first exploited to extract short-term patterns in the time dimension as well as local dependencies between variables:
$$c_t = \mathrm{GeLU}(W_C * y_t + b_C) \quad (3)$$
where $*$ represents the convolution operation, $W_C \in \mathbb{R}^{kD \times d}$ and $b_C \in \mathbb{R}^d$ are weight parameters, $d$ is the number of filters, $k$ is the kernel size, and GeLU is an activation function (Hendrycks & Gimpel, 2016). After the above operations and zero-padding, the new sequence representation $C = \{c_1, \cdots, c_T\}^\top \in \mathbb{R}^{T \times d}$ can be obtained. In addition, the proposed model obtains a representation enriched with global interaction information via self-attention at the series level and the variable level, denoted as $G \in \mathbb{R}^{T \times d}$:
$$G_S = \mathrm{Softmax}\Big(\mathrm{Tril}\Big(\frac{Q_S K_S^\top}{\sqrt{d}}\Big)\Big) V_S, \qquad G_V = \mathrm{Softmax}\Big(\frac{Q_V K_V^\top}{\sqrt{T}}\Big) V_V, \qquad G = \mathrm{ReLU}(\mathrm{Concat}(G_S; G_V^\top) W_G + b_G) \quad (4)$$
where $G_S$ is the output of series-level self-attention, $Q_S = C W_S^Q$, $K_S = C W_S^K$, $V_S = C W_S^V$, and $W_S^Q, W_S^K, W_S^V \in \mathbb{R}^{d \times d}$ are weight parameters. $G_V$ comes from variable-level self-attention, with $Q_V = C^\top W_V^Q$, $K_V = C^\top W_V^K$, $V_V = C^\top W_V^V$, and $W_V^Q, W_V^K, W_V^V \in \mathbb{R}^{T \times T}$. Moreover, ReLU is an activation function (Nair & Hinton, 2010), Softmax denotes normalization, Tril refers to the lower triangular matrix, and $W_G \in \mathbb{R}^{2d \times d}$. A gate structure is leveraged to fuse the information:
$$X' = \mathrm{Sigmoid}(\alpha) \times C + \mathrm{Sigmoid}(\beta) \times G \quad (5)$$
where $\alpha$ and $\beta$ are scale parameters, and Sigmoid is a function (Leshno et al., 1993) commonly used as a gate. Finally, we obtain the encoded high-quality multivariate time series, and then map it to the latent variable space.
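A condensed PyTorch sketch of the Series-Variable Encoder described by Eqs. (3)-(5) is given below. It is a best-effort reading of the equations: the layer sizes, masking details and module names are our assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeriesVariableEncoder(nn.Module):
    """1D CNN + series-level and variable-level self-attention + gate (Eqs. 3-5)."""

    def __init__(self, T: int, D: int, d: int = 50, kernel: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(D, d, kernel_size=kernel, padding=kernel // 2)
        self.qs = nn.Linear(d, d, bias=False)
        self.ks = nn.Linear(d, d, bias=False)
        self.vs = nn.Linear(d, d, bias=False)
        self.qv = nn.Linear(T, T, bias=False)
        self.kv = nn.Linear(T, T, bias=False)
        self.vv = nn.Linear(T, T, bias=False)
        self.fuse = nn.Linear(2 * d, d)
        self.alpha = nn.Parameter(torch.zeros(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, y):                                              # y: (batch, T, D)
        c = F.gelu(self.conv(y.transpose(1, 2)).transpose(1, 2))       # (batch, T, d)

        # series-level attention with a lower-triangular (Tril) mask
        scores = self.qs(c) @ self.ks(c).transpose(-1, -2) / c.shape[-1] ** 0.5
        mask = torch.tril(torch.ones_like(scores)).bool()
        gs = torch.softmax(scores.masked_fill(~mask, float('-inf')), dim=-1) @ self.vs(c)

        # variable-level attention over the transposed representation
        ct = c.transpose(1, 2)                                          # (batch, d, T)
        scores_v = self.qv(ct) @ self.kv(ct).transpose(-1, -2) / ct.shape[-1] ** 0.5
        gv = torch.softmax(scores_v, dim=-1) @ self.vv(ct)              # (batch, d, T)

        g = F.relu(self.fuse(torch.cat([gs, gv.transpose(1, 2)], dim=-1)))
        # gate structure of Eq. (5)
        return torch.sigmoid(self.alpha) * c + torch.sigmoid(self.beta) * g
```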
4.2 N-ORDER RESIDUAL TENSOR NETWORK
In this section, we first give a theoretical analysis of the approximate variable space of the tensor network, and secondly, introduce the algorithm flow of the component.
In the theoretical analysis part, we build the encoded series $X' = \{x_1, \cdots, x_T\}^\top \in \mathbb{R}^{T \times d}$ into a variable space. Compared to Definition 1, based on the time series history $X'$, we represent the final variable space as follows:
$$\mathcal{T}_{x_1,\cdots,x_T} = x_1 \otimes x_2 \otimes \cdots \otimes x_T \quad (6)$$
In order to learn the effective interactive information in the variable space $\mathcal{T}$, we decompose its representation into a tensor network $\mathcal{A}$ that is contracted from $T$ parameter tensors $\mathcal{W}^{(t)} \in \mathbb{R}^{r_{t-1} \times r_t \times d}$:
$$\mathcal{A} = \mathcal{W}^{(1)} \bullet \cdots \bullet \mathcal{W}^{(T)} = \sum_{h_1=1}^{r_1} \sum_{h_2=1}^{r_2} \cdots \sum_{h_{T-1}=1}^{r_{T-1}} \mathcal{W}^{h_0 h_1}_{x_1} \mathcal{W}^{h_1 h_2}_{x_2} \cdots \mathcal{W}^{h_{T-1} h_T}_{x_T} \quad (7)$$
where $x_t \in \mathbb{R}^d$ and $h_t \in \mathbb{R}^{r_t}$ refer to the indexes of $\mathcal{W}^{(t)}$, and the dimension of the index $h_t$ is the truncated rank of the tensor. This set of values $[r_t]$ determines the degree to which $\mathcal{A}$ approximates the variable space $\mathcal{T}$. $\bullet$ is tensor contraction, which can be understood as the dot product between tensors. As shown in Eq.(7), tensor contraction is essentially a summation over indexes with the same dimension, eliminating that index.
It is worth noting that $\mathcal{A}$ and $\mathcal{T}$ are equal within the allowed error. The above theory is demonstrated in Claim 1, which illustrates that the decomposed parameter space $\mathcal{A}$ has the ability to model the variable space. Claim 1. The tensor network $\mathcal{A}$ is obtained by tensor-train decomposition of the variable space $\mathcal{T}$. When the rank of the parameter tensors in Eq.(7) is not truncated, $\mathcal{A} = \mathcal{T}$, and their complexity is equal to $O(d^T)$; when the following inequality is satisfied for the prescribed accuracy $\epsilon$, $\mathcal{A}$ can be considered as the optimal approximation of the space $\mathcal{T}$.
$$\|\mathcal{T} - \mathcal{A}\|_F \leq \epsilon \|\mathcal{T}\|_F \quad (8)$$
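To make the low-rank approximation idea concrete, the following NumPy sketch performs a TT-style sequential-SVD decomposition of a small tensor and reports the relative error playing the role of $\epsilon$ in Eq. (8). It is illustrative only; the model itself learns the cores by gradient descent rather than by SVD:

```python
import numpy as np

def tt_decompose(tensor: np.ndarray, max_rank: int):
    """Sequential-SVD tensor-train decomposition with rank truncation (TT-SVD)."""
    dims = tensor.shape
    cores, r_prev, mat = [], 1, tensor
    for k in range(len(dims) - 1):
        mat = mat.reshape(r_prev * dims[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))     # core W^(k)
        mat = np.diag(s[:r]) @ vt[:r]
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))              # last core
    return cores

def tt_reconstruct(cores):
    out = cores[0]                                              # (1, d_1, r_1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(out.ndim - 1, 0))   # contract the bond index
    return out.squeeze(axis=0).squeeze(axis=-1)

rng = np.random.default_rng(0)
full = rng.standard_normal((4, 4, 4, 4))                        # a small "variable space"
approx = tt_reconstruct(tt_decompose(full, max_rank=3))
rel_err = np.linalg.norm(full - approx) / np.linalg.norm(full)
print(rel_err)                                                  # plays the role of eps in Eq. (8)
```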
The above is the theoretical discussion of modeling variable space through the idea of low-rank approximation. From a practical point of view, updating the tensor network through automatic differentiation is an effective way to reduce its loss with the variable space. Next, we give the specific modeling process of the N-Order Residual Tensor Network component.
To ensure the probability normalization of the tensor network input (Stoudenmire & Schwab, 2016), L2 normalization is first performed on the sequence vector $x_t$ in $X'$:
$$\phi(x_t) = \mathrm{L2}(x_t) \quad (9)$$
As shown in Figure 3(a), the normalized series $\phi(x_t)$ is contracted with the network to forecast the future distribution of variables. To avoid confusion between the positions of different timestamps, the network is modeled step by step in the direction of time, and the parameter tensors are shared (Zauner-Stauber et al., 2018), namely $\mathcal{W}^{(1)} = \cdots = \mathcal{W}^{(T)} = \mathcal{W} \in \mathbb{R}^{r \times r \times d}$:
$$h_t = \mathcal{A}(\phi(x_t), h_{t-1}; \mathcal{W}) = h_{t-1} \bullet \mathcal{W} \bullet \phi(x_t), \quad t \in [1, \cdots, T] \quad (10)$$
where $h_t \in \mathbb{R}^r$, $h_0$ refers to the all-ones vector, $\bullet$ refers to tensor contraction, and $r$ is set as the hyperparameter bond-dimension (which is adopted to control the tensor network's ability to approximate the variable space; the upper limit is $d^{T/2}$).
For long-term sequences, the space-approximated tensor network alone will result in gradient problems. In response to this issue, we propose the N-order residual connection approach. First, compared with the tensor network structure mentioned above, we add a residual term when modeling the series information of the current time stamp (as shown in Figure 3(b)):
$$h_t = \mathcal{H}(\phi(x_t), h_{t-1}; \mathcal{W}) = h_{t-1} + \mathcal{A}(\phi(x_t), h_{t-1}; \mathcal{W}), \quad t \in [1, \ldots, T] \quad (11)$$
Inspired by the numerical solution of ordinary differential equations, whose high-order solvers have lower truncation error and higher convergence stability, we propose a tensor network combined with the N-order residual connection, as shown in Figure 3(c):
$$h_t = \mathcal{H}(\phi(x_t), h_{t-1}; \mathcal{W}) = h_{t-1} + \sum_{j=1}^{N} a_j z_j, \quad t \in [1, \ldots, T] \quad (12)$$
$$z_j = \begin{cases} \mathcal{A}(\phi(x_t), h_{t-1}; \mathcal{W}) & j = 1 \\ \mathcal{A}(\phi(x_t) + b_j, h_{t-1} + b_j z_{j-1}; \mathcal{W}) & 2 \leq j \leq N \end{cases}$$
where $a_j$ and $b_j$ are fixed coefficients. Please see Appendix A for the coefficient values of the different-order residual connections, calculated via Taylor's formula.
$$x^U = h_T W_U + b_U \quad (13)$$
Finally, a linear component maps $h_T \in \mathbb{R}^r$ to $x^U \in \mathbb{R}^D$, where $W_U \in \mathbb{R}^{r \times D}$ and $b_U$ are parameter weights.
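The component can be summarised in a short PyTorch sketch of Eqs. (9)-(13). The coefficients a_j and b_j below are placeholders (the paper's actual Taylor-derived values are in Table 2), and the initialisation and argument ordering are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualTensorNetwork(nn.Module):
    """Uniform MPS with an N-order residual update (Eqs. 10-12), sketched."""

    def __init__(self, d: int, r: int, out_dim: int, order: int = 2):
        super().__init__()
        self.W = nn.Parameter(torch.randn(r, d, r) * 0.01)   # shared core: bond x feature x bond
        self.out = nn.Linear(r, out_dim)                     # Eq. (13)
        self.order = order
        self.a = [1.0 / order] * order                       # placeholder coefficients (see Table 2)
        self.b = [1.0] * order                               # placeholder coefficients (see Table 2)

    def contract(self, h, x):
        # A(phi(x), h; W) = h . W . phi(x), one step of tensor contraction
        return torch.einsum('br,rdq,bd->bq', h, self.W, x)

    def forward(self, x_seq):                                # x_seq: (batch, T, d)
        phi = F.normalize(x_seq, p=2, dim=-1)                # Eq. (9): L2-normalise each step
        batch, T, _ = phi.shape
        h = torch.ones(batch, self.W.shape[0], device=phi.device)   # h_0: all-ones vector
        for t in range(T):
            z = self.contract(h, phi[:, t])
            update = self.a[0] * z
            for j in range(1, self.order):
                z = self.contract(h + self.b[j] * z, phi[:, t] + self.b[j])
                update = update + self.a[j] * z
            h = h + update                                   # residual (Euler-style) step, Eq. (12)
        return self.out(h)
```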
4.3 SKIP CONNECTION LAYER
The layer is added after the input layer and the encoder layer respectively, and is directly connected to the Output Layer. For multivariate time series, the numerous nonlinear structures in the model make the output scale insensitive to the input scale (Lai et al., 2018), resulting in a significant reduction in the prediction accuracy of the model. In addition, in order to memorize the encoder states, we also adopt the layer to directly propagate the encoding information to the output.
$$x^L = L W_L + b_L \quad (14)$$
where the known input data is $X = \{y_1, y_2, \cdots, y_T\}^\top \in \mathbb{R}^{T \times D}$, the selected series is $L = \{y_{T-l+1}, \cdots, y_T\} \in \mathbb{R}^{D \times l}$, and $W_L \in \mathbb{R}^{l \times 1}$ and $b_L \in \mathbb{R}^1$ are weight parameters.
$$x^M = (M W_M + b_M)^\top W_O + b_O \quad (15)$$
where the selected series is $M = \{x_{T-m+1}, \cdots, x_T\} \in \mathbb{R}^{d \times m}$, the encoder series is $X' = \{x_1, x_2, \cdots, x_T\}^\top \in \mathbb{R}^{T \times d}$, and the weight parameters are $W_M \in \mathbb{R}^{m \times 1}$, $b_M \in \mathbb{R}^1$, $W_O \in \mathbb{R}^{d \times D}$ and $b_O \in \mathbb{R}^D$.
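A minimal PyTorch sketch of the two skip connections (Eqs. (14)-(15)) follows; the transpose used to align $M W_M$ with $W_O$ is our reading of the intended shapes:

```python
import torch
import torch.nn as nn

class SkipConnectionLayer(nn.Module):
    """Linear skip connections from the raw input and the encoder output (Eqs. 14-15)."""

    def __init__(self, d: int, D: int, l: int, m: int):
        super().__init__()
        self.w_l = nn.Linear(l, 1)     # W_L in R^{l x 1}
        self.w_m = nn.Linear(m, 1)     # W_M in R^{m x 1}
        self.w_o = nn.Linear(d, D)     # W_O in R^{d x D}
        self.l, self.m = l, m

    def forward(self, y, x_enc):
        # y: (batch, T, D) raw input,  x_enc: (batch, T, d) encoder output
        L = y[:, -self.l:].transpose(1, 2)         # (batch, D, l)
        x_l = self.w_l(L).squeeze(-1)              # (batch, D), Eq. (14)
        M = x_enc[:, -self.m:].transpose(1, 2)     # (batch, d, m)
        x_m = self.w_o(self.w_m(M).squeeze(-1))    # (batch, D), Eq. (15)
        return x_l, x_m
```

The final prediction of Eq. (16) then simply adds x^U, x^L and x^M.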
4.4 OUTPUT LAYER AND LOSS FUNCTION
Finally, the prediction result at time stamp $s = T + h$ of the network is $Y'_s \in \mathbb{R}^D$:
$$Y'_s = x^U + x^L + x^M \quad (16)$$
Moreover, we utilize the validation set to decide which of the two loss functions is better for our model. We leverage Mean Absolute Error (MAE, also denoted as the L1 loss function) and Mean Squared Error (MSE, also denoted as the L2 loss function) to update the network parameters. MSE is simple to calculate and is the default loss function of many prediction tasks, while MAE has better robustness to outliers.
$$\mathrm{MSE} = \sum_{s \in \Omega_{train}} \sum_{i=1}^{D} (Y_{s,i} - Y'_{s,i})^2, \qquad \mathrm{MAE} = \sum_{s \in \Omega_{train}} \sum_{i=1}^{D} |Y_{s,i} - Y'_{s,i}| \quad (17)$$
where Ωtrain is a set of time stamps for model training.
5 EXPERIMENTS
In this section, we perform extensive experiments comparing MVSRTN with seven baselines on four multivariate time series forecasting benchmark datasets. Moreover, in order to evaluate the effectiveness of the proposed model, we conduct ablation experiments and order experiments.
5.1 DATASETS AND BASELINES
Our experiments are based on four datasets which are publicly available:
• Traffic 1: This dataset is composed of 48 months of hourly data from the California Department of Transportation during 2015 and 2016. The data describes the road occupancy rates (between 0 and 1) measured by different sensors on San Francisco Bay area freeways.
• Exchange-Rate: A dataset that consists of the daily exchange rates of eight countries, including Australia, Britain, Canada, Switzerland, China, Japan, New Zealand and Singapore, ranging from 1990 to 2016.
• Solar-Energy 2: This dataset consists of the solar power production records in the year of 2006 and these records are sampled every 10 minutes from 137 PV plants in Alabama State.
• Electricity 3: This dataset from the UCI Machine Learning Repository is composed of the electricity consumption in kWh which was recorded every 15 minutes from 2012 to 2014 for 321 clients.
We compare with a number of forecasting methods. AR: An auto-regressive model for time series forecasting. GP (Frigola, 2015; Roberts et al., 2012): A Gaussian Process time series model for prediction. LSTNet-skip (Lai et al., 2018): A LSTNet model (A deep neural network that combines convolutional neural networks and recurrent neural networks, and could catch the long-term and short-term patterns of time series) with skip-RNN layer. LSTNet-Attn (Lai et al., 2018): A LSTNet model with temporal attention layer. TPA-LSTM (Shih et al., 2019): An attention-recurrent neural network for multivariate time series forecasting. MTGNN (Wu et al., 2020): A graph neural network model for the forecasting of multivariate time series. MTGNN-sampling (Wu et al., 2020): MTGNN model trained on a sampled subset of a graph in each iteration.
All methods are evaluated with two metrics: Root Relative Squared Error (RSE) and Empirical Correlation Coefficient (CORR). A lower value is better for RSE while a higher value is better for CORR. In addition, the details of the datasets and metrics are in Appendix B.
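For reference, a small NumPy sketch of the two metrics as defined in Appendix B (Eqs. (18)-(19)); the array shapes are assumed to be (number of test steps, D):

```python
import numpy as np

def rse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root Relative Squared Error over the test set (lower is better)."""
    num = np.sqrt(np.sum((y_true - y_pred) ** 2))
    den = np.sqrt(np.sum((y_true - y_true.mean()) ** 2))
    return float(num / den)

def corr(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Empirical correlation coefficient averaged over variables (higher is better)."""
    yt = y_true - y_true.mean(axis=0, keepdims=True)
    yp = y_pred - y_pred.mean(axis=0, keepdims=True)
    num = (yt * yp).sum(axis=0)
    den = np.sqrt((yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0))
    return float(np.mean(num / den))
```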
5.2 TRAINING DETAILS
For all tasks, we implement our model with Pytorch-1.20, and train it on a NVIDIA GeForce RTX 2080 Ti GPU. For the Traffic and Electricity datasets, the size of the series history window T is 168; for Solar-Energy it is 144, and for Exchange-Rate it is 120. All the weight parameters are initialized with Xavier (Glorot & Bengio, 2010) and normalized by weight normalization (Salimans &
1http://pems.dot.ca.gov 2http://www.nrel.gov/grid/solar-power-data.html 3https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
Kingma, 2016). As for the learning method, we use the Adam optimizer (Kingma & Ba, 2014) and an exponentially decaying learning rate with a linear warm-up. The initial learning rate is set from 0.0005 to 0.001 and the batch size is 128. We perform dropout after each layer, except the input and output ones, and the rate is usually set to 0.1 or 0.2. The 1D CNN component has 50 filters, and the filter size is set from 3 to 6. In the tensor network, the bond-dimension r is tuned in the range from 50 to 100, and the order of the residual connection is chosen from {1, 2, 4}. For the chosen sizes l and m in the Skip-Connection Layer, l is 24 for all datasets, and m is set to the history window size T for each dataset.
5.3 MAIN RESULTS
Table 1 shows the prediction performance of all the models on all datasets, and the metrics are RSE and CORR. We set horizon = {6, 24}, which means the horizons are set to 6 or 24 days over the Exchange-Rate dataset, to 6 or 24 hours over the Electricity and Traffic tasks, and to 60 or 240 minutes over the Solar-Energy data. It is worth noting that the larger the horizon, the harder the prediction task. The best result for each metric of different datasets is highlighted in bold face in the table.
Compared with the MTGNN model (the state-of-the-art method), our proposed model achieves competitive results. In particular, for the Exchange-Rate dataset, the results of MVSRTN on the four (horizon, metric) pairs all outperform the MTGNN model. For the Solar-Energy data when horizon=24, MVSRTN lowers RSE by 2.46%. In general, apart from the Traffic dataset, our method has a greater advantage when the horizon is 24. The reasons are that the graph modeling method of MTGNN is more suitable for traffic data, and that when the series data is difficult to predict (horizon=24), our model benefits from modeling the variable space.
5.4 ABLATION EXPERIMENTS
To demonstrate the effectiveness of each model structure on Solar-Energy, Traffic, Electricity and Exchange-Rate data, we compare MVSRTN with 4 variants as follows:
• MVSRTN-nA: We remove the self-attention at series and variable level component such that there is a lack of global interactive information at series level and variable level.
As shown in Figure 4, reported results are the average of 10 runs. For all the datasets, removing the residual tensor networks and removing the skip-connection layer both cause significant performance degradation. In particular, for the CORR metric of the Exchange-Rate dataset, the MVSRTN-nS variant basically loses the ability to accurately forecast, which means the exchange rate data is more sensitive to scale information. Moreover, solar energy and electricity data are more obviously affected by residual tensor network components.
5.5 ORDER ANALYSIS
In order to verify the effects of different orders of the residual tensor network, as shown in Figure 5, we design comparative experiments over the orders 1, 2, and 4 on the Solar-Energy, Traffic, Electricity and Exchange-Rate datasets. The experimental results show that the second-order residual tensor network (RTN) achieves the best results on both the RSE and CORR metrics for the solar energy and exchange rate data. For the traffic data, the effect of the fourth-order RTN is better than that of the first-order and second-order ones. In addition, the result of the fourth-order RTN is similar to the second-order RTN on the Electricity dataset. This verifies the effectiveness of the proposed higher-order residual connection.
6 CONCLUSION
In this paper, we propose a novel framework (MVSRTN) for fully exploiting the dependencies in the variable space in the task of multivariate time series forecasting. We exploit a tensor network based on the idea of low-rank approximation to model the variable space, and share the tensor components to ensure the translation invariance of the network. Additionally, we propose an N-order residual connection approach and couple it to the space-approximated tensor network to improve the ability to model long-term sequences. Moreover, a series-variable encoder is designed to improve the quality of the variable space, and we leverage the skip-connection layer to achieve the dissemination of information such as scale. According to the comparison with 7 baselines, we show the efficiency of the MVSRTN framework, and the experimental results show the effect of modeling the variable space for multivariate time series forecasting.
A THE COEFFICIENT VALUES OF RESIDUAL TENSOR NETWORK
Table 2 shows the coefficients used in Eq. (12) of Section 4.2.
B DATASETS AND METRICS
In Table 3, we summarize statistics of benchmark datasets. Moreover, two conventional evaluation metrics exploited by us are defined as follows:
Root Relative Squared Error (RSE):
$$\mathrm{RSE} = \frac{\sqrt{\sum_{s \in \Omega_{test}} (Y_{s,i} - Y'_{s,i})^2}}{\sqrt{\sum_{s \in \Omega_{test}} (Y_{s,i} - \mathrm{mean}(Y))^2}} \quad (18)$$
Empirical Correlation Coefficient (CORR):
$$\mathrm{CORR} = \frac{1}{n} \sum_{i=1}^{n} \frac{\sum_{s} (Y_{s,i} - \mathrm{mean}(Y_i))(Y'_{s,i} - \mathrm{mean}(Y'_i))}{\sqrt{\sum_{s} (Y_{s,i} - \mathrm{mean}(Y_i))^2 (Y'_{s,i} - \mathrm{mean}(Y'_i))^2}} \quad (19)$$
where $\Omega_{test}$ is the set of time stamps for model testing; a lower value is better for RSE while a higher value is better for CORR.

1. What is the main contribution of the paper regarding multivariate time series forecasting?
2. What are the strengths and weaknesses of the proposed approach, particularly in its use of tensor networks?
3. Are there any concerns or questions regarding the paper's explanations, claims, and equations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper considers the problem of forecasting of multivariate time series. The authors propose an architecture for such forecasting that incorporates several layers, including a residual tensor network layer. The idea is to tensorize (via outer products/Kronecker products) the features passed on from the encoder layer and then handle these very large tensorized features with a tensor network. The point of the tensorization is to better incorporate the effect of combinations of variables from different time steps. Experiments are done on four benchmark datasets.
Review
Strengths
S1. The problem itself is interesting and it's easy to see why this is a relevant problem.
S2. Tensorizing features and then using tensor networks to work with those features is an interesting approach (although it's unclear how this is done; see below).
Weaknesses
W1. It is unclear how the tensor network is used. Is it used to approximate the tensor $\mathcal{T}$ in Eq. (6)? Throughout the paper up until about half way through Section 4.2, this is what I thought, but then after reading through the second half of Section 4.2, it seems like this is not the case, and that the only thing you do is to use a single tensor $\mathcal{W}$ in Eq. (10). If it's the latter, then it's not clear why you e.g. include Claim 1.
W2. It is not clear what the point of Claim 1 is. It is well known that tensor trains (and in fact, any tensor networks with tree shaped structure) can exactly decompose arbitrary tensors. See Theorem 8.8 in Ye and Lim [2019, https://arxiv.org/abs/1801.02662]. This also means that the first contribution in the list of contributions at the end of Section 1 really isn’t a new contribution.
W3. This also relates to the points above. If you do use tensor train to decompose $\mathcal{T}$ in Eq. (6) (which is what e.g. Claim 1 suggests), then what is the point of doing that? $\mathcal{T}$ is already in decomposed format since it's a rank-1 CP tensor. Similarly, in Definition 1, the set of all tensors of the form in Eq. (2) does not make up the whole space $\mathbb{R}^{D^T}$ (which the definition seems to suggest); the former is the set of all rank-1 tensors which is a strict subset of $\mathbb{R}^{D^T}$.
Minor other comments
C1. In Sec. 1, 2nd paragraph, you say that “In 2011, the ‘Switzerland’ variable changed significantly earlier than the other,” but it looks to me like it changed later?
C2. Minor things throughout the paper, like letters at the start of sentences not being capitalized, periods in places where there shouldn’t be, incorrectly spelled words, and other similar issues.
C3. In Sec. 1, 4th paragraph, you say that “this is still incomplete for the variable spaces shown in Figure 1 (c).” It's not clear what you mean here. Are you saying that the NN shown in the figure is insufficient for a multivariate time series? Or is the figure meant to illustrate what a multivariate time series looks like?
C4. Text in figures is way too small in many places.
C5. Below Eq. (3), you say that $W_C$ is of size $kD \times d$. Should this be $k \times d$?
C6. What exactly does the Tril function in Eq. (4) do?
C7. In Eq. (7), you first write the core tensors $\mathcal{W}^{(1)}, \ldots, \mathcal{W}^{(T)}$ as if they were separate tensors, but then in the next line it seems like you assume that all the core tensors are the same? This is confusing.
C8. Also, in Eq. (7), are the tensor modes corresponding to the labels $h_0$ and $h_T$ singleton dimensions? If not, the decomposition is not a tensor train decomposition. In the tensor train decomposition, the cores at the end are in fact matrices.
C9. Below Eq. (7): Should it be $x_t \in \{1, \ldots, d\}$ and $y_t \in \{1, \ldots, r_t\}$ rather than what you have now? Also, it's not accurate to call the summation index $h_t$ rank---it is just a variable which goes from 1 to $r_t$, where $r_t$ is the rank.
C10. Above Claim 1: You say that "it is worth noting that $\mathcal{A}$ and $\mathcal{T}$ are equal when the error is allowed." What does that mean?
C11. In Claim 1: What do you mean by "their complexity are equal to $O(d^T)$"? Do you mean storage complexity?
C12. Above Eq. (9): What do you mean by L2 regularization of a vector? Do you mean that you normalize the vector to have unit length?
C13. In Eq. (10), $h_t$ appears on both sides of the equation. Should it be $h_{t-1}$ in the bottom line?
C14. In Eq. (15), $M W_M$ is of size $d \times 1$, but $W_O$ is of size $d \times D$, so it doesn't match. $M W_M$ needs to be transposed for it to match with the size of $W_O$.
C15. In Eq. (16): The left hand side is $Y'_s = Y'_{T+h}$. But where does the $h$ appear in the right hand side?
C16. How were the coefficients in Table 2 chosen?
ICLR | Title
Gene finding revisited: improved robustness through structured decoding from learning embeddings
Abstract
Gene finding is the task of identifying the locations of coding sequences within the vast amount of genetic code contained in the genome. With an ever increasing quantity of raw genome sequences, gene finding is an important avenue towards understanding the genetic information of (novel) organisms, as well as learning shared patterns across evolutionarily diverse species. The current state of the art consists of graphical models usually trained per organism and requiring manually curated data sets. However, these models lack the flexibility to incorporate deep representation learning techniques that have in recent years been transformative in the analysis of protein sequences, and which could potentially help gene finders exploit the growing number of sequenced genomes to expand performance across multiple organisms. Here, we propose a novel approach, combining learned embeddings of raw genetic sequences with exact decoding using a latent conditional random field. We show that the model achieves performance matching the current state of the art, while increasing training robustness, and removing the need for manually fitted length distributions. As language models for DNA improve, this paves the way for more performant cross-organism gene-finders.
1 INTRODUCTION
Genes are patches of deoxyribonucleic acid (DNA) in our genome that encode functional and structural products of the cell. The central dogma of biology states that these segments are transcribed into ribonucleic acid (RNA) and in many cases translated into the amino acid sequences of proteins. In recent years, the machine learning community has dedicated considerable attention specifically to studying proteins, and solving various protein-related tasks, with the aid of deep learning. This focus has resulted in impressive advances within the field (Detlefsen et al., 2022; Jumper et al., 2021; Rao et al., 2020; Shin et al., 2021). Less attention has been paid to the DNA sequences themselves, despite the fact that finding genes in a genome remains an important open problem. Due to technological advances, the rate by which genomes are sequenced is rising much more rapidly than we can reliably annotate genes experimentally, and without proper gene annotations, we lack information about the proteins encoded in these sequences. In particular, for taxonomies that are sparsely characterised or highly diverse, such as fungi, this hinders us from extracting essential information from newly sequenced genomes.
The wealth of available genomic data suggests that this is an area ripe for high-capacity deep learning approaches that automatically detect the most salient features in the data. This potential has in recent years been clearly demonstrated in the realm of proteins, where deep learning has proven extremely effective in both the supervised setting (Alley et al., 2019; Hsu et al., 2022; Jumper et al., 2021) and in the unsupervised setting (Rao et al., 2021; 2020; Vig et al., 2021). In particular, embeddings obtained from transformer-based protein language models have pushed the boundaries of performance in many downstream sequence-based prediction tasks. The advantages of such models are two-fold: 1) they enable pre-training in cases where unlabelled data far outweighs labelled data and 2) they have demonstrated the ability to learn across diverse proteins. We are currently witnessing an emerging interest in language models for DNA as well, but progress in this area has proven more difficult than for its protein counterpart. In a completely unsupervised setting the amount of DNA data is orders of magnitude larger than that of proteins, and the signals are
correspondingly sparse. For instance, Eukaryotic genomes consist of millions to billions of DNA base pairs but only a small percentage are genes and an even smaller percentage codes for protein (approx. 1% in the human genome). Genes also have an intricate structure which places demands on a high degree of consistency between output labels predicted at different positions in the sequence. In particular, genes contain both coding segments (called CDS or exon) and intron segments. Only the coding segments are retained in the final protein, while the introns are removed. The process in which introns are removed is called splicing, which occurs after the gene is transcribed from DNA to RNA. After introns are spliced out, the RNA is translated to amino acid sequences (the protein product). Each amino acid is encoded by a codon (a triplet of DNA nucleotides) in the RNA sequence. Due to this codon structure, the annotation of the coding sequences in the gene must be extremely accurate, as shifting the frame of the codon by just one position will result in a nonsensical protein. Gene prediction thus consists of correctly annotating the boundaries of a gene as well as the intron splice sites (donor/acceptor sites), a task challenged both by the imbalanced data and by the extreme accuracy needed.
The current state-of-the-art in gene-finding relies on Hidden Markov Models (HMMs) and exact decoding (e.g. Viterbi) to ensure the required consistency among predictions at different output positions. To make these methods work well in practice, considerable effort has been put into hand-coded length distributions inside the HMM transition matrix, and a careful curation of the training data to ensure that the length statistics are representative for the genome in question. The resulting HMMs have dominated the field for more than two decades. However, their performance still leaves a lot to be desired, they are generally difficult to train, and they have no mechanism for incorporating learned embeddings and context-dependent learned length distributions. These models can be improved by combining them with external hints and constructing pipelines (Hoff et al., 2016), but they are not compatible with the deep learning advances that have revolutionised adjacent fields. The goal of this paper is to develop a new approach that is compatible with contemporary deep learning practices, can be trained without manual feature engineering and careful data curation, while maintaining the capability for exact decoding.
Here we present an approach, which we term GeneDecoder, to gene prediction that is able to both incorporate prior knowledge of gene structure in the form of a latent graph in a Conditional Random Field as well as embeddings learned directly from the DNA sequence. This approach proves easy to train naively while still achieving high performance across a range of diverse genomes. We highlight that the resulting model is very flexible and open to improvement, either by including external hints or by improving the current DNA sequence embeddings. We benchmark against three other supervised algorithms (Augustus, Snap, GlimmerHMM) and find that the performance of our model competes with that of the state of the art (Scalzitti et al., 2020) without a strong effort put into model selection. However, as pre-trained DNA models start to emerge and improve we expect that the full potential of this approach will be realised.
2 RELATED WORK
Current gene prediction algorithms are Hidden Markov Models (HMMs) or Generalized HMMs. These include Augustus (Stanke & Waack, 2003), Snap (Korf, 2004), GlimmerHMM (Majoros et al., 2004) and Genemark.hmm (Borodovsky & Lomsadze, 2011). All these models are trained fully supervised and on a per-organism basis. Genemark also exists in a version that is similarly trained on one organism but in an iterative self-supervised manner (Ter-Hovhannisyan et al., 2008). In practice, gene prediction is often done through meta-prediction pipelines such as Braker (Hoff et al., 2016), Maker2 (Holt & Yandell, 2011) and Snowyowl (Reid et al., 2014), which typically combine preexisting HMMs with external hints (e.g. protein or RNA alignments) and/or iterated training. Focusing on the individual gene predictors, Augustus is currently the tool of choice in the supervised setting, according to a recent benchmark study (Scalzitti et al., 2020). It is an HMM with explicit length distributions for introns and CDS states. The model also includes multiple types of intron and exon states emitting either fixed-length sequences or sequences from a length distribution given by a specific choice of self-transition between states. This intron model has been shown to be key to its performance; without it Augustus was found to be incapable of modelling length distributions in introns correctly (Stanke & Waack, 2003). Models like Snap and GlimmerHMM follow similar ideas, but differ in their transition structure. In particular, GlimmerHMM includes a splice site model from Genesplicer (Pertea et al., 2001). These HMM-gene predictors are known to be
sensitive to the quality of the input data set. For instance, the Augustus training guidelines specify requirements for non-redundancy, non-overlapping genes and restrictions to single transcripts per gene. These considerations are not merely theoretical. In the preparation of the baseline results for this paper, we attempted to retrain several of these models on our own data set splits, but were unable to obtain competitive results. The Augustus guidelines also report diminishing returns for data sets with more than 1000 genes, and highlight the importance of quality over quantity in the training set in this scenario. These recommendations are sensible in a low-data regime, but we might wish to relax them as we obtain more genomic data. It would be convenient if we could train gene predictor models on lower quality data sets, including multiple, potentially conflicting transcripts arising for instance from alternative splicing. The goal of this paper is to explore modelling strategies towards this goal.
Eventually, we would also like to train cross-genome gene predictors, for instance using featurizations obtained from pre-trained language models. Following the success of pre-trained models on protein sequences, DNA language modelling is starting to emerge as a field. Currently, two such models are available: DNABert (Ji et al., 2021) and GPN (Benegas et al., 2022). Both are trained on a single organism, each across a whole genome with a sequence window of 512 nucleotides. While DNABert is a standard transformer based model for which long sequences can be computationally prohibitive, GPN (Benegas et al., 2022) is based on dilated convolutions where it could be possible to extend the sequence length. In this work we perform preliminary explorations of using GPN embeddings for gene prediction.
3 METHODS
3.1 MODEL
Latent conditional random fields Genomic datasets are highly unbalanced, containing only a very small percentage of the donor and acceptor labels. Since identification of these labels is paramount for correct annotation, the model must be designed to deal robustly with this. Furthermore, we wish to encode prior knowledge of gene structure in order to produce consistent predictions. Finally, the model must be flexible in the type of input used, so that context-dependent embeddings can be learned directly from the sequence. To fulfil these requirements, we choose a linear-chain Conditional Random Field (CRF) (Lafferty et al., 2001) model. The CRF can, like the HMM, encode gene structure in its graph, but because it models the conditional probability of the output labels, rather than the joint distribution of input and output, it is well suited for integration with an embedding model. In doing so, we can hope to learn a richer representation of the input, while still training the model end-to-end.
The inputs to the embedding model are one-hot encoded values X = {A, C, G, T, N}, where N denotes unknown nucleotides. As output labels, we have Exon (E), Intron (I), Donor (D), Acceptor (A) and Non-coding (NC). The CRF learns transition potentials (i.e. unnormalized transition probabilities) between these states, but we can manually set particular potentials to −∞ to allow only biologically meaningful transitions, thus following a similar strategy as that employed in classic HMM gene predictors. Two important restrictions are 1) directionality, ensuring that forward and reverse coded genes are not intermixed, and 2) codon-awareness, which ensures that the coding
regions of genes are always a multiple of three. To encode this in the transition potentials, we expand the label set (Table 1). Note that in addition to the exon states, it is necessary to expand also the intron, acceptor and donor states such that codon position can be tracked through the intron regions.
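As an illustration of how such constraints can be encoded, the toy PyTorch snippet below blocks invalid transitions by adding -inf to the learned transition potentials. The state set shown is a simplified stand-in for Table 1 (it omits the reverse strand and does not track codon phase through introns):

```python
import torch

# Toy state set: non-coding, three codon-phase exon states, intron, donor, acceptor.
states = ['NC', 'E1', 'E2', 'E3', 'I', 'D', 'A']
idx = {s: i for i, s in enumerate(states)}

allowed = [
    ('NC', 'NC'), ('NC', 'E1'),                 # genes start from non-coding sequence
    ('E1', 'E2'), ('E2', 'E3'), ('E3', 'E1'),   # codon structure inside coding regions
    ('E3', 'NC'),                               # a gene ends after a complete codon
    ('E1', 'D'), ('E2', 'D'), ('E3', 'D'),      # donor site opens an intron
    ('D', 'I'), ('I', 'I'), ('I', 'A'),         # intron body, then acceptor site
    ('A', 'E1'), ('A', 'E2'), ('A', 'E3'),      # splicing resumes coding (phase-aware in the real model)
]

S = len(states)
mask = torch.full((S, S), float('-inf'))        # everything blocked by default
for a, b in allowed:
    mask[idx[a], idx[b]] = 0.0

trans_weights = torch.nn.Parameter(torch.zeros(S, S))   # learned transition potentials
transition_potentials = trans_weights + mask             # used by the (L-)CRF
```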
The likelihood for a CRF takes the form:
$$P(y|x,\theta) = \frac{1}{Z} \exp \sum_{n=1}^{N} \left( T_{y_{n-1},y_n} + \theta(x_n) \right) \quad (1)$$
T is the learned transition weights of the observed labels and θ the learned input embedding. The partition function Z is the summed score over all possible label sequences:
$$Z = \sum_{y' \in Y} \exp \sum_{n=1}^{N} \left( T_{y'_{n-1},y'_n} + \theta(x_n) \right) \quad (2)$$
In the standard CRF, the states appearing in the transition table will be observed during training. In our case, we do not wish to specify the direction and codon-index apriori. HMM-based genefinders solve this problem by formulating the Markov process in a latent discrete space, and coupling it to the output states through a learned emission probability table. A similar solution is not easily tractable for CRFs. However, as shown by Morency et al, it becomes feasible if we introduce a many-to-one deterministic mapping between latent and observed states (Morency et al., 2007). This is a natural assumption to make in our case, as illustrated by the individual rows in Table 1. The corresponding Latent CRF (L-CRF), includes a summation over the set of latent states emitting a given observed state.
$$P(y|x,\theta) = \frac{1}{Z} \exp \sum_{n=1}^{N} \left( \sum_{h' \in H_{y_{n-1}}} \sum_{h \in H_{y_n}} \left( T_{h',h} + E_{h,y_n} \right) + \theta(x_n) \right) \quad (3)$$
where $H_{y_n}$ denotes the set of hidden states emitting the value taken by $y_n$. The learned transition matrix $T$ is now over the hidden states and the emission matrix $E$ denotes the emission potentials from hidden to observed states. Similarly, the partition function now also sums over the set of hidden states that emit $y$:
$$Z = \sum_{y' \in Y} \exp \sum_{n=1}^{N} \left( \sum_{h' \in H_{y'_{n-1}}} \sum_{h \in H_{y'_n}} \left( T_{h',h} + E_{h,y'_n} \right) + \theta(x_n) \right) \quad (4)$$
In our case, the $H_{y_n}$ mapping is given by the rows in Table 1. The non-blocked entries in the transition matrix $T$ are learned (see examples of adjacency matrices in Figure A.2). We assume that gene structure is similar on the forward and reverse strands and therefore share the learned transition potentials between the two directions. Figure 1 shows a full overview of our model. In the remainder of the paper, we will refer to this model decoding architecture as CRF and L-CRF interchangeably.
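A minimal PyTorch sketch of the corresponding training objective for a plain linear-chain CRF is shown below; in the latent version, the per-position potentials would additionally be summed (in log space) over the hidden states mapped to each observed label. Function and variable names are ours:

```python
import torch

def crf_nll(emissions, transitions, labels):
    """Negative log-likelihood of one label sequence under a linear-chain CRF.

    emissions:   (T, S) per-position potentials theta(x_n) for each state
    transitions: (S, S) transition potentials (blocked entries are -inf)
    labels:      (T,)  observed state indices (LongTensor)
    """
    T, S = emissions.shape

    # score of the observed path
    score = emissions[0, labels[0]]
    for n in range(1, T):
        score = score + transitions[labels[n - 1], labels[n]] + emissions[n, labels[n]]

    # log partition function via the forward recursion (log-sum-exp)
    alpha = emissions[0]                                   # (S,)
    for n in range(1, T):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) + emissions[n]
    log_z = torch.logsumexp(alpha, dim=0)

    return log_z - score
```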
Feature Model Embeddings Although the embedding function $\theta$ in a CRF was traditionally based on manually engineered features, modern autograd frameworks make it possible to learn such functions using e.g. neural networks, and to train both the featurization and decoder models end-to-end. This allows us to use a complex, context-dependent mapping from the one-hot encoded input sequence as the conditioning input to our CRF. As a proof of concept, we here use a simple combination of a (dilated) CNN and an LSTM. The former ensures that we see beyond single nucleotides, avoiding the need to manually encode at the codon level, and the latter models the remaining sequential signal. We disregarded more expressive architectures such as transformers because, in the purely supervised setting, we are in a limited data regime. However, we anticipate that pre-trained language models for DNA will eventually replace the need for learning new feature embeddings per organism (actually, one could meaningfully define it as a success criterion for DNA language models). We briefly explore this potential in the results section below.
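A sketch of such a feature model is given below; channel sizes, dilation rates and depth are assumptions, since the text does not fix them:

```python
import torch
import torch.nn as nn

class FeatureModel(nn.Module):
    """Sketch of the embedding function theta: dilated CNN followed by a BiLSTM.

    The output provides one potential per position and state, to be combined
    with the transition potentials by the (L-)CRF decoder.
    """

    def __init__(self, num_states: int, hidden: int = 64):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(5, hidden, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_states)

    def forward(self, x_onehot):                     # (batch, T, 5) for {A, C, G, T, N}
        h = self.convs(x_onehot.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.proj(h)                          # (batch, T, num_states) potentials
```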
Training The model was trained using AdamW on the negative log-likelihood, with a batch size of 64. We used early stopping based on the validation set likelihood to avoid overfitting.
Inference Viterbi decoding is used to find the best label sequence under the model. Alternatively sampling can be performed according to the label probabilities, either across the entire sequence or for segments. Posterior marginal probabilities per position are calculated with the forward-backward algorithm and can be used to assess model certainty for specific positions.
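For completeness, a minimal implementation of Viterbi decoding over the (L-)CRF potentials could look as follows (a sketch, not the authors' code):

```python
import torch

def viterbi_decode(emissions, transitions):
    """Most likely state sequence under the (L-)CRF potentials.

    emissions:   (T, S) per-position potentials
    transitions: (S, S) transition potentials (invalid moves set to -inf)
    """
    T, S = emissions.shape
    score = emissions[0]                               # (S,)
    backpointers = []
    for n in range(1, T):
        cand = score.unsqueeze(1) + transitions        # (S, S): previous -> current state
        score, best_prev = cand.max(dim=0)
        score = score + emissions[n]
        backpointers.append(best_prev)

    best_last = int(score.argmax())
    path = [best_last]
    for best_prev in reversed(backpointers):
        path.append(int(best_prev[path[-1]]))
    return list(reversed(path))
```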
Choice of performance metrics Accurately predicting the direction, as well as the donor and acceptor sites (i.e. the start and end of introns), is highly important for predicting the correct protein. For this reason, the Matthews Correlation Coefficient as well as the macro-weighted F1 score are reported for the label set containing exon/CDS, intron, donor and acceptor labels.
4 EXPERIMENTS
4.1 DATA
Sequence and general feature format (gff3) annotation files are obtained for each genome as follows: Human - Gencode, Aspergillus Nidulans and Saccharomyces Cerevisiae - EnsembleFungi, Arabidopsis Thaliana - EnsemblPlants, C. Elegans Wormbase (Frankish et al., 2021; Cunningham et al., 2022; Howe et al., 2017).
No augmentation or inspection of the data was performed, with the exception of ensuring that the total length of CDS segments in each gene was divisible by 3, ensuring that a valid protein product was possible. Gene annotations were extracted with random flanks of a length between 1-2000 nucleotides. These relatively short flank lengths were chosen to decrease training time. The maximum allowed gene length was capped according to the gene length distribution in the species to ensure a representative data set, while excluding extreme outliers in gene length that would heavily increase compute time. The data set was homology partitioned at a threshold of 80% protein sequence similarity into train, validation and test sets. The test sets were used for benchmarking. The length of the flank can influence the performance depending on the inference task. For some ablation experiments simpler feature models have been chosen in order to reduce compute; these results may therefore reflect only a relative performance rather than the full potential.
4.2 BENCHMARK ALGORITHMS
We benchmark our method against four other widely used gene prediction programs (Augustus, GeneMark.hmm, Snap and GlimmerHMM). Given the sensitivity of these methods to the training sets, a pre-trained parameter set for each species was used where available. This means we cannot rule out that sequences in our test set are part of the training set of the baseline models. Our reported results are thus a conservative estimate of our relative performance. There exist several versions of the GeneMark algorithm depending on the desired use case (Borodovsky & Lomsadze, 2011; Besemer et al., 2001; Lukashin & Borodovsky, 1998). We chose GeneMark.hmm as this is a supervised, pre-trained, per-species algorithm, which matches the scope here.
4.3 BENCHMARKING ACROSS EVOLUTIONARILY DIVERSE SPECIES
To demonstrate the performance of our approach, we benchmark against four widely used programs on five phylogenetically diverse, well-studied species typically used in benchmarking (Table 2). Our model, GeneDecoder, was trained separately on genes from each organism, which is the standard for such gene-finders (Stanke et al., 2006; Borodovsky & Lomsadze, 2011). For the other models, pre-trained versions of the programs for the given species were used in order not to reduce performance by re-training on a data set that was insufficiently curated for their model.
We find that only Augustus achieves similar performance to our model, with Snap performing markedly lower across all species it was tested on. GlimmerHMM and GeneMark.hmm show better performance but still have significantly lower predictive power compared to Augustus. This further cements previous findings that Augustus achieves state-of-the-art performance (Scalzitti et al., 2020).
Our model, GeneDecoder, competes with or outperforms Augustus, which is a state-of-the-art gene prediction software, across the benchmark, with the exception of the lower F1 score for S. Cerevisiae. The low F1 score for S. Cerevisiae across all algorithms is due to the low number of intron-containing genes: fewer than 1% of genes contain introns in the data set for this organism. Since the F1 score is highly influenced by performance on underrepresented label categories, this reduced performance comes from poor prediction of introns and donor/acceptor sites. It was not possible to obtain Augustus' training data set for this species, but according to the official training instructions of the software, data sets must be curated to contain a high proportion of intron-containing genes. This would explain the discrepancy in the F1 score between Augustus and GeneDecoder; however, it also highlights that our model is not highly influenced by the quality of the data set. We expect that the performance of GeneDecoder would improve with a curated dataset or oversampling of intron-containing genes.
We further compare Augustus with our model by training and testing on the original Augustus data sets for Human (Stanke, 2004). The training data set is in a low-data regime, consisting of 1284 training genes, which is not ideal for deep learning settings. Furthermore, the Augustus data set is strictly homology reduced, allowing no more than 80% sequence similarity at the protein level. Nevertheless, our model matches the performance of Augustus on its own data set (see Figure A.5). The difference in predictive performance on non-coding labels is likely due to a manually set self-transition probability of 0.99 for non-coding labels in the Augustus model. This improves Augustus' performance on non-coding labels but might come at the expense of predicting other labels. While improving Augustus' performance requires the use of external hints chosen by the user, our model may be improved by learning better embeddings.
4.4 LEARNED EMBEDDINGS IMPROVE PREDICTIONS
To explore the performance contribution of different aspects of our model, we performed ablations where parts of the model were excluded or simplified (Table 3). Note that these models were trained on a smaller subset of the A. thaliana data set.
First, to test the capabilities of the CRF alone, a single linear layer is implemented to map each position from the one-hot encoded DNA sequence to the hidden states. Two graphs of different complexity were used for this experiment (see Table 1, Figure A.2 and A.2 for graphs). While both Linear-CRFs exhibit very poor predictive power, the more complex codon graph, which encodes knowledge of codon structure, clearly performs better than the simple graph, illustrating the need
for a higher graph complexity to improve modelling performance. This is consistent with similar observations made for HMM-based gene predictors Stanke & Waack (2003).
By introducing a feature model such as an LSTM, performance is greatly enhanced. While an LSTM trained alone also displays decent performance it is evident from Figure A.5 that without the CRF, splice sites are often not labelled correctly which is vital for producing consistent predictions.
Interestingly training a linear layer on fixed embeddings from the GPN language model (Benegas et al., 2022) achieves high performance (Figure A.5). This indicates that the masked pre-training of the GPN model captures DNA sequence patterns relevant for gene prediction, despite only having trained on a single organism and primarily on non-coding DNA. The emergent field of DNA language modelling could have a significant effect on downstream DNA modelling tasks, as they have had in the protein field. These ablation studies reveal that with a good feature model, there is less need for as complex a graph structure. The feature model can instead learn complex dependencies directly from the sequence.
The best performing model, which we refer to as GeneDecoder, uses a feature model with a combination of dilated convolutions and a bidirectional LSTM. We hypothesise that the increased performance over an LSTM alone is due to better long range information provided by the dilated convolutions. The details of the architecture are available in Appendix A.4.
4.5 LEARNED EMBEDDINGS ACCURATELY MODEL LENGTH DISTRIBUTIONS
Accurately capturing length distributions of introns and exons is essential for a well performing gene predictor. Along with explicitly modelling exon length distributions, Stanke & Waack (2003) had to introduce an intron sub-model with states modelling both explicit length distributions as well as fixed length emissions, to meaningfully capture intron lengths.
We find that our model readily learns these signals directly from data (Figure 4.5). The latent CRF without learned embeddings is not sufficient to reproduce the true lengths, resulting in a flattened distribution similar to Stanke & Waack (2003). However, with the learned embeddings, GeneDecoder captures the length distributions faithfully, especially for exons. It is very promising that the pre-trained GPN embeddings are also able to capture the length distributions accurately; the GPN embeddings were not fine-tuned in this case. This further cements that incorporating deep learning, and particularly unsupervised pre-trained embeddings, is a promising direction for GeneDecoder and
genefinders in general. Especially when expanding the model to perform cross-species prediction. However, DNA language modelling is still a nascent field and there are currently many unanswered questions on how to transfer the masked modelling concept to DNA sequences as compared to proteins. These concern both the sparsity of information, the longer sequences and the much longer range signal that likely needs to be captured for high performance in many downstream tasks. The GPN model is trained on DNA segments of 512 nucleotides which might exclude capturing many longer range signals and affect performance.
4.6 LATENT STRUCTURE CAN INFER UNOBSERVED LABELS
To test the capacity of the latent graph of our model, we explore whether it can learn unobserved states directly from the sequence. One such example is inferring the directionality of genes. Genes can be encoded on both strands of the double-helical DNA but usually only one strand is sequenced. Genes that lie in the "reading" direction of the sequenced strand are designated as forward and genes that would be read on the opposite strand as reverse. We train our model on a directionless set of observed labels (see simple-labels-DA in Appendix A.1). Since the training data does not provide any information about directionality, the model must learn this directly from the sequence to be able to distinguish between directions.
Figure 4 shows the comparison between our model trained on the directionless label set and on a set including directional labels (see simple-direction-DA in Appendix A.1). Although there is a reduction in performance between the two training regimes, the model is still able to learn and infer direction. Surprisingly, the reduced performance does not originate from confusion between forward and reverse labels but rather from the model's propensity to over-predict non-coding labels (see A.5).
5 CONCLUSION
Here we present a novel approach, GeneDecoder, to gene prediction combining the advantages of graphical models with the flexibility and potential of deep learned embeddings.
We find that the latent CRF not only preserves the graph encoding of prior knowledge, as in the state-of-the-art HMM gene predictors, but also offers an advantage by allowing us to learn embeddings directly from the sequence. This advantage is visible not only in the performance, but also in the low effort required to construct data sets and train the model. The greatest advantage, however, might well be the flexibility of the learned embeddings and the potential to improve them as the DNA modelling field advances.
Even now this seems evident from the preliminary studies performed here using embeddings from a DNA language model trained only on a single species, with very short input sequences. We expect that as the field progresses, language models more specifically tailored to DNA will emerge, modelling longer-range interactions and training across multiple species. Such results have already been indicated by the success of the Enformer model (Avsec et al., 2021), which demonstrated the importance of long-range information as well as the ability to learn downstream tasks it was not trained on.
One potential limitation of our model is that the amount of non-coding DNA (in our case the flanking regions) in the training data set can affect the performance of the model on data with significantly more non-coding DNA. This issue can be resolved in the training process by scaling the training to larger data sets containing longer non-coding regions. As this presents a technical, time and resource challenge rather than a scientific one, we leave it for future work. Our model provides the capacity to include external hints in the inference process, either by sampling high-probability paths around fixed labels or by modifying the hidden state probabilities. The latter method can even be used as a strategy during training, highlighting the flexibility of our model. Lastly, per-position model certainty is directly provided by the model in the form of posterior marginal probabilities. We leave it for future exploration to rigorously benchmark this.
We also expect that language modelling will alleviate issues like this by providing far better and more informative embeddings of the raw sequence, but we leave this for future work. In this regard we view the model presented here as an initial iteration that can only improve as DNA modelling does. We anticipate that future iterations of GeneDecoder will incorporate some form of self-supervised pre-training across various species, which could vastly improve prediction not only in species with rich gene annotation data but also in novel or understudied organisms.
6 CODE AVAILABILITY
Code will be available on github upon publication.
A APPENDIX
A.1 LABEL SETS
Generally five types of labels are used:
E : Exon/CDS (i.e. the coding part of a sequence)
I : Intron
D : Donor site (one of the two splice sites)
A : Acceptor site (one of the two splice sites)
NC : Non-coding or intergenic regions of the genome.
Subscripts F and R (e.g. EF and ER) are used to denote the forward and reverse direction of the gene. Numbered subscripts are used to denote codon structure (i.e. E1,F is the first exon nucleotide in a codon in the forward direction).
Simple-DA:
The simplest set of states used. Y = { E, D, I, A, NC }
Simple-direction-DA:
Expands the set of states to include information about the direction/strand of the gene. Y = { EF, DF, IF, AF, ER, DR, IR, AR, NC }
Codon-direction-DA:
Further expands the set of states to include not only direction but also codon structure. Y = { E1,F, E2,F, E3,F, D1,F, D2,F, D3,F, I1,F, I2,F, I3,F, A1,F, A2,F, A3,F, E1,R, E2,R, E3,R, D1,R, D2,R, D3,R, I1,R, I2,R, I3,R, A1,R, A2,R, A3,R, NC }
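For concreteness, the three label sets could be encoded as follows; the particular string format used for direction and codon subscripts is an illustrative assumption.

SIMPLE_DA = ["E", "D", "I", "A", "NC"]

SIMPLE_DIRECTION_DA = [f"{s}_{d}" for d in ("F", "R") for s in ("E", "D", "I", "A")] + ["NC"]

CODON_DIRECTION_DA = [
    f"{s}{i}_{d}" for d in ("F", "R") for s in ("E", "D", "I", "A") for i in (1, 2, 3)
] + ["NC"]

def observed_label(state):
    # Many-to-one map from a direction/codon-aware state to its observed label,
    # e.g. "E1_F" -> "E", "NC" -> "NC".
    return "NC" if state == "NC" else state[0]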
A.2 GRAPHS
Graphs are represented as adjacency matrices and are named according to the label sets described in Appendix A.1. A grey cell signifies a forbidden transition while the weights of the green cells are learned by the model.
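As a sketch of how such an adjacency matrix could be realised in code, the snippet below fixes forbidden (grey) transitions to -inf and leaves allowed (green) transitions learnable; the states and the particular allowed transitions shown are illustrative and do not reproduce the full graphs used in the paper.

import torch

states = ["NC", "E1_F", "E2_F", "E3_F", "D1_F", "I1_F", "A1_F"]  # illustrative subset
idx = {s: i for i, s in enumerate(states)}

allowed = torch.zeros(len(states), len(states), dtype=torch.bool)
allowed[idx["NC"], idx["NC"]] = True      # stay intergenic
allowed[idx["NC"], idx["E1_F"]] = True    # enter a forward gene
allowed[idx["E1_F"], idx["E2_F"]] = True  # codon progression within an exon
allowed[idx["E2_F"], idx["E3_F"]] = True
allowed[idx["E3_F"], idx["E1_F"]] = True  # start of the next codon

weights = torch.nn.Parameter(torch.randn(len(states), len(states)))
# Forbidden transitions are masked to -inf; allowed ones keep their learnable weight.
transition_potentials = weights.masked_fill(~allowed, float("-inf"))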
A.3 EMISSIONS
A grey cell signifies a forbidden emission while the weights of the green cells are set to 1 (i.e. an allowed emission).
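The corresponding many-to-one emission structure could be written, for example, as a fixed 0/1 matrix derived from Table 1; the hidden-state subset below is illustrative.

hidden = ["E1_F", "E2_F", "E3_F", "E1_R", "NC"]  # illustrative subset of hidden states
observed = ["E", "NC"]
# emission[i][j] = 1 if hidden state i is allowed to emit observed label j, else 0
emission = [[1 if ("NC" if h == "NC" else h[0]) == o else 0 for o in observed] for h in hidden]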
A.4 GENEDECODER FEATURE MODEL ARCHITECTURE
Feature Model
Where |X| is the number of channels in the input, and |H| is the size of the set of hidden states. In most cases X = {A,C,G, T,N}.
DilatedCNN(
Layer 1 : Conv1d(in channels = |X|, out channels = 50, kernel size = (9,), stride = 1, dilation = 1), ReLU
Layer 2 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 2), ReLU
Layer 3 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 4), ReLU
Layer 4 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 8), ReLU
Layer 5 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 16), ReLU
Layer 6 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 32), ReLU
)
LSTM(
LSTM layer : lstm(input size = 50, hidden size = 100, bidirectional = True)
Output layer : linear(input size = 200, output size = |H|, bias = True, dropout(0.2)), ReLU
)
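For readers who prefer runnable code, the listing above could be realised as the following PyTorch sketch; the 'same'-length padding, the batch-first layout and the default of |H| = 25 hidden states (the Codon-direction-DA set) are assumptions not fixed by the listing.

import torch
import torch.nn as nn

class FeatureModel(nn.Module):
    # Dilated CNN followed by a bidirectional LSTM, mapping one-hot DNA to per-position potentials.
    def __init__(self, in_channels=5, num_hidden_states=25):
        super().__init__()
        layers, channels = [], in_channels
        for dilation in (1, 2, 4, 8, 16, 32):
            layers += [nn.Conv1d(channels, 50, kernel_size=9, dilation=dilation,
                                 padding=4 * dilation),  # keeps the sequence length unchanged
                       nn.ReLU()]
            channels = 50
        self.cnn = nn.Sequential(*layers)
        self.lstm = nn.LSTM(input_size=50, hidden_size=100, bidirectional=True, batch_first=True)
        self.dropout = nn.Dropout(0.2)
        self.out = nn.Linear(200, num_hidden_states)

    def forward(self, x):
        # x: (batch, seq_len, 5) one-hot encoded DNA
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # (batch, seq_len, 50)
        h, _ = self.lstm(h)                              # (batch, seq_len, 200)
        return torch.relu(self.out(self.dropout(h)))     # per-position potentials over |H| states

potentials = FeatureModel()(torch.zeros(2, 1000, 5))     # e.g. two sequences of 1000 nt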
A.5 SUPPLEMENTARY FIGURES | 1. What is the focus and contribution of the paper on gene annotation?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of methodology and experiment setup?
3. Do you have any concerns regarding the model used by GeneDecoder?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the comparison with existing methods and the use of benchmarks? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors tackle the problem of annotating genes in newly-sequenced genomes. They develop a model called GeneDecoder which uses a combination of a CRF, LSTM and dilated convolutional layers. The authors show that the model relearns several properties of genes, including directionality and length distribution. The model achieves similar predictive performance to existing method Augustus.
Strengths And Weaknesses
Overall, the problem of annotating genes is important, the method is reasonable and the manuscript is understandable. The methodological novelty is low, as the method is a combination of standard CRF, LSTM and dilated CNN models. The experimental setup is flawed and the results are poor.
I couldn't tell what the model used by GeneDecoder is. It involves a neural network, whose architecture is described in A.4. However it also uses a Latent CRF. I think the NN outputs a representation which forms the input to the CRF? This isn't actually stated.
Benchmarking gene prediction tools is challenging because many of the annotations within gene databases (e.g. GENCODE) are derived from computational predictors. Thus it can be hard to tell whether a predictor is good at discovering real biology or simply recapitulating the errors made by previous predictors. There exists a benchmark for this task, G3PO (Scalzitti et al 2020), cited by the authors. Unfortunately, the authors did not use this benchmark and instead used an ad-hoc strategy with many issues (see below).
The authors compared to pretrained versions of existing models, so differences in performance could result from differing training sets. When the authors compared against Augustus using a training set similar to Augustus's, results were similar or worse.
The authors trained and tested on genes of the same species. This is not a proper simulation of the target application, in which a new species is sequenced and its genome must be annotated from scratch.
In splitting genes into train and test sets, the authors ensure that all isoforms of the same gene are placed in the same set. However, since many genes overlap, the same sequences likely appear in both train and test sets. This pitfall is described in detail in the following paper. A better strategy would be to split train and test by chromosome (or by species; see previous note). https://pubmed.ncbi.nlm.nih.gov/34837041/
Clarity, Quality, Novelty And Reproducibility
See above. |
ICLR | Title
Gene finding revisited: improved robustness through structured decoding from learning embeddings
Abstract
Gene finding is the task of identifying the locations of coding sequences within the vast amount of genetic code contained in the genome. With an ever increasing quantity of raw genome sequences, gene finding is an important avenue towards understanding the genetic information of (novel) organisms, as well as learning shared patterns across evolutionarily diverse species. The current state of the art are graphical models usually trained per organism and requiring manually curated data sets. However, these models lack the flexibility to incorporate deep learning representation learning techniques that have in recent years been transformative in the analysis of protein sequences, and which could potentially help gene finders exploit the growing number of the sequenced genomes to expand performance across multiple organisms. Here, we propose a novel approach, combining learned embeddings of raw genetic sequences with exact decoding using a latent conditional random field. We show that the model achieves performance matching the current state of the art, while increasing training robustness, and removing the need for manually fitted length distributions. As language models for DNA improve, this paves the way for more performant cross-organism gene-finders.
1 INTRODUCTION
Genes are patches of deoxyribonucleic acid (DNA) in our genome that encode functional and structural products of the cell. The central dogma of biology states that these segments are transcribed into ribonucleic acid (RNA) and in many cases translated into the amino acid sequences of proteins. In recent years, the machine learning community has dedicated considerable attention specifically to studying proteins, and solving various protein-related tasks, with the aid of deep learning. This focus has resulted in impressive advances within the field (Detlefsen et al., 2022; Jumper et al., 2021; Rao et al., 2020; Shin et al., 2021). Less attention has been paid to the DNA sequences themselves, despite the fact that finding genes in a genome remains an important open problem. Due to technological advances, the rate by which genomes are sequenced is rising much more rapidly than we can reliably annotate genes experimentally, and without proper gene annotations, we lack information about the proteins encoded in these sequences. In particular, for taxonomies that are sparsely characterised or highly diverse, such as fungi, this hinders us from extracting essential information from newly sequenced genomes.
The wealth of available genomic data suggests that this is an area ripe for a high-capacity deep learning approaches that automatically detect the most salient features in the data. This potential has in recent years been clearly demonstrated in the realm of proteins where deep learning has proven extremely effective in both the supervised setting (Alley et al., 2019; Hsu et al., 2022; Jumper et al., 2021) and in the unsupervised setting (Rao et al., 2021; 2020; Vig et al., 2021). In particular, embeddings obtained in transformer based protein language models have pushed the boundaries for performance in many downstream sequence-based prediction tasks. The advantages of such models are two-fold: 1) they enable pre-training in cases where unlabelled data far outweighs labelled data and 2) they have demonstrated the ability to learn across diverse proteins. We are currently witnessing an emerging interest in language models for DNA as well, but progress in this area has proven more difficult than for its protein counterpart. In a completely unsupervised setting the amount of DNA data is orders of magnitude larger than that of proteins, and the signals are
correspondingly sparse. For instance, Eukaryotic genomes consist of millions to billions of DNA base pairs but only a small percentage are genes and an even smaller percentage codes for protein (approx. 1% in the human genome). Genes also have an intricate structure which places demands on a high degrees of consistency between output labels predicted at different positions in the sequence. In particular, genes contain both coding segments (called CDS or exon) and intron segments. Only the coding segments are retained in the final protein, while the introns are removed. The process in which introns are removed is called splicing, which occurs after the gene is transcribed from DNA to RNA. After introns are spliced out, the RNA is translated to amino acid sequences (the protein product). Each amino acid is encoded by a codon (a triplet of DNA nucleotides) in the RNA sequence. Due to this codon structure, the annotation of the coding sequences in the gene must be extremely accurate, as shifting the frame of the codon with just one will result in a nonsensical protein. Gene prediction thus consists of correctly annotating the boundaries of a gene as well as the intron splice sites (donor/acceptor sites), a task challenged both by the imbalanced data but also by the extreme accuracy needed.
The current state-of-the-art in gene-finding relies on Hidden Markov Models (HMMs) and exact decoding (e.g. Viterbi) to ensure the required consistency among predictions at different output positions. To make these methods work well in practice, considerable effort has been put into hand-coded length distributions inside the HMM transition matrix, and a careful curation of the training data to ensure that the length statistics are representative for the genome in question. The resulting HMMs have dominated the field for more than two decades. However, their performance still leaves a lot to be desired, they are generally difficult to train, and have no mechanism for incorporating learned embeddings and context dependent learned length distributions. These models can be improved by incorporating them with external hints and constructing pipelines (Hoff et al., 2016) but they are not compatible with deep learning advents that have revolutionised adjacent fields. The goal with this paper is to develop a new approach that is compatible with contemporary deep learning practices, can be trained without manual feature engineering and careful data curation, while maintaining the capability for exact decoding.
Here we present an approach, which we term GeneDecoder, to gene prediction that is able to both incorporate prior knowledge of gene structure in the form of a latent graphs in a Conditional Random Fields as well as embeddings learned directly from the DNA sequence. This approach proves easy to train naively while still achieving high performance across a range of diverse genomes. We highlight that the resulting model is very flexible and open to improvement either by including external hints or by improvement of the current DNA sequence embeddings. We benchmark against three other supervised algorithms (Augustus, Snap, GlimmerHMM) and find that the performance of our model competes with that of the state-of-the-art (Scalzitti et al., 2020) without a strong effort put into model-selection. However, as pre-trained DNA models start to emerge and improve we expect that the full potential of this approach will be realised.
2 RELATED WORK
Current gene prediction algorithms are Hidden Markov Models (HMM) or Generalized HMMs. These include Augustus (Stanke & Waack, 2003) , Snap (Korf, 2004)., GlimmerHMM (Majoros et al., 2004) and Genemark.hmm (Borodovsky & Lomsadze, 2011). All these models are trained fully supervised and on a per-organism basis. Genemark also exists in a version that is similarly trained on one organism but in an iterative self-supervised manner (Ter-Hovhannisyan et al., 2008). In practice, gene prediction is often done through meta-prediction pipelines such as Braker (Hoff et al., 2016), Maker2 (Holt & Yandell, 2011) and Snowyowl (Reid et al., 2014), which typically combine preexisting HMMs with external hints (e.g. protein or RNA alignments) and/or iterated training. Focusing on the individual gene predictors, Augustus is currently the tool of choice in the supervised setting, according to a recent benchmark study (Scalzitti et al., 2020). It is an HMM with explicit length distributions for introns and CDS states. The model also includes multiple types of intron and exon states emitting either fixed-length sequences or sequences from a length distribution given by a specific choice of self-transition between states. This intron model has been shown to be key to its performance; without it Augustus was found to be incapable of modelling length distributions in introns correctly (Stanke & Waack, 2003). Models like Snap and GlimmerHMM follow similar ideas, but differ in their transition structure. In particular, GlimmerHMM includes a splice site model from Genesplicer (Pertea et al., 2001). These HMM-gene predictors are known to be
sensitive to the quality of the input data set. For instance, the Augustus training guidelines specifies requirements for non-redundancy, non-overlapping genes and restrictions to single transcripts per genes. These considerations are not merely theoretical. In the preparation of the baseline results for this paper, we have attempted to retrain several of these models on our own data set splits, but were unable to obtain competitive results. The Augustus guidelines also report diminishing returns for data sets with more than 1000 genes, and highlights the importance of quality over quantity in the training set in this scenario. These recommendations are sensible in a low-data regime, but we might wish to relax them as we obtain more genomic data. It would be convenient if we could train gene predictor models on lower quality data sets, including multiple, potentially conflicting transcripts arising for instance from alternative splicing. The goal of this paper is to explore modelling strategies towards this goal.
Eventually, we would also like to train cross-genome gene predictors, for instance using featurizations obtained from pre-trained language models. Following the success of pre-trained models on protein sequences, DNA language modelling is starting to emerge as a field. Currently, two such models are available: DNABert (Ji et al., 2021) and GPN (Benegas et al., 2022). Both are trained on a single organism, each across a whole genome with a sequence window of 512 nucleotides. While DNABert is a standard transformer based model for which long sequences can be computationally prohibitive, GPN (Benegas et al., 2022) is based on dilated convolutions where it could be possible to extend the sequence length. In this work we perform preliminary explorations of using GPN embeddings for gene prediction.
3 METHODS
3.1 MODEL
Latent conditional random fields Genomic datasets are highly unbalanced, containing only a very small percentage of the donor and acceptor labels. Since identification of these labels is paramount for correct annotation, the model must be designed to deal robustly with this. Furthermore, we wish to encode prior knowledge of gene structure in order to produce consistent predictions. Finally, the model must be flexible in the type of input used, so that context dependent embeddings can be learned directly from the sequence. To fulfil these requirements, we choose a Linear chain Conditional Random Field (CRF) (Lafferty et al., 2001) model. The CRF can, like the HMM, encode gene structure in its graph, but because it models the conditional probability of the output labels, rather than the joint distribution of input and output, it is well suited for integration with a embedding model. In doing so, we can hope to learn a richer representation of the input, while still training the model end-to-end.
The input to the embedding model are one-hot encoded values X = {A,C,G, T,N}, where N is unknown nucleotides. As output labels, we have Exon (E), Intron (I), Donor (D), Acceptor (A) and Non-coding (NC). The CRF learns transition potentials (i.e. unnormalized transition probabilities) between these states, but we can manually set particular of these potentials to − inf to allow only biologically meaningful transitions, thus following a similar strategy as that employed in classic HMM gene predictors. Two important restrictions are 1) directionality, ensuring that forward and reverse coded genes are not intermixed, and 2) codon-awareness, which ensures that the coding
regions of genes are always a multiple of three. To encode this in the transition potentials, we expand the label set (Table 1). Note that in addition to the exon states, it is necessary to expand also the intron, acceptor and donor states such that codon position can be tracked through the intron regions.
The likelihood for a CRF takes the form:
P(y \mid x, \theta) = \frac{1}{Z} \exp \sum_{n=1}^{N} \left( T_{y_{n-1}, y_n} + \theta(x_n) \right) \qquad (1)
T is the learned transition weights of the observed labels and θ the learned input embedding. The partition function Z is the summed score over all possible label sequences:
Z = \sum_{y' \in \mathcal{Y}} \exp \sum_{n=1}^{N} \left( T_{y'_{n-1}, y'_n} + \theta(x_n) \right) \qquad (2)
In the standard CRF, the states appearing in the transition table will be observed during training. In our case, we do not wish to specify the direction and codon-index apriori. HMM-based genefinders solve this problem by formulating the Markov process in a latent discrete space, and coupling it to the output states through a learned emission probability table. A similar solution is not easily tractable for CRFs. However, as shown by Morency et al, it becomes feasible if we introduce a many-to-one deterministic mapping between latent and observed states (Morency et al., 2007). This is a natural assumption to make in our case, as illustrated by the individual rows in Table 1. The corresponding Latent CRF (L-CRF), includes a summation over the set of latent states emitting a given observed state.
P(y \mid x, \theta) = \frac{1}{Z} \exp \sum_{n=1}^{N} \Big( \sum_{h' \in H_{y_{n-1}}} \sum_{h \in H_{y_n}} \left( T_{h', h} + E_{h, y_n} \right) + \theta(x_n) \Big) \qquad (3)
where H_{y_n} denotes the set of hidden states emitting the value taken by y_n. The learned transition matrix T is now over the hidden states and the emission matrix E denotes the emission potentials from hidden to observed states. Similarly, the partition function now also sums over the set of hidden states that emit y:
Z = \sum_{y' \in \mathcal{Y}} \exp \sum_{n=1}^{N} \Big( \sum_{h' \in H_{y'_{n-1}}} \sum_{h \in H_{y'_n}} \left( T_{h', h} + E_{h, y'_n} \right) + \theta(x_n) \Big) \qquad (4)
In our case, the sets H_{y_n} are given by the rows in Table 1. The non-blocked entries in the transition matrix T are learned (see examples of adjacency matrices in Figure A.2). We assume that gene structure is similar on the forward and reverse strands and therefore share the learned transition potentials between the two directions. Figure 1 shows a full overview of our model. In the remainder of the paper, we will refer to this model decoding architecture as CRF and L-CRF interchangeably.
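To make these quantities concrete, the log-space sketch below computes the partition function with the forward algorithm and the log-likelihood of a label path for the plain CRF of Eqs. (1)-(2); the latent case of Eqs. (3)-(4) follows by running the same recursion over hidden states and adding the fixed emission potentials. Shapes and names are illustrative rather than taken from the implementation.

import torch

def log_partition(theta, T):
    # theta: (N, S) per-position potentials theta(x_n); T: (S, S) transition potentials
    alpha = theta[0]                                             # (S,)
    for n in range(1, theta.shape[0]):
        # alpha_new[j] = logsumexp_i(alpha[i] + T[i, j]) + theta[n, j]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + T, dim=0) + theta[n]
    return torch.logsumexp(alpha, dim=0)                         # log Z

def log_likelihood(theta, T, y):
    # y: (N,) integer label path
    score = theta[0, y[0]]
    for n in range(1, len(y)):
        score = score + T[y[n - 1], y[n]] + theta[n, y[n]]
    return score - log_partition(theta, T)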
Feature Model Embeddings Although the embedding function θ in a CRF was traditionally based on manually engineered features, modern autograd frameworks make it possible to learn such functions using e.g. neural networks, and train both the featurization and decoder models end-to-end. This allows us to use a complex, context dependent mapping from the one-hot encoded input sequence as the conditioning input to our CRF. As a proof of concept, we here use a simple combination of a (dilated) CNN and an LSTM. The former ensures that we see beyond single nucleotides, avoiding the need to manually encode at the codon level, and the latter models the remaining sequential signal. We disregarded more expressive architectures such as transformers because we in the purely-supervised setting are in a limited data regime. However, we anticipate that pre-trained language models for DNA will eventually replace the need for learning new feature embeddings per organism (actually, one could meaningfully define it as a success criterion for DNA language models). We briefly explore this potential in the result section below.
Training The model was trained using AdamW on the negative log-likelihood, with a batch size of 64. We used early stopping based on the validation set likelihood to avoid overfitting.
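A schematic training loop consistent with this description might look as follows; the model object, the data loaders, the validation helper and the patience value are placeholders rather than details taken from the implementation.

import torch

optimizer = torch.optim.AdamW(model.parameters())            # model: the L-CRF with its feature model
best_nll, patience, bad_epochs = float("inf"), 5, 0           # patience is an assumed value
for epoch in range(max_epochs):
    model.train()
    for x, y in train_loader:                                 # mini-batches of 64 sequences
        loss = -model.log_likelihood(x, y).mean()             # negative log-likelihood
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    val_nll = mean_validation_nll(model, val_loader)          # placeholder evaluation helper
    if val_nll < best_nll:
        best_nll, bad_epochs = val_nll, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break                                             # early stopping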
Inference Viterbi decoding is used to find the best label sequence under the model. Alternatively sampling can be performed according to the label probabilities, either across the entire sequence or for segments. Posterior marginal probabilities per position are calculated with the forward-backward algorithm and can be used to assess model certainty for specific positions.
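A minimal Viterbi sketch over the same potentials as in the likelihood sketch above is shown below; posterior marginals would instead come from the forward-backward recursions, which are not shown here.

import torch

def viterbi(theta, T):
    # theta: (N, S) per-position potentials; T: (S, S) transition potentials
    N, S = theta.shape
    score = theta[0]
    backptr = torch.zeros(N, S, dtype=torch.long)
    for n in range(1, N):
        cand = score.unsqueeze(1) + T          # cand[i, j]: best score ending in i, then moving to j
        best, backptr[n] = cand.max(dim=0)
        score = best + theta[n]
    path = [int(score.argmax())]
    for n in range(N - 1, 0, -1):
        path.append(int(backptr[n, path[-1]]))
    return list(reversed(path))                # most probable label sequence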
Choice of performance metrics Accurately predicting the direction, as well as the donor and acceptor sites (i.e. the start and end of introns), is highly important for predicting the correct protein. For this reason, the Matthews Correlation Coefficient as well as the macro-weighted F1 score is reported for the label set containing exon/CDS, intron, donor and acceptor labels.
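For example, the per-position scores over this label set could be computed with scikit-learn, flattening predictions and references across all test sequences; the flattening is an assumption about how the scores are aggregated.

from sklearn.metrics import f1_score, matthews_corrcoef

def evaluate(y_true, y_pred):
    # y_true, y_pred: flat integer label arrays over all positions of the test sequences
    return {
        "MCC": matthews_corrcoef(y_true, y_pred),
        "macro_F1": f1_score(y_true, y_pred, average="macro"),
    }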
4 EXPERIMENTS
4.1 DATA
Sequence and general feature format (gff3) annotation files are obtained for each genome as follows: Human - Gencode, Aspergillus Nidulans and Saccharomyces Cerevisiae - EnsembleFungi, Arabidopsis Thaliana - EnsemblPlants, C. Elegans Wormbase (Frankish et al., 2021; Cunningham et al., 2022; Howe et al., 2017).
No augmentation or inspection of the data was performed, with the exception of ensuring that the total length of CDS segments in each gene was divisible by 3, so that a valid protein product was possible. Gene annotations were extracted with random flanks of between 1 and 2000 nucleotides. These relatively short flank lengths were chosen to decrease training time. The maximum allowed gene length was capped according to the gene length distribution in the species to ensure a representative data set but simultaneously exclude extreme outliers in gene length that would heavily increase compute time. The data set was homology partitioned at a threshold of 80% protein sequence similarity into train, validation and test sets. The test sets were used for benchmarking. The length of the flank can influence the performance depending on the inference task. For some ablation experiments simpler feature models have been chosen in order to reduce compute; these results may therefore reflect only relative performance rather than the full potential.
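As a rough sketch of the extraction step, a gene could be kept only if its total CDS length is divisible by 3 and then cut out with random flanks; the gene object and its attributes are hypothetical and stand in for whatever annotation record is parsed from the gff3 file.

import random

def extract_example(chromosome_seq, gene, max_flank=2000):
    # gene.cds_intervals, gene.start and gene.end are hypothetical attributes of an annotation record
    cds_total = sum(end - start for start, end in gene.cds_intervals)
    if cds_total % 3 != 0:
        return None                                    # no valid protein product possible
    left, right = random.randint(1, max_flank), random.randint(1, max_flank)
    start = max(0, gene.start - left)
    end = min(len(chromosome_seq), gene.end + right)
    return chromosome_seq[start:end]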
4.2 BENCHMARK ALGORITHMS
We benchmark our method against four other widely used gene prediction programs (Augustus, GeneMark.hmm, Snap and GlimmerHMM). Given the sensitivity of these methods to the training sets, a pre-trained parameter set for each species was used where available. This means we cannot rule out that sequences in our test set are part of the training set of the baseline models. Our reported results are thus a conservative estimate of our relative performance. There exist several versions of the GeneMark algorithm depending on the desired use case (Borodovsky & Lomsadze, 2011; Besemer et al., 2001; Lukashin & Borodovsky, 1998). We chose GeneMark.hmm as this is a supervised, pre-trained, per-species algorithm, which matches the scope here.
4.3 BENCHMARKING ACROSS EVOLUTIONARILY DIVERSE SPECIES
To demonstrate the performance of our approach we benchmark against four widely used programs on five phylogenetically diverse well-studied species typically used in benchmarking (Table 2). Our model, GeneDecoder, was trained separately on genes from each organism, which is the standard for such gene-finders (Stanke et al., 2006; Borodovsky & Lomsadze, 2011). For the other models, pretrained versions of the programs for the given species, were used in order to not reduce performance by re-training on a data set that was insufficiently curated for their model.
We find that only Augustus achieves similar performance to our model, with Snap performing markedly lower across all species it was tested on. GlimmerHMM and GeneMark.hmm show better performance but still have significantly lower predictive power compared to Augustus. This further cements previous findings that Augustus achieves state-of-the-art performance (Scalzitti et al., 2020).
Our model, GeneDecoder, competes with or outperforms Augustus, which is a state-of-the-art gene prediction software, across the benchmark, with the exception of the lower F1 score for S. Cerevisiae. The low F1 score for S. Cerevisiae across all algorithms is due to the low number of intron-containing genes. Less than 1% of genes contain introns in the data set for this organism. Since the F1 score is highly influenced by performance on underrepresented label categories, this reduced performance comes from poor prediction of introns and donor/acceptor sites. It was not possible to obtain Augustus' training data set for this species, but according to the official training instructions of the software, data sets must be curated to contain a high proportion of intron-containing genes. This would explain the discrepancy in the F1 score for Augustus and GeneDecoder; however, it also highlights that our model is not highly influenced by the quality of the data set. We expect that the performance of GeneDecoder would improve with a curated dataset or oversampling of intron-containing genes.
We further compare Augustus with our model by training and testing on the original Augustus data sets for Human (Stanke, 2004). The training data set is in a low data regime, consisting of 1284 training genes, which is not ideal for deep learning settings. Furthermore, the Augustus data set is strictly homology reduced, allowing no more than 80% sequence similarity at the protein level. Nevertheless, our model matches the performance of Augustus on its own data set (see Figure A.5). The difference in predictive performance on non-coding labels is likely due to a manually set self-transition probability of 0.99 for non-coding labels in the Augustus model. This improves Augustus' performance on non-coding labels but might come at the expense of predicting other labels. While improving Augustus' performance requires the use of external hints chosen by the user, our model may be improved by learning better embeddings.
4.4 LEARNED EMBEDDINGS IMPROVE PREDICTIONS
To explore the performance contribution of different aspects of our model, we performed ablations in which parts of the model were excluded or simplified (Table 3). Note that these models were trained on a smaller subset of the A. thaliana data set.
First, to test the capabilities of the CRF alone, a single linear layer is implemented to map each position from the one-hot encoded DNA sequence to the hidden states. Two graphs of different complexity were used for this experiment (see Table 1 and the graphs in Appendix A.2). While both Linear-CRFs exhibit very poor predictive power, the more complex codon graph, which encodes knowledge of codon structure, clearly performs better than the simple graph, illustrating the need
for a higher graph complexity to improve modelling performance. This is consistent with similar observations made for HMM-based gene predictors Stanke & Waack (2003).
By introducing a feature model such as an LSTM, performance is greatly enhanced. While an LSTM trained alone also displays decent performance it is evident from Figure A.5 that without the CRF, splice sites are often not labelled correctly which is vital for producing consistent predictions.
Interestingly training a linear layer on fixed embeddings from the GPN language model (Benegas et al., 2022) achieves high performance (Figure A.5). This indicates that the masked pre-training of the GPN model captures DNA sequence patterns relevant for gene prediction, despite only having trained on a single organism and primarily on non-coding DNA. The emergent field of DNA language modelling could have a significant effect on downstream DNA modelling tasks, as they have had in the protein field. These ablation studies reveal that with a good feature model, there is less need for as complex a graph structure. The feature model can instead learn complex dependencies directly from the sequence.
The best performing model, which we refer to as GeneDecoder, uses a feature model that combines dilated convolutions with a bidirectional LSTM. We hypothesise that the increased performance over an LSTM alone is due to better long-range information provided by the dilated convolutions. The details of the architecture are available in Appendix A.4.
4.5 LEARNED EMBEDDINGS ACCURATELY MODEL LENGTH DISTRIBUTIONS
Accurately capturing length distributions of introns and exons is essential for a well performing gene predictor. Along with explicitly modelling exon length distributions, Stanke & Waack (2003) had to introduce an intron sub-model with states modelling both explicit length distributions as well as fixed length emissions, to meaningfully capture intron lengths.
We find that our model readily learns these signals directly from data (Figure 4.5). The latent CRF without learned embeddings is not sufficient to reproduce the true lengths, resulting in a flattened distribution similar to Stanke & Waack (2003). However, with the learned embeddings, GeneDecoder captures the length distributions faithfully, especially for exons. It is very promising that the model using pretrained GPN embeddings is also able to capture the length distributions accurately, even though the GPN embeddings were not fine-tuned in this case. This further cements that incorporating deep learning, and particularly unsupervised pre-trained embeddings, is a promising direction for GeneDecoder and
gene finders in general, especially when expanding the model to perform cross-species prediction. However, DNA language modelling is still a nascent field and there are currently many unanswered questions on how to transfer the masked modelling concept to DNA sequences as compared to proteins. These concern the sparsity of information, the longer sequences, and the much longer-range signal that likely needs to be captured for high performance in many downstream tasks. The GPN model is trained on DNA segments of 512 nucleotides, which might exclude capturing many longer-range signals and affect performance.
4.6 LATENT STRUCTURE CAN INFER UNOBSERVED LABELS
To test the capacity of the latent graph of our model, we explore whether it can learn unobserved states directly from the sequence. One such example is inferring the directionality of genes. Genes can be encoded on both strands of the double-helical DNA, but usually only one strand is sequenced. Genes that lie in the "reading" direction of the sequenced strand are designated as forward, and genes that would be read on the opposite strand as reverse. We train our model on a directionless set of observed labels (see Simple-DA in Appendix A.1). Since the training data does not provide any information about directionality, the model must learn this directly from the sequence to be able to distinguish between directions.
Figure 4 shows the comparison between our model trained on the directionless label set and one trained on a set including directional labels (see Simple-direction-DA in Appendix A.1). Although there is a reduction in performance between the two training regimes, the model is still able to learn and infer direction. Surprisingly, the reduced performance does not originate from confusion between forward and reverse labels but rather from the model's propensity to over-predict non-coding labels (see Appendix A.5).
5 CONCLUSION
Here we present a novel approach, GeneDecoder, to gene prediction combining the advantages of graphical models with the flexibility and potential of deep learned embeddings.
We find that the latent CRF not only preserves the graph encoding of prior knowledge, as in the state-of-the-art HMM gene predictors, but also offers an advantage by allowing us to learn embeddings directly from the sequence. This advantage is visible not only in the performance, but also in the low effort required to construct data sets and train the model. The greatest advantage, however, might well be the flexibility of the learned embeddings and the potential to improve them as the DNA modelling field advances.
Even now this seems evident from the preliminary studies performed here using embeddings from a DNA language model trained only on a single species, with very short input sequences. We expect that as the field progresses, language models more specifically tailored to DNA will emerge, modelling longer-range interactions and training across multiple species. Such results have already been indicated by the success of the Enformer model (Avsec et al., 2021), which demonstrated the importance of long-range information as well as the ability to learn downstream tasks it was not trained on.
One potential limitation of our model is that the amount of non-coding DNA (in our case the flanking regions) in the training data set can affect the performance of the model on data with significantly more non-coding DNA. This issue can be resolved in the training process by scaling the training to larger data sets containing longer non-coding regions. As this presents a technical, time and resource challenge rather than a scientific one, we leave it for future work. Our model provides the capacity to include external hints in the inference process, either by sampling high-probability paths around fixed labels or by modifying the hidden state probabilities. The latter method can even be used as a strategy during training, highlighting the flexibility of our model. Lastly, per-position model certainty is directly provided by the model in the form of posterior marginal probabilities. We leave it for future exploration to rigorously benchmark this.
We also expect that language modelling will alleviate issues like this by providing far better and more informative embeddings of the raw sequence, but we leave this for future work. In this regard we view the model presented here as an initial iteration that can only improve as DNA modelling does. We anticipate that future iterations of GeneDecoder will incorporate some form of self-supervised pre-training across various species, which could vastly improve prediction not only in species with rich gene annotation data but also in novel or understudied organisms.
6 CODE AVAILABILITY
Code will be available on github upon publication.
A APPENDIX
A.1 LABEL SETS
Generally five types of labels are used:
E : Exon/CDS (i.e. the coding part of a sequence)
I : Intron
D : Donor site (one of the two splice sites)
A : Acceptor site (one of the two splice sites)
NC : Non-coding or intergenic regions of the genome.
Subscripts F and R (e.g. EF and ER) are used to denote the forward and reverse direction of the gene. Numbered subscripts are used to denote codon structure (i.e. E1,F is the first exon nucleotide in a codon in the forward direction).
Simple-DA:
The simplest set of states used. Y = { E, D, I, A, NC }
Simple-direction-DA:
Expands the set of states to include information about the direction/strand of the gene. Y = { EF, DF, IF, AF, ER, DR, IR, AR, NC }
Codon-direction-DA:
Further expands the set of states to include not only direction but also codon structure. Y = { E1,F, E2,F, E3,F, D1,F, D2,F, D3,F, I1,F, I2,F, I3,F, A1,F, A2,F, A3,F, E1,R, E2,R, E3,R, D1,R, D2,R, D3,R, I1,R, I2,R, I3,R, A1,R, A2,R, A3,R, NC }
A.2 GRAPHS
Graphs are represented as adjacency matrices and are named according to the label sets described in Appendix A.1. A grey cell signifies a forbidden transition while the weights of the green cells are learned by the model.
A.3 EMISSIONS
A grey cell signifies a forbidden emission while the weights of the green cells are set to 1 (i.e. an allowed emission).
A.4 GENEDECODER FEATURE MODEL ARCHITECTURE
Feature Model
Where |X| is the number of channels in the input, and |H| is the size of the set of hidden states. In most cases X = {A,C,G, T,N}.
DilatedCNN(
Layer 1 : Conv1d(in channels = |X|, out channels = 50, kernel size = (9,), stride = 1, dilation = 1), ReLU
Layer 2 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 2), ReLU
Layer 3 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 4), ReLU
Layer 4 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 8), ReLU
Layer 5 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 16), ReLU
Layer 6 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 32), ReLU
)
LSTM(
LSTM layer : lstm(input size = 50, hidden size = 100, bidirectional = True)
Output layer : linear(input size = 200, output size = |H|, bias = True, dropout(0.2)), ReLU
)
A.5 SUPPLEMENTARY FIGURES | 1. What is the focus and contribution of the paper on gene finding?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its architecture and performance?
3. Do you have any concerns about the novelty of the method, especially when compared to other state-of-the-art techniques?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the paper that the reviewer would like to address without being too specific? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors have presented a novel method for gene finding, where the task is to label spans of sequence with functional labels. The presented method uses a neural algorithm that leverages conditional random fields (CRF) over the state-of-the-art HMM methods. They provide a way to fuse pre-trained representations of genomic sequences with their architecture for this task. The novelty is in terms of architecture, incorporating domain information in the form of allowed directions of state changes, and experimentation across organisms.
Strengths And Weaknesses
Strengths
State of the art gene finding
Pioneer attempt to combine pre-trained embeddings with gene finding
Incorporation of domain knowledge to improve performance
Exhaustive experimentation
Cross organism experiment verification
Weaknesses
No clarity or intuition on why CNN + LSTM does better than pre-trained models
While an empirical ablation has been done, it seems non-intuitive that the pre-trained embeddings do not help much, considering they are tested on similar downstream non-coding tasks.
Novelty needs to be explained for architecture
For example, how does the combination of embeddings with the neural HMM-based formulation work as compared to the state-of-the-art HMM methods?
The efficiency study for these methods is missing. What do we gain by using this architecture (which takes time to compute) against others? This is relevant especially because the values of Augustus’ baseline performance is quite close.
No statistical significance values reported
Clarity, Quality, Novelty And Reproducibility
Clarity - Paper is very easy to read and follow
Quality - Moderate quality work - requires more analysis to justify the findings
Novelty - Moderately novel - CRFs are the mainstay of POS tagging tasks and this is the first application in the gene embedding space. Though novel, more analysis is required to ascertain the need with respect to state of the art methods.
Reproducible - The paper is easily reproducible and gives clarity. |
ICLR | Title
Gene finding revisited: improved robustness through structured decoding from learning embeddings
Abstract
Gene finding is the task of identifying the locations of coding sequences within the vast amount of genetic code contained in the genome. With an ever increasing quantity of raw genome sequences, gene finding is an important avenue towards understanding the genetic information of (novel) organisms, as well as learning shared patterns across evolutionarily diverse species. The current state of the art are graphical models usually trained per organism and requiring manually curated data sets. However, these models lack the flexibility to incorporate deep learning representation learning techniques that have in recent years been transformative in the analysis of protein sequences, and which could potentially help gene finders exploit the growing number of the sequenced genomes to expand performance across multiple organisms. Here, we propose a novel approach, combining learned embeddings of raw genetic sequences with exact decoding using a latent conditional random field. We show that the model achieves performance matching the current state of the art, while increasing training robustness, and removing the need for manually fitted length distributions. As language models for DNA improve, this paves the way for more performant cross-organism gene-finders.
1 INTRODUCTION
Genes are patches of deoxyribonucleic acid (DNA) in our genome that encode functional and structural products of the cell. The central dogma of biology states that these segments are transcribed into ribonucleic acid (RNA) and in many cases translated into the amino acid sequences of proteins. In recent years, the machine learning community has dedicated considerable attention specifically to studying proteins, and solving various protein-related tasks, with the aid of deep learning. This focus has resulted in impressive advances within the field (Detlefsen et al., 2022; Jumper et al., 2021; Rao et al., 2020; Shin et al., 2021). Less attention has been paid to the DNA sequences themselves, despite the fact that finding genes in a genome remains an important open problem. Due to technological advances, the rate by which genomes are sequenced is rising much more rapidly than we can reliably annotate genes experimentally, and without proper gene annotations, we lack information about the proteins encoded in these sequences. In particular, for taxonomies that are sparsely characterised or highly diverse, such as fungi, this hinders us from extracting essential information from newly sequenced genomes.
The wealth of available genomic data suggests that this is an area ripe for a high-capacity deep learning approaches that automatically detect the most salient features in the data. This potential has in recent years been clearly demonstrated in the realm of proteins where deep learning has proven extremely effective in both the supervised setting (Alley et al., 2019; Hsu et al., 2022; Jumper et al., 2021) and in the unsupervised setting (Rao et al., 2021; 2020; Vig et al., 2021). In particular, embeddings obtained in transformer based protein language models have pushed the boundaries for performance in many downstream sequence-based prediction tasks. The advantages of such models are two-fold: 1) they enable pre-training in cases where unlabelled data far outweighs labelled data and 2) they have demonstrated the ability to learn across diverse proteins. We are currently witnessing an emerging interest in language models for DNA as well, but progress in this area has proven more difficult than for its protein counterpart. In a completely unsupervised setting the amount of DNA data is orders of magnitude larger than that of proteins, and the signals are
correspondingly sparse. For instance, Eukaryotic genomes consist of millions to billions of DNA base pairs but only a small percentage are genes and an even smaller percentage codes for protein (approx. 1% in the human genome). Genes also have an intricate structure which places demands on a high degrees of consistency between output labels predicted at different positions in the sequence. In particular, genes contain both coding segments (called CDS or exon) and intron segments. Only the coding segments are retained in the final protein, while the introns are removed. The process in which introns are removed is called splicing, which occurs after the gene is transcribed from DNA to RNA. After introns are spliced out, the RNA is translated to amino acid sequences (the protein product). Each amino acid is encoded by a codon (a triplet of DNA nucleotides) in the RNA sequence. Due to this codon structure, the annotation of the coding sequences in the gene must be extremely accurate, as shifting the frame of the codon with just one will result in a nonsensical protein. Gene prediction thus consists of correctly annotating the boundaries of a gene as well as the intron splice sites (donor/acceptor sites), a task challenged both by the imbalanced data but also by the extreme accuracy needed.
The current state-of-the-art in gene-finding relies on Hidden Markov Models (HMMs) and exact decoding (e.g. Viterbi) to ensure the required consistency among predictions at different output positions. To make these methods work well in practice, considerable effort has been put into hand-coded length distributions inside the HMM transition matrix, and a careful curation of the training data to ensure that the length statistics are representative for the genome in question. The resulting HMMs have dominated the field for more than two decades. However, their performance still leaves a lot to be desired, they are generally difficult to train, and have no mechanism for incorporating learned embeddings and context dependent learned length distributions. These models can be improved by incorporating them with external hints and constructing pipelines (Hoff et al., 2016) but they are not compatible with deep learning advents that have revolutionised adjacent fields. The goal with this paper is to develop a new approach that is compatible with contemporary deep learning practices, can be trained without manual feature engineering and careful data curation, while maintaining the capability for exact decoding.
Here we present an approach, which we term GeneDecoder, to gene prediction that is able to both incorporate prior knowledge of gene structure in the form of a latent graphs in a Conditional Random Fields as well as embeddings learned directly from the DNA sequence. This approach proves easy to train naively while still achieving high performance across a range of diverse genomes. We highlight that the resulting model is very flexible and open to improvement either by including external hints or by improvement of the current DNA sequence embeddings. We benchmark against three other supervised algorithms (Augustus, Snap, GlimmerHMM) and find that the performance of our model competes with that of the state-of-the-art (Scalzitti et al., 2020) without a strong effort put into model-selection. However, as pre-trained DNA models start to emerge and improve we expect that the full potential of this approach will be realised.
2 RELATED WORK
Current gene prediction algorithms are Hidden Markov Models (HMM) or Generalized HMMs. These include Augustus (Stanke & Waack, 2003) , Snap (Korf, 2004)., GlimmerHMM (Majoros et al., 2004) and Genemark.hmm (Borodovsky & Lomsadze, 2011). All these models are trained fully supervised and on a per-organism basis. Genemark also exists in a version that is similarly trained on one organism but in an iterative self-supervised manner (Ter-Hovhannisyan et al., 2008). In practice, gene prediction is often done through meta-prediction pipelines such as Braker (Hoff et al., 2016), Maker2 (Holt & Yandell, 2011) and Snowyowl (Reid et al., 2014), which typically combine preexisting HMMs with external hints (e.g. protein or RNA alignments) and/or iterated training. Focusing on the individual gene predictors, Augustus is currently the tool of choice in the supervised setting, according to a recent benchmark study (Scalzitti et al., 2020). It is an HMM with explicit length distributions for introns and CDS states. The model also includes multiple types of intron and exon states emitting either fixed-length sequences or sequences from a length distribution given by a specific choice of self-transition between states. This intron model has been shown to be key to its performance; without it Augustus was found to be incapable of modelling length distributions in introns correctly (Stanke & Waack, 2003). Models like Snap and GlimmerHMM follow similar ideas, but differ in their transition structure. In particular, GlimmerHMM includes a splice site model from Genesplicer (Pertea et al., 2001). These HMM-gene predictors are known to be
sensitive to the quality of the input data set. For instance, the Augustus training guidelines specifies requirements for non-redundancy, non-overlapping genes and restrictions to single transcripts per genes. These considerations are not merely theoretical. In the preparation of the baseline results for this paper, we have attempted to retrain several of these models on our own data set splits, but were unable to obtain competitive results. The Augustus guidelines also report diminishing returns for data sets with more than 1000 genes, and highlights the importance of quality over quantity in the training set in this scenario. These recommendations are sensible in a low-data regime, but we might wish to relax them as we obtain more genomic data. It would be convenient if we could train gene predictor models on lower quality data sets, including multiple, potentially conflicting transcripts arising for instance from alternative splicing. The goal of this paper is to explore modelling strategies towards this goal.
Eventually, we would also like to train cross-genome gene predictors, for instance using featurizations obtained from pre-trained language models. Following the success of pre-trained models on protein sequences, DNA language modelling is starting to emerge as a field. Currently, two such models are available: DNABert (Ji et al., 2021) and GPN (Benegas et al., 2022). Both are trained on a single organism, each across a whole genome with a sequence window of 512 nucleotides. While DNABert is a standard transformer based model for which long sequences can be computationally prohibitive, GPN (Benegas et al., 2022) is based on dilated convolutions where it could be possible to extend the sequence length. In this work we perform preliminary explorations of using GPN embeddings for gene prediction.
3 METHODS
3.1 MODEL
Latent conditional random fields Genomic datasets are highly unbalanced, containing only a very small percentage of the donor and acceptor labels. Since identification of these labels is paramount for correct annotation, the model must be designed to deal robustly with this. Furthermore, we wish to encode prior knowledge of gene structure in order to produce consistent predictions. Finally, the model must be flexible in the type of input used, so that context dependent embeddings can be learned directly from the sequence. To fulfil these requirements, we choose a Linear chain Conditional Random Field (CRF) (Lafferty et al., 2001) model. The CRF can, like the HMM, encode gene structure in its graph, but because it models the conditional probability of the output labels, rather than the joint distribution of input and output, it is well suited for integration with a embedding model. In doing so, we can hope to learn a richer representation of the input, while still training the model end-to-end.
The input to the embedding model are one-hot encoded values X = {A,C,G, T,N}, where N is unknown nucleotides. As output labels, we have Exon (E), Intron (I), Donor (D), Acceptor (A) and Non-coding (NC). The CRF learns transition potentials (i.e. unnormalized transition probabilities) between these states, but we can manually set particular of these potentials to − inf to allow only biologically meaningful transitions, thus following a similar strategy as that employed in classic HMM gene predictors. Two important restrictions are 1) directionality, ensuring that forward and reverse coded genes are not intermixed, and 2) codon-awareness, which ensures that the coding
regions of genes are always a multiple of three. To encode this in the transition potentials, we expand the label set (Table 1). Note that in addition to the exon states, it is necessary to expand also the intron, acceptor and donor states such that codon position can be tracked through the intron regions.
The likelihood for a CRF takes the form:
P(y \mid x, \theta) = \frac{1}{Z} \exp \sum_{n=1}^{N} \left( T_{y_{n-1}, y_n} + \theta(x_n) \right) \qquad (1)
T is the learned transition weights of the observed labels and θ the learned input embedding. The partition function Z is the summed score over all possible label sequences:
Z = \sum_{y' \in Y} \exp \sum_{n=1}^{N} \left( T_{y'_{n-1}, y'_n} + \theta(x_n) \right) \qquad (2)
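Since the sum over all label sequences in Z is exponential in N, it is computed in practice with the standard forward recursion in log space. A minimal sketch, assuming per-position emission scores θ(x_n) have already been computed for every state:

```python
import torch

def log_partition(emissions: torch.Tensor, transitions: torch.Tensor) -> torch.Tensor:
    """emissions: (N, S) scores theta(x_n) per position and state; transitions: (S, S)."""
    alpha = emissions[0]  # log-scores of all length-1 prefixes
    for n in range(1, emissions.shape[0]):
        # alpha_new[j] = logsumexp_i(alpha[i] + T[i, j]) + emissions[n, j]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) + emissions[n]
    return torch.logsumexp(alpha, dim=0)  # log Z
```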
In the standard CRF, the states appearing in the transition table will be observed during training. In our case, we do not wish to specify the direction and codon-index a priori. HMM-based gene finders solve this problem by formulating the Markov process in a latent discrete space, and coupling it to the output states through a learned emission probability table. A similar solution is not easily tractable for CRFs. However, as shown by Morency et al., it becomes feasible if we introduce a many-to-one deterministic mapping between latent and observed states (Morency et al., 2007). This is a natural assumption to make in our case, as illustrated by the individual rows in Table 1. The corresponding Latent CRF (L-CRF) includes a summation over the set of latent states emitting a given observed state.
P(y \mid x, \theta) = \frac{1}{Z} \exp \sum_{n=1}^{N} \left( \sum_{h' \in H_{y_{n-1}}} \sum_{h \in H_{y_n}} \left( T_{h', h} + E_{h, y_n} \right) + \theta(x_n) \right) \qquad (3)

where H_{y_n} denotes the set of hidden states emitting the value taken by y_n. The learned transition matrix T is now over the hidden states and the emission matrix E denotes the emission potentials from hidden to observed states. Similarly, the partition function now also sums over the sets of hidden states that emit y:

Z = \sum_{y' \in Y} \exp \sum_{n=1}^{N} \left( \sum_{h' \in H_{y'_{n-1}}} \sum_{h \in H_{y'_n}} \left( T_{h', h} + E_{h, y'_n} \right) + \theta(x_n) \right) \qquad (4)

In our case, H_{y_n} is given by the rows in Table 1. The non-blocked entries in the transition matrix T are learned (see examples of adjacency matrices in Figure A.2). We assume that gene structure is similar on the forward and reverse strands and therefore share the learned transition potentials between the two directions. Figure 1 shows a full overview of our model. In the remainder of the paper, we will refer to this model decoding architecture as CRF and L-CRF interchangeably.
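A small sketch of the many-to-one mapping from latent to observed states, mirroring the row structure of Table 1 (the concrete state names are assumptions for illustration):

```python
import torch

OBSERVED = ["E", "I", "D", "A", "NC"]
# Each observed label is emitted by several hidden states (direction/codon copies).
HIDDEN_OF = {
    "E":  ["E1_F", "E2_F", "E3_F", "E1_R", "E2_R", "E3_R"],
    "D":  ["D1_F", "D2_F", "D3_F", "D1_R", "D2_R", "D3_R"],
    "I":  ["I1_F", "I2_F", "I3_F", "I1_R", "I2_R", "I3_R"],
    "A":  ["A1_F", "A2_F", "A3_F", "A1_R", "A2_R", "A3_R"],
    "NC": ["NC"],
}
HIDDEN = [h for states in HIDDEN_OF.values() for h in states]
h_idx = {h: i for i, h in enumerate(HIDDEN)}

# Emission potentials E: allowed hidden -> observed pairs are open (0.0, cf. A.3),
# all other pairs are forbidden (-inf), giving the deterministic many-to-one mapping.
emission = torch.full((len(HIDDEN), len(OBSERVED)), float("-inf"))
for obs, states in HIDDEN_OF.items():
    for h in states:
        emission[h_idx[h], OBSERVED.index(obs)] = 0.0
```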
Feature Model Embeddings Although the embedding function θ in a CRF was traditionally based on manually engineered features, modern autograd frameworks make it possible to learn such functions using e.g. neural networks, and to train both the featurization and decoder models end-to-end. This allows us to use a complex, context dependent mapping from the one-hot encoded input sequence as the conditioning input to our CRF. As a proof of concept, we here use a simple combination of a (dilated) CNN and an LSTM. The former ensures that we see beyond single nucleotides, avoiding the need to manually encode at the codon level, and the latter models the remaining sequential signal. We disregarded more expressive architectures such as transformers because, in the purely supervised setting, we are in a limited data regime. However, we anticipate that pre-trained language models for DNA will eventually replace the need for learning new feature embeddings per organism (indeed, one could meaningfully define this as a success criterion for DNA language models). We briefly explore this potential in the results section below.
Training The model was trained using AdamW on the negative log likelihood, with a batch size of 64. We used early stopping based on the validation set likelihood to avoid overfitting.
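A minimal training-loop sketch under these settings; `model.neg_log_likelihood` and the data loaders are assumed interfaces, not the actual implementation:

```python
import torch

def train(model, train_loader, val_loader, max_epochs=100, patience=5):
    opt = torch.optim.AdamW(model.parameters())
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:  # batches of 64 sequences
            loss = model.neg_log_likelihood(x, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(model.neg_log_likelihood(x, y).item() for x, y in val_loader)
        if val < best_val:
            best_val, bad_epochs = val, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # early stopping on validation NLL
                break
    return model
```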
Inference Viterbi decoding is used to find the best label sequence under the model. Alternatively sampling can be performed according to the label probabilities, either across the entire sequence or for segments. Posterior marginal probabilities per position are calculated with the forward-backward algorithm and can be used to assess model certainty for specific positions.
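For reference, a log-space Viterbi sketch over the hidden states (emission scores again assumed precomputed per position; the forward–backward marginals use the same style of recursion with logsumexp in place of max, plus a backward pass):

```python
import torch

def viterbi(emissions: torch.Tensor, transitions: torch.Tensor) -> list:
    """emissions: (N, S); transitions: (S, S). Returns the highest-scoring state path."""
    N, S = emissions.shape
    score = emissions[0]
    backptr = torch.zeros(N, S, dtype=torch.long)
    for n in range(1, N):
        cand = score.unsqueeze(1) + transitions   # cand[i, j] = score[i] + T[i, j]
        score, backptr[n] = cand.max(dim=0)
        score = score + emissions[n]
    path = [int(score.argmax())]
    for n in range(N - 1, 0, -1):                 # backtrack from the last position
        path.append(int(backptr[n, path[-1]]))
    return list(reversed(path))
```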
Choice of performance metrics Accurately predicting the direction, as well as the donor and acceptor sites (i.e. start and end of introns), is highly important for predicting the correct protein. For this reason, the Matthews Correlation Coefficient as well as the macro-weighted F1 score is reported on the label set containing exon/CDS, intron, donor and acceptor labels.
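A sketch of how these metrics could be computed with scikit-learn on flattened per-position labels (the label subset follows the text above; the helper itself is illustrative):

```python
from sklearn.metrics import f1_score, matthews_corrcoef

def report(y_true, y_pred, labels=("E", "I", "D", "A")):
    """y_true/y_pred: flat per-position label lists restricted to the gene label set."""
    return {
        "MCC": matthews_corrcoef(y_true, y_pred),
        "macro_F1": f1_score(y_true, y_pred, labels=list(labels), average="macro"),
    }
```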
4 EXPERIMENTS
4.1 DATA
Sequence and general feature format (gff3) annotation files are obtained for each genome as follows: Human - Gencode, Aspergillus nidulans and Saccharomyces cerevisiae - EnsemblFungi, Arabidopsis thaliana - EnsemblPlants, C. elegans - WormBase (Frankish et al., 2021; Cunningham et al., 2022; Howe et al., 2017).
No augmentation or inspection of the data was performed, with the exception of ensuring that the total length of CDS segments in each gene was divisible by 3, so that a valid protein product was possible. Gene annotations were extracted with random flanks of a length between 1 and 2000 nucleotides. These relatively short flank lengths were chosen to decrease training time. The maximum allowed gene length was capped according to the gene length distribution in the species to ensure a representative data set while excluding extreme outliers in gene length that would heavily increase compute time. The data set was homology partitioned at a threshold of 80% protein sequence similarity into train, validation and test sets. The test sets were used for benchmarking. The length of the flank can influence the performance depending on the inference task. For some ablation experiments simpler feature models were chosen in order to reduce compute; these results may therefore reflect only a relative performance rather than the full potential.
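A sketch of the per-gene extraction step described above (gff3 parsing is omitted; the helper and its argument names are illustrative):

```python
import random

def extract_example(chrom_seq: str, gene_start: int, gene_end: int, cds_lengths):
    # Keep only genes whose total CDS length is divisible by 3 (valid protein product).
    if sum(cds_lengths) % 3 != 0:
        return None
    left = random.randint(1, 2000)   # random flank lengths between 1 and 2000 nt
    right = random.randint(1, 2000)
    start = max(0, gene_start - left)
    end = min(len(chrom_seq), gene_end + right)
    return chrom_seq[start:end]
```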
4.2 BENCHMARK ALGORITHMS
We benchmark our method against four other widely used gene prediction programs (Augustus, GeneMark.hmm, Snap and GlimmerHMM). Given the sensitivity of these methods to their training sets, a pre-trained parameter set for each species was used where available. This means we cannot rule out that sequences in our test set are part of the training set of the baseline models. Our reported results are thus a conservative estimate of our relative performance. There exist several versions of the GeneMark algorithm depending on the desired use case (Borodovsky & Lomsadze, 2011; Besemer et al., 2001; Lukashin & Borodovsky, 1998). We chose GeneMark.hmm as this is a supervised, pre-trained, per-species algorithm, which matches the scope here.
4.3 BENCHMARKING ACROSS EVOLUTIONARILY DIVERSE SPECIES
To demonstrate the performance of our approach we benchmark against four widely used programs on five phylogenetically diverse, well-studied species typically used in benchmarking (Table 2). Our model, GeneDecoder, was trained separately on genes from each organism, which is the standard for such gene-finders (Stanke et al., 2006; Borodovsky & Lomsadze, 2011). For the other models, pretrained versions of the programs for the given species were used, in order not to reduce performance by re-training on a data set that was insufficiently curated for their model.
We find that only Augustus achieves similar performance to our model, with Snap performing markedly lower across all species it was tested on. GlimmerHMM and GeneMark.hmm show better performance, but still have significantly lower predictive power compared to Augustus. This further cements previous findings that Augustus achieves state of the art performance (Scalzitti et al., 2020).
Our model, GeneDecoder, competes with or outperforms Augustus, which is a state-of-the-art gene prediction software, across the benchmark, with the exception of the lower F1 score for S. cerevisiae. The low F1 score for S. cerevisiae across all algorithms is due to the low number of intron-containing genes. Less than 1% of genes contain introns in the data set for this organism. Since the F1 score is highly influenced by performance on underrepresented label categories, this reduced performance comes from poor prediction of introns and donor/acceptor sites. It was not possible to obtain Augustus' training data set for this species, but according to the official training instructions of the software, data sets must be curated to contain a high proportion of intron-containing genes. This would explain the discrepancy in the F1 score between Augustus and GeneDecoder; however, it also highlights that our model is not highly influenced by the quality of the data set. We expect that the performance of GeneDecoder would improve with a curated dataset or oversampling of intron-containing genes.
We further compare Augustus with our model by training and testing on the original Augustus data sets for human (Stanke, 2004). The training data set is in a low data regime, consisting of 1284 training genes, which is not ideal for deep learning settings. Furthermore, the Augustus data set is strictly homology reduced, allowing no more than 80% sequence similarity at the protein level. Nevertheless, our model matches the performance of Augustus on its own data set (see Figure A.5). The difference in predictive performance on non-coding labels is likely due to a manually set self-transition probability of 0.99 for non-coding labels in the Augustus model. This improves Augustus' performance on non-coding labels but might come at the expense of predicting other labels. While improving Augustus' performance requires the use of external hints chosen by the user, our model may be improved by learning better embeddings.
4.4 LEARNED EMBEDDINGS IMPROVE PREDICTIONS
To explore the performance contribution of different aspects of our model we performed ablations, in which parts of the model were excluded or simplified (Table 3). Note that these models were trained on a smaller subset of the A. thaliana data set.
First, to test the capabilities of the CRF alone, a single linear layer is implemented to map each position from the one-hot encoded DNA sequence to the hidden states. Two graphs of different complexity were used for this experiment (see Table 1 and the graphs in Appendix A.2). While both Linear-CRFs exhibit very poor predictive power, the more complex codon graph, which encodes knowledge of codon structure, clearly performs better than the simple graph, illustrating the need for a higher graph complexity to improve modelling performance. This is consistent with similar observations made for HMM-based gene predictors (Stanke & Waack, 2003).
By introducing a feature model such as an LSTM, performance is greatly enhanced. While an LSTM trained alone also displays decent performance, it is evident from Figure A.5 that without the CRF, splice sites are often not labelled correctly, which is vital for producing consistent predictions.
Interestingly, training a linear layer on fixed embeddings from the GPN language model (Benegas et al., 2022) achieves high performance (Figure A.5). This indicates that the masked pre-training of the GPN model captures DNA sequence patterns relevant for gene prediction, despite only having been trained on a single organism and primarily on non-coding DNA. The emergent field of DNA language modelling could have a significant effect on downstream DNA modelling tasks, as such models have had in the protein field. These ablation studies reveal that with a good feature model, there is less need for as complex a graph structure. The feature model can instead learn complex dependencies directly from the sequence.
The best performing model, which we refer to as GeneDecoder, uses a feature model with a combination of dilated convolutions as well as a bidirectional LSTM. We hypothesise that the increased performance over an LSTM alone is due to better long range information provided by the dilated convolutions. The details of the architecture are available in Appendix A.4.
4.5 LEARNED EMBEDDINGS ACCURATELY MODEL LENGTH DISTRIBUTIONS
Accurately capturing length distributions of introns and exons is essential for a well performing gene predictor. Along with explicitly modelling exon length distributions, Stanke & Waack (2003) had to introduce an intron sub-model with states modelling both explicit length distributions as well as fixed length emissions, to meaningfully capture intron lengths.
We find that our model readily learns these signals directly from data (Figure 4.5). The latent CRF without learned embeddings is not sufficient to reproduce the true lengths, resulting in a flattened distribution similar to Stanke & Waack (2003). However, with the learned embeddings, GeneDecoder captures the length distributions faithfully, especially for exons. It is very promising that the pretrained GPN embeddings are also able to capture the length distributions accurately. The GPN embeddings were not fine-tuned in this case. This further cements that incorporating deep learning, and particularly unsupervised pre-trained embeddings, is a promising direction for GeneDecoder and
gene finders in general, especially when expanding the model to perform cross-species prediction. However, DNA language modelling is still a nascent field and there are currently many unanswered questions on how to transfer the masked modelling concept to DNA sequences as compared to proteins. These concern the sparsity of information, the longer sequences and the much longer range signal that likely needs to be captured for high performance in many downstream tasks. The GPN model is trained on DNA segments of 512 nucleotides, which might exclude capturing many longer range signals and affect performance.
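The intron and exon length statistics compared here can be read off a per-position label sequence with a simple run-length pass; a minimal sketch:

```python
from itertools import groupby

def segment_lengths(labels, target="I"):
    """Lengths of maximal runs of `target` (e.g. introns) in a label sequence."""
    return [sum(1 for _ in run) for lab, run in groupby(labels) if lab == target]

# Example: two introns of lengths 3 and 2 in the labelling below.
assert segment_lengths("EEEIIIEEIIE", target="I") == [3, 2]
```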
4.6 LATENT STRUCTURE CAN INFER UNOBSERVED LABELS
To test the capacity of the latent graph of our model, we explore whether it can learn unobserved states directly from the sequence. One such example is inferring the directionality of genes. Genes can be encoded on both strands of the double-helical DNA, but usually only one strand is sequenced. Genes that lie in the "reading" direction of the sequenced strand are designated as forward, and genes that would be read on the opposite strand as reverse. We train our model on a directionless set of observed labels (see Simple-DA in Appendix A.1). Since the training data does not provide any information about directionality, the model must learn this directly from the sequence to be able to distinguish between directions.
Figure 4 shows the comparison between our model trained on the directionless label set as well as a set including directional labels (see Simple-direction-DA in Appendix A.1). Although there is a reduction in performance between the two training regimes, the model is still able to learn and infer direction. Surprisingly, the reduced performance does not originate from confusion between forward and reverse labels, but rather from the model's propensity to over-predict non-coding labels (see Appendix A.5).
5 CONCLUSION
Here we present GeneDecoder, a novel approach to gene prediction that combines the advantages of graphical models with the flexibility and potential of deep learned embeddings.
We find that the Latent CRF gives us not only the ability to preserve the graph-encoding of prior knowledge, as in the state-of-the-art HMM gene predictors, but also offers an advantage by allowing us to learn embeddings directly from the sequence. This advantage is not only visible in the performance, but also in the low effort required to construct data sets and train the model. The greatest advantage, however, might well be the great flexibility of the learned embeddings and the potential to improve them as the DNA modelling field advances.
Even now this seems evident from the preliminary studies performed here using embeddings from a DNA language model trained only on a single species, with very short input sequences. We expect that as the field progresses, language models more specifically tailored to DNA will emerge, modelling longer range interactions and training across multiple species. Such results have already been demonstrated by the success of the Enformer model (Avsec et al., 2021), which showed the importance of long range information as well as the ability to learn downstream tasks it was not trained on.
One potential limitation of our model is that the amount of non-coding DNA (in our case the flanking regions) in the training data set can affect the performance of the model on data with significantly more non-coding DNA. This issue can be resolved in the training process by scaling the training to larger data sets containing longer non-coding regions. As this presents a technical challenge, along with a time and resource challenge, rather than a scientific one, we leave it for future work. Our model provides the capacity to include external hints in the inference process, either via sampling high-probability paths around fixed labels or via modification of the hidden state probabilities. The latter method can even be used as a strategy during training, highlighting the flexibility of our model. Lastly, per-position model certainty is directly provided by the model in the form of posterior marginal probabilities. We leave it for future exploration to rigorously benchmark this.
We also expect that language modelling will alleviate issues like this by providing far better and more informative embeddings of the raw sequence, but we leave this for future work. In this regard we view the model presented here as an initial iteration that can only improve as DNA modelling does. We anticipate that future iterations of GeneDecoder will incorporate some form of self-supervised pre-training across various species, which could vastly improve not only prediction tasks in species with rich gene annotation data but also in novel or understudied organisms.
6 CODE AVAILABILITY
Code will be available on github upon publication.
A APPENDIX
A.1 LABEL SETS
Generally five types of labels are used:
E : Exon/CDS (i.e. the coding part of a sequence)
I : Intron
D : Donor site (one of the two splice sites)
A : Acceptor site (one of the two splice sites)
NC : Non-coding or intergenic regions of the genome.
Subscripts F and R (e.g. EF and ER) are used to denote the forward and reverse direction of the gene. Numbered subscripts are used to denote codon structure (i.e. E1,F is the first exon nucleotide in a codon in the forward direction).
Simple-DA:
The simplest set of states used.
Y = { E, D, I, A, NC }

Simple-direction-DA:
Expands the set of states to include information about the direction/strand of the gene.
Y = { EF, DF, IF, AF, ER, DR, IR, AR, NC }

Codon-direction-DA:
Further expands the set of states to include not only direction but also codon structure.
Y = { E1,F, E2,F, E3,F, D1,F, D2,F, D3,F, I1,F, I2,F, I3,F, A1,F, A2,F, A3,F, E1,R, E2,R, E3,R, D1,R, D2,R, D3,R, I1,R, I2,R, I3,R, A1,R, A2,R, A3,R, NC }
A.2 GRAPHS
Graphs are represented as adjacency matrices and are named according to the label sets described in Appendix A.1. A grey cell signifies a forbidden transition, while the weights of the green cells are learned by the model.
A.3 EMISSIONS
A grey cell signifies a forbidden emission, while the weights of the green cells are set to 1 (i.e. an allowed emission).
A.4 GENEDECODER FEATURE MODEL ARCHITECTURE
Feature Model
Where |X| is the number of channels in the input, and |H| is the size of the set of hidden states. In most cases X = {A,C,G, T,N}.
DilatedCNN(
Layer 1 : Conv1d(in channels = |X|, out channels = 50, kernel size = (9,), stride = 1, dilation = 1), ReLU
Layer 2 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 2), ReLU
Layer 3 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 4), ReLU
Layer 4 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 8), ReLU
Layer 5 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 16), ReLU
Layer 6 : Conv1d(in channels = 50, out channels = 50, kernel size = (9,), stride = 1, dilation = 32), ReLU
)
LSTM(
LSTM layer : lstm(50, hidden layers = 100, bidirectional = True)
Output layer : linear(input size = 200, output size = |H|, bias = True, dropout(0.2)), ReLU
)
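For reference, a minimal PyTorch sketch of the listing above; the padding choice is an assumption (the listing does not state it), picked so that the per-position output length matches the input length.

```python
import torch
import torch.nn as nn

class FeatureModel(nn.Module):
    def __init__(self, in_channels=5, num_hidden_states=25):
        super().__init__()
        layers, channels = [], in_channels
        for dilation in (1, 2, 4, 8, 16, 32):
            layers += [nn.Conv1d(channels, 50, kernel_size=9,
                                 dilation=dilation, padding="same"), nn.ReLU()]
            channels = 50
        self.cnn = nn.Sequential(*layers)
        self.lstm = nn.LSTM(50, 100, bidirectional=True, batch_first=True)
        self.drop = nn.Dropout(0.2)
        self.out = nn.Linear(200, num_hidden_states)

    def forward(self, x):                  # x: (batch, |X|, seq_len) one-hot DNA
        h = self.cnn(x).transpose(1, 2)    # -> (batch, seq_len, 50)
        h, _ = self.lstm(h)                # -> (batch, seq_len, 200)
        return torch.relu(self.out(self.drop(h)))  # per-position hidden-state scores
```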
A.5 SUPPLEMENTARY FIGURES | 1. What is the focus of the paper regarding gene finding and deep learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its comparison to previous works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper's motivation, methodology, or results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper develops a new algorithm for gene finding which is compatible with embeddings obtained from deep and transfer learning.
Strengths And Weaknesses
The paper argues the prior models for gene prediction “lack the flexibility to incorporate deep learning representation learning techniques” as a potential drawback. This sets the motivation of the paper to develop a deep learning compatible gene prediction model. However I do not see why deep learning compatibility, per se, is a drawback of the prior works. Thus, in my opinion, the paper is poorly motivated from both the biological and methodological perspectives. The issue becomes even more problematic, as the paper demonstrates that the proposed deep learning-compatible method does not significantly improve the performance in any tangible aspect over the baseline considered. Unfortunately, all these are on top of the fact that the paper misses a key family of baseline algorithms for gene finding, i.e., GeneMark, GeneMark-S, etc. which is discussed in the paper but never compared with.
Clarity, Quality, Novelty And Reproducibility
The quality is not above acceptance threshold for ICLR. |
1. What is the main contribution of the paper regarding gene finding in eukaryotic genomes?
2. What are the strengths and weaknesses of the proposed approach compared to existing methods?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the evaluation and comparison of the proposed method with other state-of-the-art techniques?
5. How does the reviewer suggest improving the paper, particularly in providing a full solution for gene finding instead of a proof of principle? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This manuscript describes a straightforward application of standard deep learning techniques to the important problem of gene finding in eukaryotic genomes. It is actually quite surprising that something like this has not been done before, since this is a central problem in genomics that is still not well solved despite decades of work in the area. State-of-the-art methods have used HMMs for more than two decades. This paper provides convincing evidence that a deep neural network approach to this problem has promise.
Strengths And Weaknesses
The paper does a very good job of providing a concise and accessible description of the gene finding problem for an ICLR audience.
Not much novelty from a machine learning perspective.
The main missing piece in this paper, in my opinion, is an investigation into the types of errors that the system makes in comparison to other methods. I want to know whether the system is missing some exons entirely, just getting the splice sites wrong, hallucinating new genes that don't exist, etc.
The other missing piece is an evaluation of how well the system works in practice on an actual genome, rather than on snippets of a genome selected around a gene. As is, the paper is more like a proof of principle than a full solution.
Clarity, Quality, Novelty And Reproducibility
I think the paragraph beginning "The current state of the art" should include citations. E.g., the first sentence of this paragraph should include citations. And the first mention of software packages such as Glimmer should be accompanied by a citation.
In Section 4.1, there are some confusing statements about how the model is set up, where we are told, for example, that random flanks were added, but not precisely how these flank lengths were selected. The text implies that this was handled differently in different experiments. These choices need to be clearly described, for reproducibility. Similarly, why is the gene length capped at two different values, and which experiments are these caps used for?
I was confused by the organization of the results. I thought Section 4.2 was going to provide an initial comparison of the proposed method to the current state of the art. But that section simply lists what those methods are, and then we skip immediately to the ablation experiments. I think the ablation experiments should come after the results that show that the method works well across many species. This seems to be what appears in Section 4.6.
Minor:
"dominated the field for more than a decade" -> "... more than two decades"
"HMM model" is redundant.
The Stanke and Wacke citation used \cite{} rather than \citep{}. This happens several other places as well.
"and highlighting" -> "and highlights"
"recommendation are" -> "recommendations are"
"identification ... are" -> "identification ... is"
Section 3.1 should clarify what the output labels are.
Throughout, there is no need to capitalize the name of a method, even if the associated abbreviation necessarily uses capitals. E.g. "Linear chain Conditional Random Field (CRF)" -> "linear chain conditional random field (CRF)"
"choice ... depend" -> "choice ... depends"
"takes a onehot encoded sequences" -> "takes a onehot encoded sequence"
"of the CRF:" -> "of the CRF is used:"
"state:" -> "state"
"Figure 3.1" -> "Figure 1"
"to label to unnormalised label" -> ??
"on the either across" -> ??
"one the hidden" -> "on the hidden"
"asses" -> "assess"
"inference set" -> "inference"
The description of annotation sources in Section 3.1 is not clear. Provide URLs and make clear what those citations refer to.
"A. Thaliana" -> "A. thaliana"
Section 4.3 should do a better job of explicitly using the labeling scheme from Table 1, so we know which line is being discussed at each step.
Section 4.4 mentions the name "GeneDecoder" without explaining that this is the name of the proposed method. The name is formally introduced later. I think you should state the name in the introduction.
"both strand" -> "both strands"
"genes that lay" -> "genes that lie"
"of sequenced" -> "of the sequenced"
"models" -> "model's"
"evolutionary" -> "evolutionarily"
"The benchmark limited and specific" -> ??
"model ," -> "model,"
"but it is still has" -> "but it still has"
"previous findings" -> What previous findings? This either requires a citation or a reference to some previous result in the current paper.
"Human" -> no italics, no capitalization
"models closely" -> "model closely"
"mebeddings" -> "embeddings" |
ICLR | Title
Contextual Image Masking Modeling via Synergized Contrasting without View Augmentation for Faster and Better Visual Pretraining
Abstract
We propose a new contextual masking image modeling (MIM) approach called contrasting-aided contextual MIM (ccMIM), under the MIM paradigm for visual pretraining. Specifically, we adopt importance sampling to select the masked patches with richer semantic information for reconstruction, instead of the random sampling used in previous MIM works. As such, the resulting task of reconstructing these patches from the remaining, less semantic patches is more difficult and encourages better learning. To speed up the convergence that may be slowed by this more difficult reconstruction task, we further propose a new contrastive loss that aligns the vision transformer tokens extracted from the selected masked patches and from the remaining ones, respectively. The hope is that it serves as a regularizer for patch feature learning such that image-level global information is captured in both masked and unmasked patches; notably, such single-view contrasting avoids the tedious image augmentation step required in recent efforts to introduce contrastive learning to MIM (for faster convergence and stronger discriminative ability). Meanwhile, the attention score from the contrastive global feature also carries effective semantic clues that in turn guide our masking patch selection scheme. In consequence, our contextual MIM and contrastive learning are performed synergetically in a loop (semantic patch selection, then token alignment contrasting) to achieve the best of both worlds: fast convergence and strong performance on downstream tasks without ad-hoc augmentations, as verified by empirical results on ImageNet-1K for both classification and dense vision tasks.
1 INTRODUCTION
Self-supervised learning (SSL) (Zbontar et al., 2021; Jin et al., 2022; Chen et al., 2020b) has been attracting increasing attention recently in deep learning, due to its label-free property and capability of learning rich holistic representations. Recent SSL methods mainly fall into two classes. Contrastive methods (He et al., 2020; Chen et al., 2020b; Zhang et al., 2021) construct multiple views of the given image to increase the variance and align the representations of these views in latent space. The pair-wise learning paradigm endows strong linear evaluation accuracy of contrastive methods. However, these methods tend to extract global information but ignore local information, which limits their performance on downstream dense vision tasks, e.g. detection, segmentation, etc. Besides, the two-view learning paradigm is usually sensitive to the choice of augmentation function (Chen et al., 2020c), batch size (Chen et al., 2020b; Zhang et al., 2021) and output dimension (Zbontar et al., 2021; Zhang et al., 2022) etc. With the success of recent ViT-based vision backbones, which divide images into several patches, Mask-Image-Modeling (MIM) methods (He et al., 2021; Fang et al., 2022; Chen et al., 2022) randomly mask some patches and use the self-attention mechanism to
∗Junchi Yan is the corresponding author. Rui Zhao is also with Qing Yuan Research Institute, Shanghai Jiao Tong University. This work was in part supported by NSFC (62222607), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and SenseTime Collaborative Research Grant.
recover pixel-wise information from the remaining un-masked patches. These MIM-based methods have been shown to obtain a better pre-trained encoder than even contrastive approaches, and in particular show strong performance on transfer learning and finetuning tasks thanks to their ability to capture local information. However, these methods can suffer from limited discriminability of the global representation with less competitive linear accuracy (Gao et al., 2022), slow convergence (1,600 epochs for MAE), and heavy GPU consumption (e.g. with batch size 4096).
Therefore, recent attempts (Wang et al., 2022; Chen et al., 2022; Yi et al., 2022) are devoted to combining contrastive learning and MIM. The hope is that both (global) discriminative information (by contrasting) and spatial-sensitive information (by MIM) can be captured. Although these approaches show improved convergence speed (in the sense of fewer epochs needed for convergence) compared with using MIM alone, they can hardly further boost the performance of MIM on downstream tasks given even more (e.g. 800, 1600) pretraining epochs. Meanwhile, such direct combination of contrasting with MIM can also bring about side effects, e.g. increased sensitivity to data augmentation and more overhead at each training epoch. In this paper, we aim to improve both convergence speed and accuracy by devising a new contextual MIM scheme, which is synergistically aided by contrastive learning. The highlights of our proposed ccMIM are:
1) Novel framework for synergizing MIM and contrastive learning in a closed loop: We propose a novel contextual MIM framework that actively selects semantically rich patches as the masking patches for reconstruction to increase the MIM learning difficulty, whereby the semantic clue is learned and measured by our devised vision transformer token alignment contrasting loss. As such, the synergizing of the two components is fulfilled in the first place. In contrast, existing efforts on combining MIM and contrasting often perform these two components independently until their weighted loss is finally computed, and they show limited accuracy improvement (perhaps also partly due to their random patch selection strategy, as observed in our ablation study in Sec. 4.3).
2) Cost-effective technical design for contextual MIM: Under the above framework, we propose to use importance sampling to contextually select the patches with richer semantic information for masking and reconstruction, to increase the learning difficulty and effectiveness. The selection is guided by the attentional semantic score derived from the [CLS] token alignment, posed as contrastive learning between the selected and un-selected patches. Compared to the two-view augmentation widely adopted for contrasting in recent MIM works, our contrasting is efficiently performed within a single view (to speed up MIM training). Moreover, our way of using contrasting directly guides the MIM learning instead of the two being independent components as in the existing literature.
3) Improvement over MAE: Using ViT-B (Dosovitskiy et al., 2020) as a standard protocol in MIM, ccMIM achieves 83.6% and 84.2% top-1 accuracy with 300 and 800 epochs of pre-training, outperforming MAE (He et al., 2021) by 0.8% (82.8% for 300 epochs) and 0.6% (83.6% for 1600 epochs) on ImageNet-1K, respectively. Source code will be made publicly available.
2 RELATED WORK
In our work, the proposed ccMIM divides image patches into two sets by information density and learns two objective functions: one aligns the global representations of the two sets, and the other reconstructs raw images from the visible set. We therefore briefly review contrastive learning (aligning representations of multiple augmented views in latent space) and MIM-based methods (reconstruction).
Contrastive learning aims to learn instance-discriminative representations to distinguish an image from the others (Hjelm et al., 2018). This is achieved by pulling together the representations of different views of the same image and pushing away the other images. Thus, most contrastive methods adopt a siamese network (He et al., 2020; Grill et al., 2020; Chen & He, 2021). To create different views for the same image, well-designed data augmentation functions have been deployed (e.g., those investigated in SimCLR (Chen et al., 2020b)). To increase the number of negative pairs, MoCo (He et al., 2020; Chen et al., 2020c) constructs a large queue to store negative representations in memory. Besides, MoCo and BYOL adopt momentum and stop-gradient mechanisms to prevent degenerate solutions (Tian et al., 2021; Wang & Isola, 2020). To simplify BYOL, SimSiam (Chen & He, 2021) discards momentum updating and finds that the stop-gradient and predictor head are the keys to preventing collapse. MoCo-v3 (Chen et al., 2021) and DINO (Caron et al., 2021) are based on the siamese network and extend MoCo (He et al., 2020) and BYOL (Grill et al., 2020) with Vision
Transformer (ViT) (Dosovitskiy et al., 2020) as their model backbones. Although contrastive learning methods provide discriminative features, most of them focus on learning global representations while lacking spatially sensitive, local representations, a limitation that can be mitigated by MIM techniques.
MIM-based methods (He et al., 2021; Xie et al., 2021) learn vision representations by reconstructing the masked patches from partial observations. Based on the reconstruction objective, these methods can be divided into pixel-wise reconstruction (He et al., 2021; Xie et al., 2021) and auxiliary feature/token prediction (Dong et al., 2021; Chen et al., 2022). SimMIM (Xie et al., 2021) and MAE (He et al., 2021) are the first two methods applying mask modeling in the visual domain. They propose to reconstruct the raw pixel values from either the full set of image patches (mask tokens and visible patches for SimMIM) or partially observed patches (visible patches for MAE). Compared with SimMIM, MAE is more efficient because it drops out a large portion of the input patches. Inspired by adversarial training (Goodfellow et al., 2014), CIM (Fang et al., 2022) adds perturbations to raw images to enhance robustness for reconstruction, while GreenMIM (Huang et al., 2022) applies divide-and-conquer with hierarchical ViT backbones (Liu et al., 2021). Departing from prediction in RGB space, iBOT (Zhou et al., 2022) and AttMask (Kakogeorgiou et al., 2022) propose to mask patches randomly and selectively (by semantic importance), respectively, and perform contrasts in latent space.
Combination of Contrasting and MIM has been recently explored with the hope of achieving both fast convergence and good accuracy. For instance, CAE (Chen et al., 2022) randomly partitions the image into two sets of patches and utilizes the visible set to reconstruct the masked patches. Then, an alignment objective is added on the two sets in latent space. However, the reconstruction loss and alignment loss (as contrasting) are independent of each other and linearly combined for training, which we argue lacks a synergistic effect between the two modules. Similar issues also exist in other attempts (Yi et al., 2022; Wang et al., 2022) with a linear combination of the two losses: although faster convergence is observed, there is little improvement in accuracy on downstream tasks. In this paper, we aim to propose a more synergetic approach for contrasting-aided MIM.
3 METHODOLOGY
3.1 PRELIMINARIES
We denote a set of N images X ∈ R^{N×C×H×W} and an encoder f for pretraining. 1) Contrastive Learning: The encoders are usually instantiated as CNNs or ViTs to learn global features Z ∈ R^{N×D} from raw images. The raw images are first transformed by certain augmentations to generate two views. The key step is using the encoder to learn the invariance of the two views, i.e. maximizing the representation similarity of the two views (the alignment term in (Chen et al., 2020b; Wang & Isola, 2020)). However, directly adding the alignment part alone causes degenerate solutions (Tian et al., 2021; Wang & Isola, 2020), meaning that all images are encoded to the same, collapsed representation. Two techniques have been developed to avoid degenerate solutions: adopting stop-gradient to build an asymmetric framework (Grill et al., 2020; He et al., 2020; Chen et al., 2020c; Caron et al., 2021; Zhou et al., 2022) and using negative samples (Chen et al., 2020b; He et al., 2020; Chen et al., 2020c). Both are used in the two versions of our method, which are respectively based on the contrastive forms of SimSiam (Chen & He, 2021) and SimCLR (Chen et al., 2020b).
i) Adopting the asymmetric framework: the objective of SimSiam can be formulated as:
L_simsiam = 2 − sim(StopGrad(H^A), p(H^B)) − sim(StopGrad(H^B), p(H^A))   (1)
where StopGrad(·) denotes the stop-gradient operation and sim(·, ·) is the similarity metric function (cosine similarity in this paper). p(·) is the predictor head (a two-layer MLP), which is commonly used in previous asymmetric methods (Chen & He, 2021; Grill et al., 2020). H^A and H^B denote the representations of view A and view B in the contrastive space, which can be obtained by H = g(Z), where g is the projector head composed of a two-layer MLP (LeCun et al., 2015).
ii) Adopting negative samples: SimCLR (Chen et al., 2020b) accordingly adopts the negative-based objective (InfoNCE (Hjelm et al., 2018)):
L_simclr = −E_{A,B}[ log ( e^{sim(h_i^A, h_i^B)/τ} / ( Σ_{j=1, j≠i}^{N} e^{sim(h_i^A, h_j^A)/τ} + Σ_{j=1}^{N} e^{sim(h_i^A, h_j^B)/τ} ) ) ]   (2)
Method   | 50 epochs (linear)  | 100 epochs (linear) | 100 epochs (fine tune)
         | Top-1    Top-5      | Top-1    Top-5      | Top-1    Top-5
MAE      | 33.656   57.292     | 33.69    57.292     | 79.432   94.396
IMAE     | 30.852   54.104     | 31.300   54.452     | 79.024   93.237
ccMIM    | 30.02    53.384     | 41.01    65.896     | 80.498   94.839
where h_i^A (h_i^B) means the representation in contrastive space of view A (B) of image i.
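For concreteness, a minimal PyTorch sketch of the two objectives above (Eq. (1) and Eq. (2)) is given below; the variable names, the predictor module, and the temperature value are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def simsiam_loss(h_a, h_b, predictor):
    """Eq. (1): symmetric negative cosine similarity with stop-gradient on the target."""
    def neg_cos(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 2 + neg_cos(predictor(h_a), h_b) + neg_cos(predictor(h_b), h_a)

def simclr_infonce_loss(h_a, h_b, tau=0.1):
    """Eq. (2): contrast h_a[i] with its positive h_b[i] against all other samples."""
    h_a, h_b = F.normalize(h_a, dim=-1), F.normalize(h_b, dim=-1)
    n = h_a.size(0)
    logits_ab = h_a @ h_b.t() / tau                        # (n, n), diagonal = positives
    logits_aa = h_a @ h_a.t() / tau
    eye = torch.eye(n, dtype=torch.bool, device=h_a.device)
    logits_aa = logits_aa.masked_fill(eye, float('-inf'))  # drop the j == i self terms
    logits = torch.cat([logits_ab, logits_aa], dim=1)      # denominator of Eq. (2)
    targets = torch.arange(n, device=h_a.device)           # index of the positive pair
    return F.cross_entropy(logits, targets)
```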
2) Masking Image Modeling: the common embodiment of the encoder is a transformer-based model, e.g. vanilla ViT (Dosovitskiy et al., 2020) or Swin Transformer (Liu et al., 2021). Current MIM-based methods (He et al., 2021; Chen et al., 2022; Xie et al., 2021) basically split raw images into several patches and randomly select patches to mask. The remaining patches are used for reconstructing the masked patches x̂_M, and a pixel-wise objective is added on the masked prediction:
L_recons = (1 / Ω(x̂_M)) ‖x̂_M − T(x_M)‖_p   (3)
where Ω(x̂M ) is the number of masked patches and p means the p-norm (p = 2 in this paper). T is a transformation function (for MAE, T is the identity function; for MaskFeat (Wei et al., 2021), T is the HOG transformation). In this paper, we mainly follow the settings in MAE (He et al., 2021).
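As a minimal sketch of Eq. (3) (assuming T is the identity, as in MAE, and p = 2), the masked reconstruction loss can be computed as follows; the tensor layout is an assumption for illustration.

```python
import torch

def masked_reconstruction_loss(pred, target, mask):
    """Eq. (3) with T = identity and p = 2, averaged over masked patches only.

    pred, target: (batch, num_patches, patch_dim); mask: (batch, num_patches),
    1 for patches that were masked and must be reconstructed."""
    per_patch = ((pred - target) ** 2).mean(dim=-1)       # (batch, num_patches)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)
```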
3.2 THE PROPOSED CCMIM: CONTRASTING-AIDED CONTEXTUAL MIM
In our proposed ccMIM, the patches are divided into two sets: masked set (composed of patches with richer contextual information) and visible set (with less contextual information, e.g. background). The visible set is used for reconstruction, and the contrastive objective is regularized on the global tokens of the two sets to help divide patches by contextual information. As illustrated in Fig. 1, ccMIM consists of five components:
• Contextual Patch Modeling. Given an input image, ccMIM first divides the patches into visible and masked sets by contextual information. Patches in the masked set have richer contextual information, which increases the difficulty of MIM. Both sets are fed to the backbone.
• Encoder architecture. The encoder backbone extracts a latent feature representation for the masked image, which is then used to predict the original signals at the masked area. The learned encoder is expected to be transferable to various vision tasks. In this paper, following MAE (He et al., 2021), we mainly consider ViT (Dosovitskiy et al., 2020) as the backbone.
• Projector head. The projector head is applied in the contrastive branch, which forces the encoder to learn the global invariance of two sets (masked area and visible area). Specifically, we perform unidirectional alignment (Hjelm et al., 2018; Chen & He, 2021) on two sets (i.e., align visible set with masked set), since the masked set has richer contextual information.
• Decoder. The decoder will be applied to the latent feature representation of the visible set (with less contextual information) to produce one form of the original signals of the masked areas.
• Prediction target. This component defines the form of original signals to predict. It can be either the raw pixel values or a transformation of the raw pixels (HOG features).
In the following, we elaborate on the details of our approach.
Selecting and dividing strategy. We denote the image as X ∈ R^{C×H×W}. Under the ViT-based (Dosovitskiy et al., 2020) MIM paradigm, we first split each image into p^2 patches X ∈ R^{p^2 × C × (H/p) × (W/p)}. Then, each patch is mapped to one vector through a linear layer W ∈ R^{C·(H/p)·(W/p) × D}, where D is the hidden dimension. Inspired by the transformer (Vaswani et al., 2017) in NLP, ViT adds one global token to extract global information from each image. Denote the i-th patch as x_i and let x_0 be the global token. Then, we can calculate the importance of the patches x_i by:
s = Softmax( E_h[ q_0 K_{1:}^⊤ / √D ] )   (4)
where E_h means taking the average attention value over multiple heads (Vaswani et al., 2017). q_0 is the query vector of the global token, calculated by q_0 = x_0 W_Q, where W_Q refers to learnable weights. K_{1:} is the key matrix (Vaswani et al., 2017) excluding the first row (global token), calculated by K = X W_K, where W_K is the learnable weights. After obtaining the importance of the patches, we can selectively divide the patches into two sets, i.e., masked set M and visible set V, where M mainly includes the patches with larger importance and V is composed of the remaining patches. The main rationale behind this design is that some background patches (with less attention value) may still carry a little contextual information, which may be helpful for learning contextual and fine-grained information for downstream tasks (e.g., fine-grained classification, detection). Denote the probability density function of the importance as f(x) and let p(x) be a random distribution. Note π(x) is the distribution of function f on p. Then, for the expectation of f(x) on π, we have:
E[f] = ∫ π(x) f(x) dx = ∫ p(x) (π(x)/p(x)) f(x) dx   (5)
where π(x)/p(x) is the importance sampling weight. For the new function (π(x)/p(x)) f(x), we have:
Var_{x∼p}[ f(x) π(x)/p(x) ] = E_{x∼p}[ (f(x) π(x)/p(x))^2 ] − ( E_{x∼p}[ f(x) π(x)/p(x) ] )^2 = E_{x∼π}[ f(x)^2 π(x)/p(x) ] − ( E_{x∼π}[ f(x) ] )^2   (6)
where π(x)/p(x) can be obtained from the observed importance values (calculated by self-attention) and a random generator. Then, we can divide the patches into the M and V sets by importance sampling. Visible and masked patches alignment. For the importance masking strategy, one assumption is that the global token [CLS] can learn the global information of the whole image. However, for previous methods like MAE (He et al., 2021), there is no regularization added on the [CLS] token. Hence, directly using [CLS] to partition patches may still be essentially random. To address this problem, we add another objective on the [CLS] token, i.e. a contrastive loss (InfoNCE (Chen et al., 2021) or l2 regularization (Chen & He, 2021)). Denote the global representations ([CLS] token) in contrastive space of the masked set and visible set as H^M and H^V, respectively. Then, the NCE-like objective is:
L_NCE = −E_V[ log ( e^{sim(SG(h_i^M), h_i^V)/τ} / ( Σ_{j=1, j≠i}^{N} e^{sim(h_i^V, h_j^V)/τ} + Σ_{j=1}^{N} e^{sim(SG(h_i^M), h_j^V)/τ} ) ) ]   (7)
where SG(·) is the stop-gradient operation. Compared with the SimCLR-based objective, Eq. 7 only contrasts the visible branch. This design is because the masked set mainly includes the important patches, which have richer semantic information than the visible set. Hence, h^M can be regarded as the teacher that guides h^V (the student). We can also add only the alignment term on the [CLS] token, as in SimSiam (Chen & He, 2021) and BYOL (Grill et al., 2020), which can be written as:
L_align = E[ 1 − sim(StopGrad(h_i^M), p(h_i^V)) ]   (8)
where sim(x, y) = x^⊤y / (‖x‖_2 ‖y‖_2) in this paper and p(·) is the projector head (Chen & He, 2021).
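A minimal sketch of this selection-and-alignment pipeline is given below: the importance score of Eq. (4), the importance-sampling division into masked/visible sets, and the one-sided [CLS] alignment of Eq. (8). Tensor shapes, the Gumbel-noise sampling trick, and the mask ratio are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def patch_importance(q_cls, keys):
    """Eq. (4): attention of the [CLS] query over patch keys, averaged over heads.

    q_cls: (batch, heads, dim); keys: (batch, heads, num_patches, dim)."""
    attn = torch.einsum('bhd,bhnd->bhn', q_cls, keys) / keys.size(-1) ** 0.5
    return F.softmax(attn, dim=-1).mean(dim=1)               # (batch, num_patches)

def divide_by_importance_sampling(scores, mask_ratio=0.75):
    """Sample the masked set with probability roughly proportional to importance
    (Gumbel-top-k as one possible sampler), rather than taking a deterministic top-k."""
    gumbel = torch.distributions.Gumbel(0., 1.).sample(scores.shape).to(scores.device)
    noisy = torch.log(scores + 1e-8) + gumbel
    num_mask = int(mask_ratio * scores.size(1))
    order = noisy.argsort(dim=1, descending=True)
    return order[:, :num_mask], order[:, num_mask:]           # masked_idx, visible_idx

def cls_alignment_loss(cls_masked, cls_visible, head):
    """Eq. (8): align the visible-set [CLS] token to the stop-gradient masked-set one;
    `head` plays the role of p(.)."""
    return (1 - F.cosine_similarity(head(cls_visible), cls_masked.detach(), dim=-1)).mean()
```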
Image reconstruction and contrastive learning. The masked patch reconstruction protocol in ccMIM is similar to MAE: we predict the pixel values for the patches in the masked set. Each element in the decoder's output is a vector of pixel values representing a patch. The
last layer of the decoder is a prediction target W ∈ R^{D × (H/p)·(W/p)}. Then, in line with MAE, the mean squared error (MSE) is computed between the reconstructed and original patches in the pixel space:
L_recons = (1 / Ω(M)) ‖x̂_M − x_M‖_2   (9)
where Ω(M) is the number of elements in the masked set M. The overall objective of ccMIM is:
L_ccMIM = L_recons + λ · L_align   (when using the asymmetric framework), or
L_ccMIM = L_recons + λ · L_NCE    (when using negative samples)   (10)
where λ is the hyper-parameter that balances the contrastive loss and the reconstruction loss. Note that using either of the contrastive losses (Eq. 7 or Eq. 8) is enough for regularizing the [CLS] token. We mainly use L_NCE by default in our main experiments, which we find to be even more stable. A comparison of the two objectives is given in the ablation study.
Remark. Note that although we also adopt a linear combination of losses as in Eq. 10, the reconstruction procedure is aided by the contrasting in a loop, which makes our method differ from other combination methods (Chen et al., 2022; Yi et al., 2022; Wang et al., 2022) that lack such synergetic connections.
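Putting the pieces together, one ccMIM training step could look roughly like the sketch below (using the NCE variant of Eq. (10) and the divide_by_importance_sampling helper sketched above); the `model` interface names (patchify, score_patches, encode, decode, cls_contrast) and the default λ = 0.2 are placeholders for illustration, not the released API.

```python
import torch

def ccmim_training_step(images, model, lam=0.2):
    """One forward pass sketch for L_ccMIM = L_recons + lam * L_NCE (Eq. (10))."""
    patches = model.patchify(images)                       # (B, N, patch_dim)
    scores = model.score_patches(patches)                  # Eq. (4) importance per patch
    masked_idx, visible_idx = divide_by_importance_sampling(scores)
    z_vis, cls_vis = model.encode(patches, visible_idx)    # student branch (visible set)
    with torch.no_grad():
        _, cls_mask = model.encode(patches, masked_idx)    # teacher branch (masked set)
    pred = model.decode(z_vis, masked_idx)                 # reconstruct masked patches
    target = torch.gather(
        patches, 1, masked_idx.unsqueeze(-1).expand(-1, -1, patches.size(-1)))
    l_recons = ((pred - target) ** 2).mean()               # Eq. (9)
    l_nce = model.cls_contrast(cls_mask, cls_vis)          # Eq. (7)
    return l_recons + lam * l_nce
```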
3.3 METHODOLOGY COMPARISON
In Table 6 in the Appendix, we compare recent SSL pretraining methods. In particular, here we elaborate on the connections to the most related methods: CAE, MST, Repre, and SemMAE.
Relation to CAE (Chen et al., 2022). Both CAE and ccMIM partition the raw image into two sets, i.e., visible set and mask set. However, CAE divides them randomly, which is similar to MAE (He et al., 2021), limiting their performance. ccMIM partitions the two sets by importance sampling. Moreover, ccMIM adds another contrastive loss to regularize [CLS] token to help partition.
Relation to MST (Li et al., 2021). Both MST and ccMIM use a contrastive loss, a reconstruction pretext task, and a selective strategy. However, the motivations of MST and ccMIM are completely different. MST mainly relies on the contrastive loss to learn the invariance of two views, and its reconstruction loss contributes only marginally. Besides, MST masks the unimportant patches, which only slightly influences its performance (note that the random mask strategy will cause about a 10% Top-1 accuracy drop). In contrast, ccMIM masks the important patches to enhance the difficulty of reconstruction. Besides, ccMIM does not require multiple views from tricky augmentation.
Relation to Repre (Wang et al., 2022). Repre is another method using both a contrastive loss and a reconstruction loss. However, Repre also requires two views from tricky augmentation, and its main gain comes from the contrastive objectives. Besides, its reconstruction branch is similar to a VAE (Kingma & Welling, 2013), which takes all patches as input without any masking strategy.
Relation to SemMAE (Li et al., 2022). SemMAE is another concurrent work on attentive masking. Both ccMIM and SemMAE mask patches with richer contextual information to increase the difficulty of reconstruction. SemMAE uses a backbone pretrained by iBOT (Zhou et al., 2022) to help select contextual regions, where iBOT takes 1600 extra epochs of pretraining, which is computationally costly. In contrast, ccMIM directly adds an alignment loss on the [CLS] token to learn global information for selecting masked regions, which requires neither extra information nor extra time.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Dataset. We perform self-supervised pre-training on the ImageNet-1K (Deng et al., 2009) training set, as commonly used in SSL (He et al., 2021; Zhang et al., 2021). It includes 1k classes which are well-balanced in distribution and the images contain an iconic view of objects. We evaluate our pretrained model on the ImageNet-1k validation set, as well as on MS-COCO (Lin et al., 2014) object detection and segmentation, ADE20K (Zhou et al., 2017) segmentation, and other small-scale classification datasets (Van Horn et al., 2018).
Backbone. We use the standard ViTs architecture (Dosovitskiy et al., 2020). It has several blocks composed of multi-head self-attention (MHSA) (Vaswani et al., 2017), MLP, and LayerNorm (Ba et al., 2016). ccMIM adds one [CLS] token at both masked set and visible set (shared). The decoder
of ccMIM follows the settings of MAE (He et al., 2021), which includes two-layer MHSA blocks. At the evaluation stage, the average embedding of patches is used for finetuning and linear probing.
Evaluation protocols. i) Linear probing is widely used as a proxy of pretraining quality evaluation for SSL (Chen et al., 2020b; He et al., 2021; Chen et al., 2022). It learns a linear classifier over the image-level representation output from the pretrained encoder by using the labels of the images, and then tests the performance on the validation set. ii) Fine tuning is often used to evaluate the backbone in reconstructed-based methods (He et al., 2021; Chen et al., 2022). We use the same hyper-parameter with MAE, i.e., 100 epochs with learning rate 1e-3 and 1024 batch size. iii) Downstream tasks across different datasets and tasks (detection, segmentation, fine-grained classification) are also used to evaluate the transferability of the pre-trained encoder.
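As a reference for protocol i), linear probing amounts to freezing the pretrained encoder and training only a linear classifier on its features; a minimal sketch follows (the optimizer settings are assumptions, not the paper's exact recipe).

```python
import torch
import torch.nn as nn

def linear_probe(encoder, feat_dim, num_classes, loader, epochs=90, lr=0.1):
    """Train a single linear head on top of a frozen pretrained encoder."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images)        # e.g. the averaged patch embeddings
            loss = nn.functional.cross_entropy(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```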
4.2 MAIN RESULTS
Linear probing and finetuning on ImageNet-1K. Table 1 shows the linear probing and fine-tuning accuracy on the ImageNet-1k dataset. We mainly compare our method with transformer-based contrastive learning methods (Caron et al., 2021; Chen et al., 2021) and MIM-based methods (He et al., 2021; Xie et al., 2021). For 300 epochs of pretraining, our ccMIM outperforms the MAE baseline by +0.8% top-1 accuracy. For 800 epochs of pretraining, ccMIM achieves 84.2% top-1 accuracy, outperforming MAE with 1600 epochs of pretraining by +0.6%. The concurrent method ConMIM uses two views obtained by non-trivial augmentation in the pre-training step, i.e., multiple views are required to enable its contrastive objectives. Although ccMIM also has a contrastive objective, it requires only a single view. Besides, ConMIM gets 83.7% accuracy with 800 epochs of pretraining, improving only 0.2% over its 300-epoch result. The reason may be that the contrastive objective in ConMIM only improves the convergence speed but not the final performance. In contrast, ccMIM gets 84.2% top-1 accuracy with 800 epochs of pretraining (0.6% higher than MAE and than ccMIM with 300 epochs of pretraining), which may be because the contrastive objective in ccMIM helps select contextual patches and the final gain comes from recovering patches with richer contextual information (ConMIM randomly masks patches with less contextual information). Besides, we also find ccMIM surpasses MAE by +4.5% and +2.1% top-1 accuracy under the linear probing protocol. We attribute this to the global-discriminative objective added by ccMIM (see the ablation study in Sec. 4.3).
Object detection and segmentation on MS-COCO. Setups. Following CAE (Chen et al., 2022), we adopt Mask R-CNN (He et al., 2017) that produces bounding boxes and instance masks simultaneously, with the ViT as the backbone (See the supplementary file for training details). We apply the same object detection system to the methods. We report the box AP for object detection and the mask AP for instance segmentation. We utilize multi-scale training and resize the image with the size of the short side between 480 and 800 and the longer side no larger than 1,333. The batch size is 32. For ViT-S, the learning rate is 3e-4, the layer-wise decay rate is 0.75, and the drop path rate is 0.1. For ViT-B, the learning rate is 3e-4, the layer-wise decay rate is 0.75, and the drop path
Semantic segmentation on ADE20K. Setups. We evaluate semantic segmentation performances of the pre-trained models on ADE20K (Zhou et al., 2017), which includes 150 fine-grained semantic categories and 25k training data. In line with iBOT (Zhou et al., 2022), we report the results by three metrics: mean intersection of union (mIoU) averaged over all semantic categories, all pixel accuracy (aAcc), and mean class accuracy (mAcc). We pretrain ccMIM with 800 epochs and finetune it in FPN (Lin et al., 2017) with batch size 16. For ViT-B, the layer-wise decay rate is 0.65, and the drop path rate is 0.1. For other methods, we download their pretrained models and finetune them in our setting for fair comparisons and Table 3 shows the finetune results. The proposed ccMIM gets state-of-the-art accuracy among all the contrastive-based and MIM-based methods. Specifically, for 800 epochs pretraining, ccMIM outperforms MAE and CAE +1.7% and +0.5% mIoU, respectively.
Transfer classification on other datasets. In line with previous SSL methods, we conduct a set of experiments to evaluate the transfer accuracy on fine-grained classification datasets (iNaturalist (Van Horn et al., 2018) and Flowers). We mainly compare our method with ClusterFit (Yan et al., 2020), SNCA+ (Wu et al., 2018), Grafit (Touvron et al., 2021b), iBOT (Zhou et al., 2022), DeiT (Touvron et al., 2021a) and MAE (He et al., 2021). We finetune the pretrained encoder on iNaturalist datasets in 360 epochs with the learning rate 3e-4. For all SSL methods, we choose ViT-B/16 for comparisons. The results in Table 4 show that ccMIM obtains the highest accuracy compared with previous supervised and self-supervised methods. Specifically, ccMIM gets 0.5%, 0.2% and 0.2% higher accuracy than MAE on iNaturalist2019, iNaturalist2018 and Flowers-102, respectively.
4.3 ABLATION STUDY
When λ > 0.2, the accuracy drops as λ increases. When λ > 0.8, ccMIM with L_NCE gets lower accuracy than MAE; we think this is because the gain of ccMIM comes from contextual masking, and the contrastive loss is only used for selecting semantically richer patches, not for direct improvement. When λ = 0, there is no regularization on the [CLS] token, leading to lower accuracy than λ = 0.1 and λ = 0.2.
Dividing strategy      | 100 epochs          | 400 epochs
                       | Linear   Finetune   | Linear   Finetune
random                 | 33.6     79.4       | 63.1     82.9
block-wise             | 32.9     79.2       | 62.0     82.8
importance             | 31.1     78.8       | 66.8     83.6
importance sampling    | 41.1     80.5       | 67.4     83.8
For the block-wise dividing strategy, a block of patches is masked each time. For each block, we set the minimum number of patches to 16. Then we randomly choose an aspect ratio for the masking block. We repeat the above two steps until obtaining enough masked patches. We pretrain ccMIM distributed over 32 GPUs for 100 and 400 epochs with a batch size of 1024, and report linear and finetune accuracy in Table 5. In our experiments, the block-wise dividing strategy gets slightly lower accuracy than random dividing, which is in line with previous methods MAE (He et al., 2021) and SimMIM (Xie et al., 2021). We also find that with 100 epochs of pretraining, the importance-based dividing strategy gets much lower accuracy than the random and block-wise methods. However, for 400 epochs of pretraining, importance-based methods get higher accuracy than random and block-wise methods. We think that is because the importance-based strategy slows down the convergence speed (patches with richer contextual information are more difficult to reconstruct). The importance-sampling-based method outperforms the other dividing methods for both 100 and 400 epochs of pretraining. We conjecture that is because, for the first few epochs, the variance of the attention values of different patches is small. Thus, with the added random sampling operation, ccMIM also places some patches with less contextual information in the masked set. The reconstruction task is then not too difficult, leading to a faster convergence speed than the importance-based strategy. For 400 epochs of pretraining, the [CLS] token can easily place patches with richer contextual information into the masked set, leading to higher accuracy (+4.3%) than the random and block-wise methods.
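For reference, the block-wise dividing baseline described above can be sketched as follows; the 14x14 patch grid, the aspect-ratio range, and the mask ratio are assumptions beyond the minimum block size of 16 patches stated in the text.

```python
import random
import torch

def blockwise_mask(grid_h=14, grid_w=14, mask_ratio=0.75, min_block=16):
    """Repeatedly mask rectangular blocks of patches until the target ratio is reached."""
    mask = torch.zeros(grid_h, grid_w, dtype=torch.bool)
    target = int(mask_ratio * grid_h * grid_w)
    while int(mask.sum()) < target:
        area = random.randint(min_block, max(min_block, target - int(mask.sum())))
        aspect = random.uniform(0.3, 1 / 0.3)                     # random aspect ratio
        h = max(1, min(grid_h, int(round((area * aspect) ** 0.5))))
        w = max(1, min(grid_w, int(round((area / aspect) ** 0.5))))
        top = random.randint(0, grid_h - h)
        left = random.randint(0, grid_w - w)
        mask[top:top + h, left:left + w] = True
    return mask.flatten()                                          # boolean mask per patch
```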
5 CONCLUSION
We have presented a novel contextual MIM framework dubbed ccMIM, whereby masked patch selection is aided by a contrast between the masked patches and unmasked ones. The reconstruction difficulty is increased as the masked patches are more semantically rich than random sampling as done in peer MIM methods. The resulting combined approach shows notable performance gain in both convergence speed and accuracy compared to baselines on public benchmarks across different tasks from classification to dense tasks including detection and segmentation.
A METHODOLOGY COMPARISON
To better position our approach, we list recent contrastive and mask image modeling methods in Tab. 6.
B MASK ILLUSTRATION
For a better understanding of our work, we illustrate how ccMIM divides the patches into the visible and masked sets in Fig. 3(a). We visualize four pretrained models, i.e., ccMIM at iterations 0, 1, and 800, and the importance-based dividing of Sec. 4.3. When Iter = 0, the [CLS] token is randomly initialized and ccMIM randomly divides the patches into the masked set and the visible set. After 800 epochs of pretraining, ccMIM can place patches with richer contextual information into the masked set, while the remaining patches with less contextual information are placed in the visible set to increase the difficulty of reconstruction. We further illustrate the reconstruction loss curves of MAE and ccMIM over the first 50 epochs of pretraining in Fig. 3(b). The loss values of ccMIM are higher than those of MAE, which demonstrates the larger reconstruction difficulty of ccMIM compared with MAE.
C IMPLEMENTATION DETAILS
Pre-training hyper-parameters. In line with MAE, we use AdamW as the optimizer with a base learning rate of 1.5e-4 and weight decay of 0.05. We set the balance ratio λ = 0.2 and use L_NCE in Eq. 7 as the regularization loss by default. For computational tractability, we set the batch size to 1024, with pretraining distributed over 32 NVIDIA 1080Ti GPUs. Similar to previous methods in SSL (Xie et al., 2021), we adopt a cosine lr scheduler and only use the random resized crop for augmentation. | 1. What is the focus and contribution of the paper on self-supervised representation learning?
2. What are the strengths of the proposed approach, particularly in combining contrastive learning and mask autoencoder learning?
3. What are the weaknesses of the paper regarding experimental ablation studies and providing detailed information on the CLS token?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper introduces a self-supervised representation learning method that combines two well-known existing approaches (i.e., contrastive learning and masked autoencoder learning) in a way that appropriately obtains their respective advantages. Contrastive learning with Siamese networks is faster in optimization compared to masked autoencoder training, but is relatively poor at extracting spatially local information. On the other hand, masked autoencoder learning has performance advantages by making the learned representation able to extract spatially local information, but the optimization is very slow. Since the main structural difference between the two existing methods is the preparation of the input, the proposed method uses visible patches and masked patches as the two separate sets of patches required for contrastive learning. The method also proposes a novel mask region separation strategy that makes learning more difficult in order to obtain stronger representations. This strategy uses importance sampling with weights defined similarly to the attention weights in ViT's self-attention module, with the additional global token as the query instead of local patches. Experiments have demonstrated that the representation learned by the proposed method (called ccMIM) was effective in terms of classification ability in the linear probing and finetuning tasks, and transfer ability in multiple downstream tasks.
Strengths And Weaknesses
Strength
The advantages and disadvantages of the two existing representation learning methods are well described and appropriately utilized in the method design.
The mask region separation strategy is novel and appears to be as effective as in tab 5. (The ablation experiments shown in tab 5 are well designed to support the use of the strategy.)
The learned representation through the proposed method was effective in all the evaluation tasks which are widely used in verifying the effectiveness of the representation learning.
Weaknesses
Some informative ablation studies may be necessary, e.g., the effect of various masking rates.
Detailed information on how the [CLS] token is constructed is lacking or insufficient.
It is necessary to experimentally confirm the advantages and disadvantages of the two existing representation learning methods and whether the proposed method has acquired each of these advantages well.
Clarity, Quality, Novelty And Reproducibility
Quality and novelty are sufficient to meet the ICLR standards.
It is difficult to pinpoint exactly why the method performs well because there are not a few ablation studies needed for the clarity of the methodology.
Given the limited explanation of how the [CLS] token is constructed, it is not straightforward to reproduce this method from this manuscript alone.
ICLR | Title
Contextual Image Masking Modeling via Synergized Contrasting without View Augmentation for Faster and Better Visual Pretraining
Abstract
We propose a new contextual masking image modeling (MIM) approach called contrasting-aided contextual MIM (ccMIM), under the MIM paradigm for visual pretraining. Specifically, we adopt importance sampling to select the masked patches with richer semantic information for reconstruction, instead of random sampling as done in previous MIM works. As such, the resulting patch reconstruction task from the remaining less semantic patches could be more difficult and helps to learn. To speed up the possibly slowed convergence due to our more difficult reconstruction task, we further propose a new contrastive loss that aligns the tokens of the vision transformer extracted from the selected masked patches and the remaining ones, respectively. The hope is that it serves as a regularizer for patch feature learning such that the image-level global information could be captured in both masked and unmasked patches, and notably such a single-view contrasting avoids the tedious image augmentation step required in recent efforts of introducing contrastive learning to MIM (to speedup convergence and discriminative ability). Meanwhile, the attention score from the contrastive global feature can also carry effective semantic clues to in turn guide our above masking patch selection scheme. In consequence, our contextual MIM and contrastive learning are synergetically performed in a loop (semantic patch selection-token alignment contrasting) to boost the best of the two worlds: fast convergence and strong performance on downstream tasks without ad-hoc augmentations, which are verified by empirical results on ImageNet-1K for both classification and dense vision tasks.
1 INTRODUCTION
Self-supervised learning (SSL) (Zbontar et al., 2021; Jin et al., 2022; Chen et al., 2020b) has been attracting increasing attention recently in deep learning, due to its label-free property and capability of learning rich holistic representations. Recent SSL methods mainly fall into two classes. Contrastive methods (He et al., 2020; Chen et al., 2020b; Zhang et al., 2021) construct multiple views of the given image to increase the variance and align the representations of these views in latent space. The pair-wise learning paradigm endows strong linear evaluation accuracy of contrastive methods. However, these methods tend to extract global information but ignore local information, which limits their performance on downstream dense vision tasks, e.g. detection, segmentation, etc. Besides, the two-view learning paradigm is usually sensitive to the choice of augmentation function (Chen et al., 2020c), batch size (Chen et al., 2020b; Zhang et al., 2021) and output dimension (Zbontar et al., 2021; Zhang et al., 2022) etc. With the success of recent ViT-based vision backbones, which divide images into several patches, Mask-Image-Modeling (MIM) methods (He et al., 2021; Fang et al., 2022; Chen et al., 2022) randomly mask some patches and use the self-attention mechanism to
∗Junchi Yan is the correspondence author. Rui Zhao is also with Qing Yuan Research Institute, Shanghai Jiao Tong University. This work was in part supported by NSFC (62222607), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and SenseTime Collaborative Research Grant.
recover pixel-wise information from the remaining un-masked patches. These MIM-based methods are shown can obtain a better pre-trained encoder even than contrastive approaches, and especially show strong performance on transfer learning and finetuning tasks thanks to the ability to capture local information. However, these methods can suffer from the limited discriminability for global representation with less competitive linear accuracy (Gao et al., 2022), slow convergence speed (1,600 epochs for MAE), and GPU-consuming (e.g. with batch size 4096).
Therefore, recent attempts (Wang et al., 2022; Chen et al., 2022; Yi et al., 2022) are devoted to combining contrastive learning and MIM. The hope is that both (global) discriminative information (by contrasting) and spatial-sensitive information (by MIM) can be captured. Although these approaches show improved convergence speed (in the sense of fewer epochs needed for convergence) compared with using MIM alone, they can hardly further boost the performance of MIM on downstream tasks given even more (e.g. 800, 1600) pretraining epochs. Meanwhile, such direct combing contrasting with MIM can also bring about side effects e.g. the increased sensitivity to data augmentation and more overhead at each training epoch. In this paper, we aim to improve both convergence speed and accuracy by devising a new contextual MIM scheme, which is synergistically aided by contrastive learning. The highlights of our proposed ccMIM are:
1) Novel framework for synergizing MIM and contrastive learning in a close-loop: We propose a novel contextual MIM framework by actively selecting the rich semantic patches as masking patches for reconstruction to improve the MIM learning difficulty, whereby the semantic clue is learned and measured by our devised vision transformer token alignment contrasting loss. As such, the synergizing of the two components can be fulfilled in the first place. In contrast, existing efforts on the combination of MIM and contrasting often perform these two components independently until their weighted loss is finally computed and show limited accuracy improvement (perhaps also partly due to their random patch selection strategy as observed in our ablation study in Sec. 4.3).
2) Cost-effective technical design for contextual MIM: Under the above framework, we propose to use importance sampling to contextually select the patches with richer semantic information for masking and reconstruction, to increase the learning difficulty and effectiveness. The selection is guided by the attentional semantic score derived from the [CLS] token alignment as contrastive learning between the selected and un-selected patches. Compared to the widely adopted two-view augmentation for contrasting in recent MIM works, our contrasting is efficiently performed within a single view (for speedup MIM training). Moreover, our way of using contrasting also directly guides the MIM learning instead of being two independent components as done in the existing literature.
3) Improvement over MAE: Using ViT-B (Dosovitskiy et al., 2020) as a standard protocol in MIM, ccMIM achieves 83.6%, 84.2% top-1 accuracy with 300 and 800 epochs pre-training, outperforming MAE (He et al., 2021) 0.8% (82.8% for 300 epochs) and 0.6% (83.6% for 1600 epochs) accuracy on ImageNet-1K, respectively. Source code will be made publicly available.
2 RELATED WORK
In our work, the proposed ccMIM divides image patches into two sets by information density by learning two objective functions. One is aligning the global representations of two sets and the other is reconstructing raw images from the visible set, so we briefly review contrastive learning (align representations of multiple augmented views in latent space) and the MIM-based methods (reconstruction).
Contrastive learning aims to learn instance discriminative representations to distinguish an image from the others (Hjelm et al., 2018). This is achieved by pulling together the representations of different views of the same image and pushing away the other images. Thus, most contrastive methods adopt siamese network (He et al., 2020; Grill et al., 2020; Chen & He, 2021). To create different views for the same image, well-designed data augmentation functions have been deployed (e.g., those investigated in SimCLR (Chen et al., 2020b)). To increase the number of negative pairs, Moco (He et al., 2020; Chen et al., 2020c) constructs a large queue to store negative representation in memory. Besides, Moco and BYOL use momentum and stop-gradient mechanisms are adopted to prevent degenerate solutions (Tian et al., 2021; Wang & Isola, 2020). To simplify BYOL, SimSiam (Chen & He, 2021) discards the momentum updating and finds stop-gradient and predictor head are the keys to preventing collapse. MoCo-v3 (Chen et al., 2021) and DINO (Caron et al., 2021) are based on the siamese network and extend MoCo (He et al., 2020) and BYOL (Grill et al., 2020) with Vision
Transformer (ViT) (Dosovitskiy et al., 2020) as their model backbones. Although contrastive learning methods provide discriminative features, most of them focus on learning global representations while lacking the spatial-sensitive and local representation that can be mitigated by the MIM techniques.
MIM-based methods (He et al., 2021; Xie et al., 2021) learn vision representation by reconstructing the masked patches from the partial observations. Based on the reconstruction objective, these methods can be divided into: pixel-wise reconstruction (He et al., 2021; Xie et al., 2021) and auxiliary features/tokens prediction (Dong et al., 2021; Chen et al., 2022). SimMIM (Xie et al., 2021) and MAE (He et al., 2021) are the first two methods applying mask modeling in the visual domain. They propose to reconstruct the raw pixel values from either the full set of image patches (mask tokens and visible patches for SimMIM) or partially observed patches (visible patches for MAE). Compared with SimMIM, MAE is more efficient because of dropping out a large portion of input patches. Inspired by adversarial training (Goodfellow et al., 2014), CIM (Fang et al., 2022) adds perturbations to raw images to enhance robustness for reconstruction. While GreenMIM (Huang et al., 2022) applies divide-and-conquer with hierarchical ViT backbones (Liu et al., 2021). Departure from predicting on RGB space, iBOT (Zhou et al., 2022) and AttMask (Kakogeorgiou et al., 2022) proposed to randomly and selectively (by semantic importance) mask patches and perform contrasts in latent space, respectively.
Combination of Contrasting and MIM has been recently explored with the hope to achieve both fast convergence and good accuracy. For instance, CAE (Chen et al., 2022) randomly partitions the image into two sets of patches and utilizes the visible set to reconstruct the masked patches. Then, an alignment objective is added to two sets in latent space. However, the reconstruction loss and alignment loss (as contrasting) are independent of each other and linearly combined for training, which we argue it lacks a synergistic effect between the two modules. Similar issues also exist in other attempts (Yi et al., 2022; Wang et al., 2022) with a linear combination of the two losses, although faster convergence is observed yet with little improvement on the accuracy of downstream tasks. In this paper, we aim to propose a more synergetic approach for contrasting-aided MIM.
3 METHODOLOGY
3.1 PRELIMINARIES
We denote the set of N number of images X ∈ RN×C×H×W and an encoder f for pretraining. 1) Contrastive Learning: The encoders are usually quantified as CNNs and ViTs to learn global features Z ∈ RN×D from raw images. The raw images are firstly transformed by certain augmentation to generate two views. The key step is using encoders to learn the invariance of two views, i.e. maximizing the representation similarity of the two views (alignment term in (Chen et al., 2020b; Wang & Isola, 2020)). However, directly adding the alignment part will cause degenerate solutions (Tian et al., 2021; Wang & Isola, 2020), which means all the images are encoded to the same and collapsed representation. Two techniques have been developed to avoid degenerate solutions, i.e., adopting stop-gradient to build asymmetric framework (Grill et al., 2020; He et al., 2020; Chen et al., 2020c; Caron et al., 2021; Zhou et al., 2022) and using negative samples (Chen et al., 2020b; He et al., 2020; Chen et al., 2020c), which will be used in the two versions of our method, which are respectively based on: the contrastive forms of SimSiam (Chen & He, 2021) and SimCLR (Chen et al., 2020b).
i) Adopting the asymmetric framework: the objective of SimSiam can be formulated as:
Lsimsiam = 2− sim(StopGrad(HA), p(HB))− sim(StopGrad(HB), p(HA)) (1) where StopGrad(·) means stop-gradient operations and sim(·, ·) is the similarity metric function (cosine similarity in this paper). p(·) is the predictor head (two layers MLP), which is commonly used in previous asymmetric methods (Chen & He, 2021; Grill et al., 2020). HA and HB denotes the representations of view A and view B in contrastive space, which can be obtained by H = g(Z) and g is the projector head composed of two-layer MLP (LeCun et al., 2015).
ii) Adopting the negative samples: accordingly SimCLR (Chen et al., 2020b) adopts the negativebased objective: (InfoNCE (Hjelm et al., 2018)):
Lsimclr = −EA,B [ log
esim(h A i ,h B i )/τ∑N
j=1,j ̸=i e sim(hAi ,h A j )/τ + ∑N j=1 e sim(hAi ,h B j )/τ
] (2)
Method 50 epochs (linear) 100 epochs (linear) 100 epochs (fine tune)
Top-1 Top-5 Top-1 Top-5 Top-1 Top-5
MAE 33.656 57.292 33.69 57.292 79.432 94.396
IMAE 30.852 54.104 31.300 54.452 79.024 93.237
ccMIM 30.02 53.384 41.01 65.896 80.498 94.839
where hAi (h B i ) means the representation in contrastive space of view A (B) of image i.
2) Masking Image Modeling: the common embodiment of encoders are transformer-based models, e.g. vallina ViT (Dosovitskiy et al., 2020) and Swin Transformer (Liu et al., 2021). Current MIMbased methods (He et al., 2021; Chen et al., 2022; Xie et al., 2021) basically split raw images into several patches, and randomly select patches to mask. The remaining patches are used for reconstructing the masked patches x̂M , and the pixel-wise objective is added on masked prediction:
Lrecons = 1
Ω(x̂M ) ∥x̂M − T (xM )∥p (3)
where Ω(x̂M ) is the number of masked patches and p means the p-norm (p = 2 in this paper). T is a transformation function (for MAE, T is the identity function; for MaskFeat (Wei et al., 2021), T is the HOG transformation). In this paper, we mainly follow the settings in MAE (He et al., 2021).
3.2 THE PROPOSED CCMIM: CONTRASTING-AIDED CONTEXTUAL MIM
In our proposed ccMIM, the patches are divided into two sets: masked set (composed of patches with richer contextual information) and visible set (with less contextual information, e.g. background). The visible set is used for reconstruction, and the contrastive objective is regularized on the global tokens of the two sets to help divide patches by contextual information. As illustrated in Fig. 1, ccMIM consists of five components:
• Contextual Patch Modeling. Given an input image, ccMIM first divides the patches into visible and masked sets by contextual information. Patches in the masked set will have richer contextual information to improve the difficulty of MIM. Both two sets will be fed to the backbone.
• Encoder architecture. Encoder backbone extracts a latent feature representation for the masked image, which is then used to predict the original signals at the masked area. The learned encoder is expected to be transferable to various vision tasks. In this paper, following MAE (He et al., 2021), we mainly consider ViT (Dosovitskiy et al., 2020) as backbones.
• Projector head. The projector head is applied in the contrastive branch, which forces the encoder to learn the global invariance of two sets (masked area and visible area). Specifically, we perform unidirectional alignment (Hjelm et al., 2018; Chen & He, 2021) on two sets (i.e., align visible set with masked set), since the masked set has richer contextual information.
• Decoder. The decoder will be applied to the latent feature representation of the visible set (with less contextual information) to produce one form of the original signals of the masked areas.
• Prediction target. This component defines the form of original signals to predict. It can be either the raw pixel values or a transformation of the raw pixels (HOG features).
In the following, we elaborate on the details of our approach.
Selecting and dividing strategy. We denote the image as X ∈ RC×H×W . Under the ViTbased (Dosovitskiy et al., 2020) MIM paradigm, we first split each image to several patches X ∈ Rp 2×C×Hp × W p . Then, each patch will be mapped to one vector through a linear layer W ∈ RC H p W p ×D, where D is the hidden dimension. Inspired by the transformer (Vaswani et al., 2017) in NLP, ViT adds one global token to extract global information of each image. Denote the i-th patch as xi and let x0 be global token. Then, we can calculate the importance of the patches xi by:
s = Softmax ( Eh [ q0K ⊤ 1:√
D
]) (4)
where Eh means taking the average attention value of multiple heads (Vaswani et al., 2017). q0 means the query vector of the global token, which can be calculated by q0 = xoWQ and WQ refers to the learnable weights. K1: is the key matrix (Vaswani et al., 2017) excluding the first row (global token), which is calculated by K = XWK and WK is the learnable weights. After obtaining the importance of the patches, we can selectively divide the patches into two sets, i.e., masked set M and visible set V , where M mainly includes the patches with larger importance and V is composed of the remaining patches. The main rationale behind this design is that some background patches (less attention value) may still have a little contextual information, which may be helpful for learning contextual and fine-grained information for downstream tasks (e.g., fine-grained classification, detection). Denote the probability density function of the importance is f(x) and let p(x) be a random distribution. Note π(x) is the distribution of function f on p. Then, for the expectation of f(x) on π, we have:
E[f] = ∫_x π(x) f(x) dx = ∫_x p(x) · [π(x)/p(x)] · f(x) dx    (5)
where π(x)/p(x) is the importance sampling weight. For the reweighted function f(x) · π(x)/p(x), we have:

Var_{x∼p}[ f(x) · π(x)/p(x) ] = E_{x∼p}[ (f(x) · π(x)/p(x))² ] − ( E_{x∼p}[ f(x) · π(x)/p(x) ] )² = E_{x∼π}[ f(x)² · π(x)/p(x) ] − ( E_{x∼π}[ f(x) ] )²    (6)
where π(x)/p(x) can be obtained from the observed importance values (calculated by self-attention) and a random generator. Then, we can divide the patches into the sets M and V by importance sampling.
Visible and masked patches alignment. The importance masking strategy assumes that the global token [CLS] captures the global information of the whole image. However, previous methods like MAE (He et al., 2021) place no regularization on the [CLS] token, so directly using [CLS] to partition patches may still be essentially random. To address this problem, we add another objective on the [CLS] token, i.e., a contrastive loss (InfoNCE (Chen et al., 2021) or ℓ2 regularization (Chen & He, 2021)). Denote the global representations ([CLS] token) in contrastive space of the masked set and the visible set as H^M and H^V, respectively. Then, the NCE-like objective is:
L_NCE = −E_V [ log ( exp(sim(SG(h_i^M), h_i^V)/τ) / ( Σ_{j=1, j≠i}^N exp(sim(h_i^V, h_j^V)/τ) + Σ_{j=1}^N exp(sim(SG(h_i^M), h_j^V)/τ) ) ) ]    (7)
where SG(·) is the stop-gradient operation. Compared with the SimCLR-based objective, Eq. 7 contrasts only the visible branch (gradients do not flow through h^M). This design is motivated by the fact that the masked set mainly includes the important patches, which carry richer semantic information than the visible set; hence h^M can be regarded as a teacher that guides h^V (the student). Alternatively, we can add only the alignment term on the [CLS] token, as in SimSiam (Chen & He, 2021) and BYOL (Grill et al., 2020), which can be written as:
L_align = E [ 1 − sim(StopGrad(h_i^M), p(h_i^V)) ]    (8)
where sim(x, y) = x^⊤y / (∥x∥₂ ∥y∥₂) in this paper and p(·) is the predictor head (Chen & He, 2021).
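For concreteness, a minimal PyTorch-style sketch of the two [CLS]-token objectives in Eq. 7 and Eq. 8 is given below. The tensor and function names are our own illustrative placeholders rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def nce_loss(h_masked, h_visible, tau=0.2):
    """Eq. 7: one-directional InfoNCE where the masked-set [CLS] token acts as a
    stop-gradient teacher for the visible-set [CLS] token.
    h_masked, h_visible: (N, D) projected global tokens of the two sets."""
    hm = F.normalize(h_masked.detach(), dim=-1)      # SG(.) -> detach
    hv = F.normalize(h_visible, dim=-1)
    pos = (hm * hv).sum(-1) / tau                    # sim(SG(h_i^M), h_i^V) / tau
    logits_vv = hv @ hv.t() / tau                    # sim(h_i^V, h_j^V) / tau
    eye = torch.eye(hv.shape[0], dtype=torch.bool, device=hv.device)
    logits_vv = logits_vv.masked_fill(eye, float('-inf'))   # drop the j == i term
    logits_mv = hm @ hv.t() / tau                    # sim(SG(h_i^M), h_j^V) / tau
    denom = torch.cat([logits_vv, logits_mv], dim=1).logsumexp(dim=1)
    return (denom - pos).mean()

def align_loss(h_masked, p_visible):
    """Eq. 8: SimSiam/BYOL-style cosine alignment; p_visible is the predictor
    output p(h_i^V) and the masked branch is detached (StopGrad)."""
    hm = F.normalize(h_masked.detach(), dim=-1)
    pv = F.normalize(p_visible, dim=-1)
    return (1.0 - (hm * pv).sum(-1)).mean()
```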
Image reconstruction and contrastive learning. The masked-patch reconstruction protocol in ccMIM follows the standard protocol of MAE: we predict the pixel values for the patches in the masked set. Each element in the decoder's output is a vector of pixel values representing a patch. The last layer of the decoder is a linear prediction head W ∈ R^{D×(H/p)·(W/p)}. Then, in line with MAE, the mean squared error (MSE) is computed between the reconstructed and original patches in pixel space:
L_recons = (1 / Ω(M)) · ∥x̂_M − x_M∥²    (9)
where Ω(M) is the number of elements in the masked set M. The overall objective of ccMIM is:
L_ccMIM = L_recons + λ · L_align when using the asymmetric framework, and L_ccMIM = L_recons + λ · L_NCE when using negative samples.    (10)
where λ is a hyper-parameter that balances the contrastive loss and the reconstruction loss. Note that using either of the contrastive losses (Eq. 7 or Eq. 8) alone is enough to regularize the [CLS] token. We use L_NCE by default in our main experiments, which we find to be even more stable. A comparison of the two objectives is given in the ablation study.
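As a sketch, the overall objective in Eqs. 9-10 can be assembled as follows; `nce_loss` and `align_loss` refer to the sketch above, and the remaining names are hypothetical placeholders rather than the actual training code.

```python
def ccmim_loss(pred_masked, target_masked, h_masked, h_visible,
               lam=0.2, use_nce=True):
    """pred_masked / target_masked: (num_masked_patches, patch_dim) decoder
    outputs and raw-pixel targets for the masked set; h_*: projected [CLS] tokens."""
    # Eq. 9: mean squared error averaged over the masked patches (and pixel dims).
    recons = ((pred_masked - target_masked) ** 2).mean()
    # Eq. 10: add either the NCE-style term (Eq. 7) or the asymmetric
    # alignment term (Eq. 8), weighted by lambda.
    reg = nce_loss(h_masked, h_visible) if use_nce else align_loss(h_masked, h_visible)
    return recons + lam * reg
```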
Remark. Although we also adopt a linear combination of losses as in Eq. 10, the reconstruction procedure is aided by the contrasting in a loop, which makes our method differ from other combination methods (Chen et al., 2022; Yi et al., 2022; Wang et al., 2022) that lack such synergetic connections.
3.3 METHODOLOGY COMPARISON
Note in Table 6 in Appendix, we compare recent SSL pretraining methods. In particular, here we elaborate on the connections to the most related methods CAE, MST, Repre, and SemMAE.
Relation to CAE (Chen et al., 2022). Both CAE and ccMIM partition the raw image into two sets, i.e., visible set and mask set. However, CAE divides them randomly, which is similar to MAE (He et al., 2021), limiting their performance. ccMIM partitions the two sets by importance sampling. Moreover, ccMIM adds another contrastive loss to regularize [CLS] token to help partition.
Relation to MST (Li et al., 2021). Both MST and ccMIM use a contrastive loss, a reconstruction pretext task, and a selective masking strategy. However, the motivations of MST and ccMIM are completely different. MST mainly relies on the contrastive loss to learn the invariance of two views, and its reconstruction loss contributes only marginally. Besides, MST masks the unimportant patches, which only slightly influences its performance (note that a random mask strategy causes a drop of about 10% in Top-1 accuracy). In contrast, ccMIM masks the important patches to increase the difficulty of reconstruction, and it does not require multiple views from elaborate augmentation.
Relation to Repre (Wang et al., 2022). Repre is the other method using both contrastive loss and reconstruction loss. However, Repre also requires two views from tricky augmentation and the main gain is coming from the contrastive objectives. Besides, the reconstruction branch is similar to VAE (Kingma & Welling, 2013), which takes all patches as input without any masking strategy.
Relation to SemMAE (Li et al., 2022). SemMAE is another concurrent work on attentive masking. Both ccMIM and SemMAE mask patches with richer contextual information to increase the difficulty of reconstruction. SemMAE uses a backbone pretrained by iBOT (Zhou et al., 2022) to help select contextual regions, where iBOT takes 1,600 extra epochs of pretraining, which is computationally costly. In contrast, ccMIM directly adds an alignment loss on the [CLS] token to learn the global information used to select masked regions, which requires neither extra information nor extra time.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Dataset. We perform self-supervised pre-training on the ImageNet-1K (Deng et al., 2009) training set, as commonly used in SSL (He et al., 2021; Zhang et al., 2021). It includes 1k classes which are well-balanced in distribution and the images contain an iconic view of objects. We evaluate our pretrained model on the ImageNet-1k validation set, as well as on MS-COCO (Lin et al., 2014) object detection and segmentation, ADE20K (Zhou et al., 2017) segmentation, and other small-scale classification datasets (Van Horn et al., 2018).
Backbone. We use the standard ViT architecture (Dosovitskiy et al., 2020). It has several blocks composed of multi-head self-attention (MHSA) (Vaswani et al., 2017), MLP, and LayerNorm (Ba et al., 2016). ccMIM adds one [CLS] token to both the masked set and the visible set (shared). The decoder
of ccMIM follows the settings of MAE (He et al., 2021), which includes two-layer MHSA blocks. At the evaluation stage, the average embedding of patches is used for finetuning and linear probing.
Evaluation protocols. i) Linear probing is widely used as a proxy of pretraining quality evaluation for SSL (Chen et al., 2020b; He et al., 2021; Chen et al., 2022). It learns a linear classifier over the image-level representation output from the pretrained encoder by using the labels of the images, and then tests the performance on the validation set. ii) Fine tuning is often used to evaluate the backbone in reconstructed-based methods (He et al., 2021; Chen et al., 2022). We use the same hyper-parameter with MAE, i.e., 100 epochs with learning rate 1e-3 and 1024 batch size. iii) Downstream tasks across different datasets and tasks (detection, segmentation, fine-grained classification) are also used to evaluate the transferability of the pre-trained encoder.
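As an illustration of protocol (i), a linear probe can be sketched as below; the encoder, data loader, and hyperparameters here are generic placeholders and not the exact recipe used in our experiments.

```python
import torch
import torch.nn as nn

def linear_probe(encoder, train_loader, feat_dim=768, num_classes=1000, epochs=90):
    """Freeze the pretrained encoder and train only a linear classifier on its
    image-level (average patch) embedding."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad = False
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)
    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = encoder(images)              # (B, feat_dim)
            loss = nn.functional.cross_entropy(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```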
4.2 MAIN RESULTS
Linear probing and finetune on ImageNet-1K. Table 1 shows the linear probing and fine-tuning accuracy on the ImageNet-1k dataset. We mainly compare our method with transformer-based contrastive learning methods (Caron et al., 2021; Chen et al., 2021) and MIM-based methods (He et al., 2021; Xie et al., 2021). For 300 epochs of pretraining, our ccMIM outperforms the MAE baseline by +0.8% top-1 accuracy. For 800 epochs of pretraining, ccMIM achieves 84.2% top-1 accuracy, outperforming MAE with 1600 epochs of pretraining by +0.6%. The concurrent method ConMIM uses two views obtained by non-trivial augmentation in the pre-training step, i.e., multiple views are required to enable its contrastive objective. Although ccMIM also has a contrastive objective, it only requires a single view. Besides, ConMIM gets 83.7% accuracy with 800 epochs of pretraining, only 0.2% higher than with 300 epochs of pretraining. The reason may be that the contrastive objective in ConMIM only improves the convergence speed but not the final performance. In contrast, ccMIM gets 84.2% top-1 accuracy with 800 epochs of pretraining (0.6% higher than MAE and than ccMIM with 300 epochs of pretraining), which may be because the contrastive objective in ccMIM helps select contextual patches and the final gain comes from recovering patches with richer contextual information (ConMIM masks patches at random, so it may mask patches with less contextual information). Besides, we also find that ccMIM surpasses MAE by +4.5% and +2.1% top-1 accuracy under the linear probing protocol. We attribute this to the global-discriminative objective added by ccMIM (see the ablation study in Sec. 4.3).
Object detection and segmentation on MS-COCO. Setups. Following CAE (Chen et al., 2022), we adopt Mask R-CNN (He et al., 2017) that produces bounding boxes and instance masks simultaneously, with the ViT as the backbone (See the supplementary file for training details). We apply the same object detection system to the methods. We report the box AP for object detection and the mask AP for instance segmentation. We utilize multi-scale training and resize the image with the size of the short side between 480 and 800 and the longer side no larger than 1,333. The batch size is 32. For ViT-S, the learning rate is 3e-4, the layer-wise decay rate is 0.75, and the drop path rate is 0.1. For ViT-B, the learning rate is 3e-4, the layer-wise decay rate is 0.75, and the drop path
Semantic segmentation on ADE20K. Setups. We evaluate semantic segmentation performances of the pre-trained models on ADE20K (Zhou et al., 2017), which includes 150 fine-grained semantic categories and 25k training data. In line with iBOT (Zhou et al., 2022), we report the results by three metrics: mean intersection of union (mIoU) averaged over all semantic categories, all pixel accuracy (aAcc), and mean class accuracy (mAcc). We pretrain ccMIM with 800 epochs and finetune it in FPN (Lin et al., 2017) with batch size 16. For ViT-B, the layer-wise decay rate is 0.65, and the drop path rate is 0.1. For other methods, we download their pretrained models and finetune them in our setting for fair comparisons and Table 3 shows the finetune results. The proposed ccMIM gets state-of-the-art accuracy among all the contrastive-based and MIM-based methods. Specifically, for 800 epochs pretraining, ccMIM outperforms MAE and CAE +1.7% and +0.5% mIoU, respectively.
Transfer classification on other datasets. In line with previous SSL methods, we conduct a set of experiments to evaluate the transfer accuracy on fine-grained classification datasets (iNaturalist (Van Horn et al., 2018) and Flowers). We mainly compare our method with ClusterFit (Yan et al., 2020), SNCA+ (Wu et al., 2018), Grafit (Touvron et al., 2021b), iBOT (Zhou et al., 2022), DeiT (Touvron et al., 2021a) and MAE (He et al., 2021). We finetune the pretrained encoder on iNaturalist datasets in 360 epochs with the learning rate 3e-4. For all SSL methods, we choose ViT-B/16 for comparisons. The results in Table 4 show that ccMIM obtains the highest accuracy compared with previous supervised and self-supervised methods. Specifically, ccMIM gets 0.5%, 0.2% and 0.2% higher accuracy than MAE on iNaturalist2019, iNaturalist2018 and Flowers-102, respectively.
4.3 ABLATION STUDY
For λ > 0.2, the accuracy drops as λ increases. When λ > 0.8, ccMIM with L_NCE obtains lower accuracy than MAE; we attribute this to the fact that the gain of ccMIM comes from contextual masking, while the contrastive loss is only used to select semantically richer patches and not to directly improve accuracy. When λ = 0, there is no regularization on the [CLS] token, leading to lower accuracy than λ = 0.1 and λ = 0.2.
Dividing strategy     | 100 epochs            | 400 epochs
                      | Linear    Finetune    | Linear    Finetune
random                | 33.6      79.4        | 63.1      82.9
block-wise            | 32.9      79.2        | 62.0      82.8
importance            | 31.1      78.8        | 66.8      83.6
importance sampling   | 41.1      80.5        | 67.4      83.8
For the block-wise dividing strategy, we mask one block of patches at a time. For each block, we set the minimum number of patches to 16. Then we randomly choose an aspect ratio for the masking block. We repeat the above two steps until obtaining enough masked patches. We pretrain ccMIM distributed over 32 GPUs for 100 and 400 epochs with a batch size of 1024. We report linear and finetune accuracy in Table 5. In our experiments, the block-wise dividing strategy obtains slightly lower accuracy than random dividing, which is in line with the previous methods MAE (He et al., 2021) and SimMIM (Xie et al., 2021). We also find that with 100 epochs of pretraining, the importance-based dividing strategy obtains much lower accuracy than the random and block-wise methods. However, with 400 epochs of pretraining, the importance-based method obtains higher accuracy than the random and block-wise methods. We think this is because the importance-based strategy slows down convergence (patches with richer contextual information are more difficult to reconstruct). The importance-sampling-based method outperforms the other dividing methods in both 100- and 400-epoch pretraining. We conjecture that this is because, for the first few epochs, the variance of the attention values of different patches is small. Thus, with the random sampling operation, ccMIM also places some patches with less contextual information into the masked set. The reconstruction task is then not too difficult, leading to faster convergence than the importance-based strategy. For 400 epochs of pretraining, the [CLS] token can easily place patches with richer contextual information into the masked set, leading to higher accuracy (+4.3%) than the random and block-wise methods.
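For reference, the block-wise dividing baseline described above can be sketched as follows; the patch-grid size and aspect-ratio range are our own assumptions for illustration.

```python
import random

def blockwise_mask(grid=14, mask_ratio=0.75, min_patches=16):
    """Repeatedly place random rectangular blocks (>= min_patches each, random
    aspect ratio) until the target number of patches is masked."""
    target = int(grid * grid * mask_ratio)
    masked = set()
    while len(masked) < target:
        area = random.randint(min_patches, max(min_patches, target - len(masked)))
        ratio = random.uniform(0.3, 1.0 / 0.3)       # random aspect ratio
        h = min(grid, max(1, int(round((area * ratio) ** 0.5))))
        w = min(grid, max(1, int(round((area / ratio) ** 0.5))))
        top = random.randint(0, grid - h)
        left = random.randint(0, grid - w)
        for r in range(top, top + h):
            for c in range(left, left + w):
                masked.add(r * grid + c)
    return sorted(masked)
```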
5 CONCLUSION
We have presented a novel contextual MIM framework dubbed ccMIM, whereby masked patch selection is aided by a contrast between the masked patches and unmasked ones. The reconstruction difficulty is increased as the masked patches are more semantically rich than random sampling as done in peer MIM methods. The resulting combined approach shows notable performance gain in both convergence speed and accuracy compared to baselines on public benchmarks across different tasks from classification to dense tasks including detection and segmentation.
A METHODOLOGY COMPARISON
To better position our approach, we list recent contrastive and mask image modeling methods in Tab. 6.
B MASK ILLUSTRATION
For a better understanding of our work, we illustrate how ccMIM divides the patches into visible and masked sets in Fig. 3(a). We visualize four pretrained models, i.e., ccMIM at iterations 0, 1, and 800, and the importance-based dividing from Sec. 4.3. When Iter = 0, the [CLS] token is randomly initialized, so ccMIM divides the patches into the masked and visible sets at random. After 800 epochs of pretraining, ccMIM divides patches with richer contextual information into the masked set, while the remaining patches with less contextual information go to the visible set, increasing the difficulty of reconstruction. We further illustrate the reconstruction loss curves of MAE and ccMIM for the first 50 epochs of pretraining in Fig. 3(b). The loss values of ccMIM are higher than those of MAE, demonstrating that reconstruction is more difficult for ccMIM than for MAE.
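To make the selection step behind these visualizations concrete, a minimal sketch of the attention-based importance scoring (Eq. 4) and the importance-sampling division is given below; the attention-tensor layout and the mask ratio are assumptions for illustration, not the exact released code.

```python
import torch

def divide_patches(attn, mask_ratio=0.75):
    """attn: (B, num_heads, 1 + P, 1 + P) self-attention of a ViT block, with
    index 0 being the [CLS] token. Returns masked / visible patch indices."""
    # Eq. 4: average over heads the attention the [CLS] token pays to each patch.
    cls_to_patch = attn.mean(dim=1)[:, 0, 1:]                    # (B, P)
    scores = torch.softmax(cls_to_patch, dim=-1)                 # importance s
    # Importance sampling (Eqs. 5-6): sample patches with probability
    # proportional to s, so a few low-score (background) patches can
    # still end up in the masked set.
    num_mask = int(scores.shape[1] * mask_ratio)
    masked_idx = torch.multinomial(scores, num_mask, replacement=False)
    keep = torch.ones_like(scores, dtype=torch.bool)
    keep.scatter_(1, masked_idx, False)
    visible_idx = keep.nonzero(as_tuple=False)[:, 1].view(scores.shape[0], -1)
    return masked_idx, visible_idx
```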
C IMPLEMENTATION DETAILS
Pre-training hyper-parameters. In line with MAE, we use AdamW as the optimizer with a base learning rate of 1.5e-4 and weight decay of 0.05. We set the balance ratio λ = 0.2 and use L_NCE in Eq. 7 as the regularization loss by default. For computational tractability, we set the batch size to 1024, with distributed pretraining on 32 NVIDIA 1080Ti GPUs. Similar to previous methods in SSL (Xie et al., 2021), we adopt a cosine learning-rate scheduler and use only the random resized crop for augmentation. | 1. What is the main contribution of the paper, and how does it improve upon previous methods?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its interaction with the CLS token and the importance ranking generated?
3. How does the reviewer assess the novelty and reproducibility of the paper's content?
4. Are there any concerns about the extra computation costs of feeding both visible and masked patches into the ViT encoder?
5. Would applying the sampling strategy to contrastive learning methods produce consistent performance gains?
6. How do the experimental results support or limit the effectiveness of the proposed method, especially compared to larger architectures like ViT-L? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a contrasting-aided contextual masked image modeling framework, termed ccMIM. The main motivation behind this paper is to use CLS tokens to mask semantic patches so that the reconstruction target could be more difficult. To make the CLS token contains more semantic context, ccMIM introduces a global alignment loss (InfoNCE or Self-Distillation) on the CLS token. Lots of experiments are conducted to show the superiority of ccMIM.
Strengths And Weaknesses
Strengths
The motivation makes sense. The paper writing is easy to understand.
Results on image classification / object detection / semantic segmentation all show improvements.
Weaknesses
As shown in Fig.1, the importance score of each patch is computed with the CLS token before feeding into ViT-Encoder. This means the CLS token has no interaction with image patches. So is the importance ranking generated from the CLS token really reliable? More descriptions and visualizations should be added for clear understanding.
Bringing contrastive learning to MIM is already well-studied in many works. The more important component in this paper is the importance-based sampling strategy. However, sampling semantic-less patches for harder self-supervised learning objectives is suitable not only for MIM but also for contrastive learning methods. It is important to apply such a sampling strategy to contrastive learning methods (e.g., BYOL [Grill et al., 2020] or SimCLR [Chen et al., 2020b]) and see if consistent performance gains could be achieved.
ccMIM feeds both visible patches and masked patches into ViT encoder. The extra computation costs are not mentioned in the paper.
Improvements compared to previous methods are a bit marginal.
Experiments on larger architectures (e.g., ViT-L) could bring more authority.
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is easy to follow despite some details needing clarification (see Weaknesses).
Quality: The paper addresses a practical problem in an intuitive way, but the experimental results seem insufficient to fully validate the method.
Novelty: Bringing contrastive learning to MIM is already well-studied in many works. The importance-based sampling strategy is new to the self-supervised learning community, but it would be better to discuss such a strategy in the context of contrastive learning.
Reproducibility: Hyper-parameters and implementation details are all included in the paper. |
ICLR | Title
Contextual Image Masking Modeling via Synergized Contrasting without View Augmentation for Faster and Better Visual Pretraining
Abstract
We propose a new contextual masking image modeling (MIM) approach called contrasting-aided contextual MIM (ccMIM), under the MIM paradigm for visual pretraining. Specifically, we adopt importance sampling to select the masked patches with richer semantic information for reconstruction, instead of random sampling as done in previous MIM works. As such, the resulting patch reconstruction task from the remaining less semantic patches could be more difficult and helps to learn. To speed up the possibly slowed convergence due to our more difficult reconstruction task, we further propose a new contrastive loss that aligns the tokens of the vision transformer extracted from the selected masked patches and the remaining ones, respectively. The hope is that it serves as a regularizer for patch feature learning such that the image-level global information could be captured in both masked and unmasked patches, and notably such a single-view contrasting avoids the tedious image augmentation step required in recent efforts of introducing contrastive learning to MIM (to speedup convergence and discriminative ability). Meanwhile, the attention score from the contrastive global feature can also carry effective semantic clues to in turn guide our above masking patch selection scheme. In consequence, our contextual MIM and contrastive learning are synergetically performed in a loop (semantic patch selection-token alignment contrasting) to boost the best of the two worlds: fast convergence and strong performance on downstream tasks without ad-hoc augmentations, which are verified by empirical results on ImageNet-1K for both classification and dense vision tasks.
1 INTRODUCTION
Self-supervised learning (SSL) (Zbontar et al., 2021; Jin et al., 2022; Chen et al., 2020b) has been attracting increasing attention in deep learning, due to its label-free property and its capability of learning rich holistic representations. Recent SSL methods mainly fall into two classes. Contrastive methods (He et al., 2020; Chen et al., 2020b; Zhang et al., 2021) construct multiple views of a given image to increase the variance and align the representations of these views in latent space. The pair-wise learning paradigm endows contrastive methods with strong linear evaluation accuracy. However, these methods tend to extract global information but ignore local information, which limits their performance on downstream dense vision tasks, e.g., detection and segmentation. Besides, the two-view learning paradigm is usually sensitive to the choice of augmentation function (Chen et al., 2020c), batch size (Chen et al., 2020b; Zhang et al., 2021), output dimension (Zbontar et al., 2021; Zhang et al., 2022), etc. With the success of recent ViT-based vision backbones, which divide images into several patches, Masked Image Modeling (MIM) methods (He et al., 2021; Fang et al., 2022; Chen et al., 2022) randomly mask some patches and use the self-attention mechanism to recover pixel-wise information from the remaining un-masked patches. These MIM-based methods have been shown to obtain a better pre-trained encoder than even contrastive approaches, and in particular show strong performance on transfer learning and finetuning tasks thanks to the ability to capture local information. However, these methods can suffer from limited discriminability of the global representation with less competitive linear accuracy (Gao et al., 2022), slow convergence (1,600 epochs for MAE), and heavy GPU consumption (e.g., with batch size 4096).
∗Junchi Yan is the corresponding author. Rui Zhao is also with Qing Yuan Research Institute, Shanghai Jiao Tong University. This work was in part supported by NSFC (62222607), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and SenseTime Collaborative Research Grant.
Therefore, recent attempts (Wang et al., 2022; Chen et al., 2022; Yi et al., 2022) are devoted to combining contrastive learning and MIM. The hope is that both (global) discriminative information (by contrasting) and spatially sensitive information (by MIM) can be captured. Although these approaches show improved convergence speed (in the sense of fewer epochs needed for convergence) compared with using MIM alone, they can hardly further boost the performance of MIM on downstream tasks given even more (e.g., 800, 1600) pretraining epochs. Meanwhile, such a direct combination of contrasting with MIM can also bring side effects, e.g., increased sensitivity to data augmentation and more overhead at each training epoch. In this paper, we aim to improve both convergence speed and accuracy by devising a new contextual MIM scheme, which is synergistically aided by contrastive learning. The highlights of our proposed ccMIM are:
1) Novel framework for synergizing MIM and contrastive learning in a closed loop: We propose a novel contextual MIM framework that actively selects semantically rich patches as the masking patches for reconstruction to increase the difficulty of MIM learning, whereby the semantic clue is learned and measured by our devised vision transformer token alignment contrasting loss. As such, the synergizing of the two components can be fulfilled in the first place. In contrast, existing efforts on the combination of MIM and contrasting often perform these two components independently until their weighted loss is finally computed and show limited accuracy improvement (perhaps also partly due to their random patch selection strategy, as observed in our ablation study in Sec. 4.3).
2) Cost-effective technical design for contextual MIM: Under the above framework, we propose to use importance sampling to contextually select the patches with richer semantic information for masking and reconstruction, to increase the learning difficulty and effectiveness. The selection is guided by the attentional semantic score derived from the [CLS]-token alignment that serves as contrastive learning between the selected and un-selected patches. Compared to the widely adopted two-view augmentation for contrasting in recent MIM works, our contrasting is efficiently performed within a single view (to speed up MIM training). Moreover, our way of using contrasting also directly guides the MIM learning, instead of the two being independent components as in the existing literature.
3) Improvement over MAE: Using ViT-B (Dosovitskiy et al., 2020) as the standard protocol in MIM, ccMIM achieves 83.6% and 84.2% top-1 accuracy with 300 and 800 epochs of pre-training, outperforming MAE (He et al., 2021) by 0.8% (82.8% for 300 epochs) and 0.6% (83.6% for 1600 epochs) on ImageNet-1K, respectively. Source code will be made publicly available.
2 RELATED WORK
In our work, the proposed ccMIM divides image patches into two sets by information density by learning two objective functions. One is aligning the global representations of two sets and the other is reconstructing raw images from the visible set, so we briefly review contrastive learning (align representations of multiple augmented views in latent space) and the MIM-based methods (reconstruction).
Contrastive learning aims to learn instance-discriminative representations that distinguish an image from the others (Hjelm et al., 2018). This is achieved by pulling together the representations of different views of the same image and pushing away those of other images. Thus, most contrastive methods adopt a siamese network (He et al., 2020; Grill et al., 2020; Chen & He, 2021). To create different views of the same image, well-designed data augmentation functions have been deployed (e.g., those investigated in SimCLR (Chen et al., 2020b)). To increase the number of negative pairs, MoCo (He et al., 2020; Chen et al., 2020c) constructs a large queue to store negative representations in memory. Besides, MoCo and BYOL adopt momentum and stop-gradient mechanisms to prevent degenerate solutions (Tian et al., 2021; Wang & Isola, 2020). To simplify BYOL, SimSiam (Chen & He, 2021) discards the momentum updating and finds that the stop-gradient and the predictor head are the keys to preventing collapse. MoCo-v3 (Chen et al., 2021) and DINO (Caron et al., 2021) are based on the siamese network and extend MoCo (He et al., 2020) and BYOL (Grill et al., 2020) with the Vision
Transformer (ViT) (Dosovitskiy et al., 2020) as their model backbones. Although contrastive learning methods provide discriminative features, most of them focus on learning global representations while lacking the spatial-sensitive and local representation that can be mitigated by the MIM techniques.
MIM-based methods (He et al., 2021; Xie et al., 2021) learn visual representations by reconstructing the masked patches from partial observations. Based on the reconstruction objective, these methods can be divided into pixel-wise reconstruction (He et al., 2021; Xie et al., 2021) and auxiliary feature/token prediction (Dong et al., 2021; Chen et al., 2022). SimMIM (Xie et al., 2021) and MAE (He et al., 2021) are the first two methods applying mask modeling in the visual domain. They propose to reconstruct the raw pixel values from either the full set of image patches (mask tokens and visible patches for SimMIM) or the partially observed patches (visible patches for MAE). Compared with SimMIM, MAE is more efficient because it drops a large portion of input patches. Inspired by adversarial training (Goodfellow et al., 2014), CIM (Fang et al., 2022) adds perturbations to raw images to enhance robustness for reconstruction, while GreenMIM (Huang et al., 2022) applies divide-and-conquer with hierarchical ViT backbones (Liu et al., 2021). Departing from predicting in RGB space, iBOT (Zhou et al., 2022) and AttMask (Kakogeorgiou et al., 2022) propose to mask patches randomly and selectively (by semantic importance), respectively, and perform the contrast in latent space.
Combination of contrasting and MIM has recently been explored with the hope of achieving both fast convergence and good accuracy. For instance, CAE (Chen et al., 2022) randomly partitions the image into two sets of patches and utilizes the visible set to reconstruct the masked patches. Then, an alignment objective is added on the two sets in latent space. However, the reconstruction loss and the alignment loss (as contrasting) are independent of each other and linearly combined for training, which we argue lacks a synergistic effect between the two modules. Similar issues also exist in other attempts (Yi et al., 2022; Wang et al., 2022) with a linear combination of the two losses: although faster convergence is observed, there is little improvement in accuracy on downstream tasks. In this paper, we aim to propose a more synergetic approach for contrasting-aided MIM.
3 METHODOLOGY
3.1 PRELIMINARIES
We denote the set of N images as X ∈ R^{N×C×H×W} and the encoder for pretraining as f. 1) Contrastive Learning: The encoders are usually instantiated as CNNs or ViTs that learn global features Z ∈ R^{N×D} from raw images. The raw images are first transformed by certain augmentations to generate two views. The key step is using the encoder to learn the invariance of the two views, i.e., maximizing the representation similarity of the two views (the alignment term in (Chen et al., 2020b; Wang & Isola, 2020)). However, directly optimizing only the alignment part will cause degenerate solutions (Tian et al., 2021; Wang & Isola, 2020), which means all images are encoded to the same, collapsed representation. Two techniques have been developed to avoid degenerate solutions, i.e., adopting stop-gradient to build an asymmetric framework (Grill et al., 2020; He et al., 2020; Chen et al., 2020c; Caron et al., 2021; Zhou et al., 2022) and using negative samples (Chen et al., 2020b; He et al., 2020; Chen et al., 2020c). Both are used in the two versions of our method, which are respectively based on the contrastive forms of SimSiam (Chen & He, 2021) and SimCLR (Chen et al., 2020b).
i) Adopting the asymmetric framework: the objective of SimSiam can be formulated as:
L_simsiam = 2 − sim(StopGrad(H_A), p(H_B)) − sim(StopGrad(H_B), p(H_A))    (1)
where StopGrad(·) denotes the stop-gradient operation and sim(·, ·) is the similarity metric (cosine similarity in this paper). p(·) is the predictor head (a two-layer MLP), which is commonly used in previous asymmetric methods (Chen & He, 2021; Grill et al., 2020). H_A and H_B denote the representations of view A and view B in contrastive space, obtained by H = g(Z), where g is the projector head composed of a two-layer MLP (LeCun et al., 2015).
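For illustration, Eq. 1 can be written in PyTorch-style pseudocode as below; the module names are generic placeholders and not the original implementation.

```python
import torch
import torch.nn.functional as F

def simsiam_loss(h_a, h_b, predictor):
    """h_a, h_b: (N, D) projections of two augmented views; predictor is p(.)."""
    def cos(p, z):
        # The target z is treated as a constant via stop-gradient (detach).
        return (F.normalize(p, dim=-1) * F.normalize(z.detach(), dim=-1)).sum(-1)
    return (2.0 - cos(predictor(h_b), h_a) - cos(predictor(h_a), h_b)).mean()
```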
ii) Adopting the negative samples: accordingly SimCLR (Chen et al., 2020b) adopts the negativebased objective: (InfoNCE (Hjelm et al., 2018)):
L_simclr = −E_{A,B} [ log ( exp(sim(h_i^A, h_i^B)/τ) / ( Σ_{j=1, j≠i}^N exp(sim(h_i^A, h_j^A)/τ) + Σ_{j=1}^N exp(sim(h_i^A, h_j^B)/τ) ) ) ]    (2)
Method   | 50 epochs (linear)   | 100 epochs (linear)  | 100 epochs (fine tune)
         | Top-1     Top-5      | Top-1     Top-5      | Top-1     Top-5
MAE      | 33.656    57.292     | 33.69     57.292     | 79.432    94.396
IMAE     | 30.852    54.104     | 31.300    54.452     | 79.024    93.237
ccMIM    | 30.02     53.384     | 41.01     65.896     | 80.498    94.839
where h_i^A (h_i^B) means the representation in contrastive space of view A (B) of image i.
2) Masking Image Modeling: the encoders are commonly embodied as transformer-based models, e.g., vanilla ViT (Dosovitskiy et al., 2020) and Swin Transformer (Liu et al., 2021). Current MIM-based methods (He et al., 2021; Chen et al., 2022; Xie et al., 2021) basically split raw images into several patches and randomly select patches to mask. The remaining patches are used to reconstruct the masked patches x̂_M, and a pixel-wise objective is imposed on the masked prediction:
L_recons = (1 / Ω(x̂_M)) · ∥x̂_M − T(x_M)∥_p    (3)
where Ω(x̂M ) is the number of masked patches and p means the p-norm (p = 2 in this paper). T is a transformation function (for MAE, T is the identity function; for MaskFeat (Wei et al., 2021), T is the HOG transformation). In this paper, we mainly follow the settings in MAE (He et al., 2021).
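A minimal sketch of Eq. 3 is given below; for MAE-style training T is the identity, while a HOG extractor could be plugged in for MaskFeat-style targets. All names here are illustrative assumptions rather than an official API.

```python
import torch

def mim_loss(pred_masked, raw_masked, transform=lambda x: x, p=2):
    """pred_masked: (num_masked, patch_dim) decoder outputs for masked patches;
    raw_masked:  matching raw patches; transform: the target transform T."""
    target = transform(raw_masked)
    # For p = 2 this reduces to a sum-of-squares (MSE-style) loss,
    # normalized by the number of masked patches.
    return ((pred_masked - target).abs() ** p).sum() / pred_masked.shape[0]
```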
3.2 THE PROPOSED CCMIM: CONTRASTING-AIDED CONTEXTUAL MIM
In our proposed ccMIM, the patches are divided into two sets: a masked set (composed of patches with richer contextual information) and a visible set (with less contextual information, e.g., background). The visible set is used to reconstruct the masked set, and a contrastive objective regularizes the global tokens of the two sets to help divide the patches by contextual information. As illustrated in Fig. 1, ccMIM consists of five components:
• Contextual Patch Modeling. Given an input image, ccMIM first divides the patches into visible and masked sets by contextual information. Patches in the masked set have richer contextual information, which increases the difficulty of MIM. Both sets are fed to the backbone.
• Encoder architecture. Encoder backbone extracts a latent feature representation for the masked image, which is then used to predict the original signals at the masked area. The learned encoder is expected to be transferable to various vision tasks. In this paper, following MAE (He et al., 2021), we mainly consider ViT (Dosovitskiy et al., 2020) as backbones.
• Projector head. The projector head is applied in the contrastive branch, which forces the encoder to learn the global invariance of two sets (masked area and visible area). Specifically, we perform unidirectional alignment (Hjelm et al., 2018; Chen & He, 2021) on two sets (i.e., align visible set with masked set), since the masked set has richer contextual information.
• Decoder. The decoder will be applied to the latent feature representation of the visible set (with less contextual information) to produce one form of the original signals of the masked areas.
• Prediction target. This component defines the form of original signals to predict. It can be either the raw pixel values or a transformation of the raw pixels (HOG features).
In the following, we elaborate on the details of our approach.
Selecting and dividing strategy. We denote the image as X ∈ R^{C×H×W}. Under the ViT-based (Dosovitskiy et al., 2020) MIM paradigm, we first split each image into p^2 patches, X ∈ R^{p^2×C×(H/p)×(W/p)}. Then, each patch is mapped to one vector through a linear layer W ∈ R^{C·(H/p)·(W/p)×D}, where D is the hidden dimension. Inspired by the transformer (Vaswani et al., 2017) in NLP, ViT adds one global token to extract global information from each image. Denote the i-th patch as x_i and let x_0 be the global token. Then, we can calculate the importance of the patches x_i by:
s = Softmax( E_h[ q_0 K_{1:}^⊤ / √D ] )    (4)
where E_h denotes averaging the attention values over the multiple heads (Vaswani et al., 2017). q_0 is the query vector of the global token, computed as q_0 = x_0 W_Q, where W_Q are learnable weights. K_{1:} is the key matrix (Vaswani et al., 2017) excluding the first row (the global token), computed from K = X W_K with learnable weights W_K. After obtaining the importance of the patches, we can selectively divide the patches into two sets, i.e., the masked set M and the visible set V, where M mainly includes the patches with larger importance and V is composed of the remaining patches. The main rationale behind this design is that some background patches (with lower attention values) may still carry a little contextual information, which can be helpful for learning contextual and fine-grained information for downstream tasks (e.g., fine-grained classification, detection). Denote the probability density function of the importance as f(x) and let p(x) be a random proposal distribution; let π(x) be the target distribution under which f is evaluated. Then, for the expectation of f(x) under π, we have:
E[f] = ∫_x π(x) f(x) dx = ∫_x p(x) · [π(x)/p(x)] · f(x) dx    (5)
where π(x)/p(x) is the importance sampling weight. For the reweighted function f(x) · π(x)/p(x), we have:

Var_{x∼p}[ f(x) · π(x)/p(x) ] = E_{x∼p}[ (f(x) · π(x)/p(x))² ] − ( E_{x∼p}[ f(x) · π(x)/p(x) ] )² = E_{x∼π}[ f(x)² · π(x)/p(x) ] − ( E_{x∼π}[ f(x) ] )²    (6)
where π(x)/p(x) can be obtained from the observed importance values (calculated by self-attention) and a random generator. Then, we can divide the patches into the sets M and V by importance sampling.
Visible and masked patches alignment. The importance masking strategy assumes that the global token [CLS] captures the global information of the whole image. However, previous methods like MAE (He et al., 2021) place no regularization on the [CLS] token, so directly using [CLS] to partition patches may still be essentially random. To address this problem, we add another objective on the [CLS] token, i.e., a contrastive loss (InfoNCE (Chen et al., 2021) or ℓ2 regularization (Chen & He, 2021)). Denote the global representations ([CLS] token) in contrastive space of the masked set and the visible set as H^M and H^V, respectively. Then, the NCE-like objective is:
L_NCE = −E_V [ log ( exp(sim(SG(h_i^M), h_i^V)/τ) / ( Σ_{j=1, j≠i}^N exp(sim(h_i^V, h_j^V)/τ) + Σ_{j=1}^N exp(sim(SG(h_i^M), h_j^V)/τ) ) ) ]    (7)
where SG(·) is the stop-gradient operation. Compared with the SimCLR-based objective, Eq. 7 contrasts only the visible branch (gradients do not flow through h^M). This design is motivated by the fact that the masked set mainly includes the important patches, which carry richer semantic information than the visible set; hence h^M can be regarded as a teacher that guides h^V (the student). Alternatively, we can add only the alignment term on the [CLS] token, as in SimSiam (Chen & He, 2021) and BYOL (Grill et al., 2020), which can be written as:
L_align = E [ 1 − sim(StopGrad(h_i^M), p(h_i^V)) ]    (8)
where sim(x, y) = x^⊤y / (∥x∥₂ ∥y∥₂) in this paper and p(·) is the predictor head (Chen & He, 2021).
Image reconstruction and contrastive learning. The masked-patch reconstruction protocol in ccMIM follows the standard protocol of MAE: we predict the pixel values for the patches in the masked set. Each element in the decoder's output is a vector of pixel values representing a patch. The last layer of the decoder is a linear prediction head W ∈ R^{D×(H/p)·(W/p)}. Then, in line with MAE, the mean squared error (MSE) is computed between the reconstructed and original patches in pixel space:
L_recons = (1 / Ω(M)) · ∥x̂_M − x_M∥²    (9)
where Ω(M) is the number of elements in the masked set M. The overall objective of ccMIM is:
L_ccMIM = L_recons + λ · L_align when using the asymmetric framework, and L_ccMIM = L_recons + λ · L_NCE when using negative samples.    (10)
where λ is a hyper-parameter that balances the contrastive loss and the reconstruction loss. Note that using either of the contrastive losses (Eq. 7 or Eq. 8) alone is enough to regularize the [CLS] token. We use L_NCE by default in our main experiments, which we find to be even more stable. A comparison of the two objectives is given in the ablation study.
Remark. Although we also adopt a linear combination of losses as in Eq. 10, the reconstruction procedure is aided by the contrasting in a loop, which makes our method differ from other combination methods (Chen et al., 2022; Yi et al., 2022; Wang et al., 2022) that lack such synergetic connections.
3.3 METHODOLOGY COMPARISON
Note in Table 6 in Appendix, we compare recent SSL pretraining methods. In particular, here we elaborate on the connections to the most related methods CAE, MST, Repre, and SemMAE.
Relation to CAE (Chen et al., 2022). Both CAE and ccMIM partition the raw image into two sets, i.e., visible set and mask set. However, CAE divides them randomly, which is similar to MAE (He et al., 2021), limiting their performance. ccMIM partitions the two sets by importance sampling. Moreover, ccMIM adds another contrastive loss to regularize [CLS] token to help partition.
Relation to MST (Li et al., 2021). Both MST and ccMIM use a contrastive loss, a reconstruction pretext task, and a selective masking strategy. However, the motivations of MST and ccMIM are completely different. MST mainly relies on the contrastive loss to learn the invariance of two views, and its reconstruction loss contributes only marginally. Besides, MST masks the unimportant patches, which only slightly influences its performance (note that a random mask strategy causes a drop of about 10% in Top-1 accuracy). In contrast, ccMIM masks the important patches to increase the difficulty of reconstruction, and it does not require multiple views from elaborate augmentation.
Relation to Repre (Wang et al., 2022). Repre is the other method using both contrastive loss and reconstruction loss. However, Repre also requires two views from tricky augmentation and the main gain is coming from the contrastive objectives. Besides, the reconstruction branch is similar to VAE (Kingma & Welling, 2013), which takes all patches as input without any masking strategy.
Relation to SemMAE (Li et al., 2022). SemMAE is another concurrent work on attentive masking. Both ccMIM and SemMAE mask patches with richer contextual information to increase the difficulty of reconstruction. SemMAE uses a backbone pretrained by iBOT (Zhou et al., 2022) to help select contextual regions, where iBOT takes 1,600 extra epochs of pretraining, which is computationally costly. In contrast, ccMIM directly adds an alignment loss on the [CLS] token to learn the global information used to select masked regions, which requires neither extra information nor extra time.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Dataset. We perform self-supervised pre-training on the ImageNet-1K (Deng et al., 2009) training set, as commonly used in SSL (He et al., 2021; Zhang et al., 2021). It includes 1k classes which are well-balanced in distribution and the images contain an iconic view of objects. We evaluate our pretrained model on the ImageNet-1k validation set, as well as on MS-COCO (Lin et al., 2014) object detection and segmentation, ADE20K (Zhou et al., 2017) segmentation, and other small-scale classification datasets (Van Horn et al., 2018).
Backbone. We use the standard ViT architecture (Dosovitskiy et al., 2020). It has several blocks composed of multi-head self-attention (MHSA) (Vaswani et al., 2017), MLP, and LayerNorm (Ba et al., 2016). ccMIM adds one [CLS] token to both the masked set and the visible set (shared). The decoder
of ccMIM follows the settings of MAE (He et al., 2021), which includes two-layer MHSA blocks. At the evaluation stage, the average embedding of patches is used for finetuning and linear probing.
Evaluation protocols. i) Linear probing is widely used as a proxy of pretraining quality evaluation for SSL (Chen et al., 2020b; He et al., 2021; Chen et al., 2022). It learns a linear classifier over the image-level representation output from the pretrained encoder by using the labels of the images, and then tests the performance on the validation set. ii) Fine tuning is often used to evaluate the backbone in reconstructed-based methods (He et al., 2021; Chen et al., 2022). We use the same hyper-parameter with MAE, i.e., 100 epochs with learning rate 1e-3 and 1024 batch size. iii) Downstream tasks across different datasets and tasks (detection, segmentation, fine-grained classification) are also used to evaluate the transferability of the pre-trained encoder.
4.2 MAIN RESULTS
Linear probing and finetune on ImageNet-1K. Table 1 shows the linear probing and fine-tuning accuracy on the ImageNet-1k dataset. We mainly compare our method with transformer-based contrastive learning methods (Caron et al., 2021; Chen et al., 2021) and MIM-based methods (He et al., 2021; Xie et al., 2021). For 300 epochs of pretraining, our ccMIM outperforms the MAE baseline by +0.8% top-1 accuracy. For 800 epochs of pretraining, ccMIM achieves 84.2% top-1 accuracy, outperforming MAE with 1600 epochs of pretraining by +0.6%. The concurrent method ConMIM uses two views obtained by non-trivial augmentation in the pre-training step, i.e., multiple views are required to enable its contrastive objective. Although ccMIM also has a contrastive objective, it only requires a single view. Besides, ConMIM gets 83.7% accuracy with 800 epochs of pretraining, only 0.2% higher than with 300 epochs of pretraining. The reason may be that the contrastive objective in ConMIM only improves the convergence speed but not the final performance. In contrast, ccMIM gets 84.2% top-1 accuracy with 800 epochs of pretraining (0.6% higher than MAE and than ccMIM with 300 epochs of pretraining), which may be because the contrastive objective in ccMIM helps select contextual patches and the final gain comes from recovering patches with richer contextual information (ConMIM masks patches at random, so it may mask patches with less contextual information). Besides, we also find that ccMIM surpasses MAE by +4.5% and +2.1% top-1 accuracy under the linear probing protocol. We attribute this to the global-discriminative objective added by ccMIM (see the ablation study in Sec. 4.3).
Object detection and segmentation on MS-COCO. Setups. Following CAE (Chen et al., 2022), we adopt Mask R-CNN (He et al., 2017) that produces bounding boxes and instance masks simultaneously, with the ViT as the backbone (See the supplementary file for training details). We apply the same object detection system to the methods. We report the box AP for object detection and the mask AP for instance segmentation. We utilize multi-scale training and resize the image with the size of the short side between 480 and 800 and the longer side no larger than 1,333. The batch size is 32. For ViT-S, the learning rate is 3e-4, the layer-wise decay rate is 0.75, and the drop path rate is 0.1. For ViT-B, the learning rate is 3e-4, the layer-wise decay rate is 0.75, and the drop path
Semantic segmentation on ADE20K. Setups. We evaluate semantic segmentation performances of the pre-trained models on ADE20K (Zhou et al., 2017), which includes 150 fine-grained semantic categories and 25k training data. In line with iBOT (Zhou et al., 2022), we report the results by three metrics: mean intersection of union (mIoU) averaged over all semantic categories, all pixel accuracy (aAcc), and mean class accuracy (mAcc). We pretrain ccMIM with 800 epochs and finetune it in FPN (Lin et al., 2017) with batch size 16. For ViT-B, the layer-wise decay rate is 0.65, and the drop path rate is 0.1. For other methods, we download their pretrained models and finetune them in our setting for fair comparisons and Table 3 shows the finetune results. The proposed ccMIM gets state-of-the-art accuracy among all the contrastive-based and MIM-based methods. Specifically, for 800 epochs pretraining, ccMIM outperforms MAE and CAE +1.7% and +0.5% mIoU, respectively.
Transfer classification on other datasets. In line with previous SSL methods, we conduct a set of experiments to evaluate the transfer accuracy on fine-grained classification datasets (iNaturalist (Van Horn et al., 2018) and Flowers). We mainly compare our method with ClusterFit (Yan et al., 2020), SNCA+ (Wu et al., 2018), Grafit (Touvron et al., 2021b), iBOT (Zhou et al., 2022), DeiT (Touvron et al., 2021a) and MAE (He et al., 2021). We finetune the pretrained encoder on iNaturalist datasets in 360 epochs with the learning rate 3e-4. For all SSL methods, we choose ViT-B/16 for comparisons. The results in Table 4 show that ccMIM obtains the highest accuracy compared with previous supervised and self-supervised methods. Specifically, ccMIM gets 0.5%, 0.2% and 0.2% higher accuracy than MAE on iNaturalist2019, iNaturalist2018 and Flowers-102, respectively.
4.3 ABLATION STUDY
For λ > 0.2, the accuracy drops as λ increases. When λ > 0.8, ccMIM with L_NCE obtains lower accuracy than MAE; we attribute this to the fact that the gain of ccMIM comes from contextual masking, while the contrastive loss is only used to select semantically richer patches and not to directly improve accuracy. When λ = 0, there is no regularization on the [CLS] token, leading to lower accuracy than λ = 0.1 and λ = 0.2.
Dividing strategy     | 100 epochs            | 400 epochs
                      | Linear    Finetune    | Linear    Finetune
random                | 33.6      79.4        | 63.1      82.9
block-wise            | 32.9      79.2        | 62.0      82.8
importance            | 31.1      78.8        | 66.8      83.6
importance sampling   | 41.1      80.5        | 67.4      83.8
For the block-wise dividing strategy, we mask one block of patches at a time. For each block, we set the minimum number of patches to 16. Then we randomly choose an aspect ratio for the masking block. We repeat the above two steps until obtaining enough masked patches. We pretrain ccMIM distributed over 32 GPUs for 100 and 400 epochs with a batch size of 1024. We report linear and finetune accuracy in Table 5. In our experiments, the block-wise dividing strategy obtains slightly lower accuracy than random dividing, which is in line with the previous methods MAE (He et al., 2021) and SimMIM (Xie et al., 2021). We also find that with 100 epochs of pretraining, the importance-based dividing strategy obtains much lower accuracy than the random and block-wise methods. However, with 400 epochs of pretraining, the importance-based method obtains higher accuracy than the random and block-wise methods. We think this is because the importance-based strategy slows down convergence (patches with richer contextual information are more difficult to reconstruct). The importance-sampling-based method outperforms the other dividing methods in both 100- and 400-epoch pretraining. We conjecture that this is because, for the first few epochs, the variance of the attention values of different patches is small. Thus, with the random sampling operation, ccMIM also places some patches with less contextual information into the masked set. The reconstruction task is then not too difficult, leading to faster convergence than the importance-based strategy. For 400 epochs of pretraining, the [CLS] token can easily place patches with richer contextual information into the masked set, leading to higher accuracy (+4.3%) than the random and block-wise methods.
5 CONCLUSION
We have presented a novel contextual MIM framework dubbed ccMIM, whereby masked patch selection is aided by a contrast between the masked patches and unmasked ones. The reconstruction difficulty is increased as the masked patches are more semantically rich than random sampling as done in peer MIM methods. The resulting combined approach shows notable performance gain in both convergence speed and accuracy compared to baselines on public benchmarks across different tasks from classification to dense tasks including detection and segmentation.
A METHODOLOGY COMPARISON
To better position our approach, we list recent contrastive and mask image modeling methods in Tab. 6.
B MASK ILLUSTRATION
For a better understanding of our work, we illustrate how ccMIM divides the patches into visible and masked sets in Fig. 3(a). We visualize four pretrained models, i.e., ccMIM at iterations 0, 1, and 800, and the importance-based dividing from Sec. 4.3. When Iter = 0, the [CLS] token is randomly initialized, so ccMIM divides the patches into the masked and visible sets at random. After 800 epochs of pretraining, ccMIM divides patches with richer contextual information into the masked set, while the remaining patches with less contextual information go to the visible set, increasing the difficulty of reconstruction. We further illustrate the reconstruction loss curves of MAE and ccMIM for the first 50 epochs of pretraining in Fig. 3(b). The loss values of ccMIM are higher than those of MAE, demonstrating that reconstruction is more difficult for ccMIM than for MAE.
C IMPLEMENTATION DETAILS
Pre-training hyper-parameters. In line with MAE, we use AdamW as the optimizer with a base learning rate of 1.5e-4 and weight decay of 0.05. We set the balance ratio λ = 0.2 and use L_NCE in Eq. 7 as the regularization loss by default. For computational tractability, we set the batch size to 1024, with distributed pretraining on 32 NVIDIA 1080Ti GPUs. Similar to previous methods in SSL (Xie et al., 2021), we adopt a cosine learning-rate scheduler and use only the random resized crop for augmentation. | 1. What is the focus and contribution of the paper on MIM and contrastive learning?
2. What are the strengths of the proposed approach, particularly in terms of its novel framework and improvement over MAE?
3. What are the weaknesses of the paper, especially regarding the extraction of the mask set feature and the comparison with the original MAE?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
1, Adopt importance sampling to select the masked patches with richer semantic information for reconstruction, instead of random sampling as done in previous MIM works. 2, Propose a new contrastive loss that aligns the tokens of the vision transformer extracted from the selected masked patches and the remaining ones. 3, The proposed contextual MIM and contrastive learning are synergetically performed in a loop with fast convergence and strong performance on downstream tasks without ad-hoc augmentations
Strengths And Weaknesses
Strength 1, Novel framework for synergizing MIM and contrastive learning in a closed loop. 2, Improvement over MAE: better than MAE with fewer training epochs. Weaknesses 1, Do you need to extract the masked-set feature? Will that slow down the training? 2, Why does s represent the importance of the patches? Will the global token carry global information at the beginning of training? 3, Can you compare the real training time? Your algorithm seems to be much more complicated than the original MAE (thus more time per epoch). 4, Which token do you use for the downstream classification task?
Clarity, Quality, Novelty And Reproducibility
Clarity, Quality, Novelty, and Reproducibility are OK. |
ICLR | Title
How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers
Abstract
Many optimizers have been proposed for training deep neural networks, and they often have multiple hyperparameters, which make it tricky to benchmark their performance. In this work, we propose a new benchmarking protocol to evaluate both end-to-end efficiency (training a model from scratch without knowing the best hyperparameter) and data-addition training efficiency (the previously selected hyperparameters are used for periodically re-training the model with newly collected data). For end-to-end efficiency, unlike previous work that assumes random hyperparameter tuning, which over-emphasizes the tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy. A human study is conducted to show that our evaluation protocol matches human tuning behavior better than the random search. For data-addition training, we propose a new protocol for assessing the hyperparameter sensitivity to data shift. We then apply the proposed benchmarking framework to 7 optimizers and various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining. Our results show that there is no clear winner across all the tasks.
1 INTRODUCTION
Due to the enormous data size and non-convexity, stochastic optimization algorithms have become widely used in training deep neural networks. In addition to Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), many variations such as Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014) have been proposed. Unlike classical, hyperparameter-free optimizers such as gradient descent and Newton's method1, stochastic optimizers often have multiple hyperparameters, including the learning rate and momentum coefficients. Those hyperparameters are critical not only to the speed, but also to the final performance, and are often hard to tune.
It is thus non-trivial to benchmark and compare optimizers in deep neural network training. And a benchmarking mechanism that focuses on the peak performance could lead to a false sense of improvement when developing new optimizers without considering tuning efforts. In this paper, we aim to rethink the role of hyperparameter tuning in benchmarking optimizers and develop new benchmarking protocols to reflect their performance in practical tasks better. We then benchmark seven recently proposed and widely used optimizers and study their performance on a wide range of tasks. In the following, we will first briefly review the two existing benchmarking protocols, discuss their pros and cons, and then introduce our contributions.
Benchmarking performance under the best hyperparameters A majority of previous benchmarks and comparisons on optimizers are based on the best hyperparameters. Wilson et al. (2017); Shah et al. (2018) made a comparison of SGD-based methods against adaptive ones under their best hyperparameter configurations. They found that SGD can outperform adaptive methods on several datasets under careful tuning. Most of the benchmarking frameworks for ML training also assume knowing the best hyperparameters for optimizers (Schneider et al., 2019; Coleman et al., 2017; Zhu et al., 2018). Also, the popular MLPerf benchmark evaluated the performance of optimizers under the best hyperparameter. It showed that ImageNet and BERT could be trained in 1 minute using the combination of good optimizers, good hyperparameters, and thousands of accelerators.
1The step sizes of gradient descent and Newton’s method can be automatically adjusted by a line search procedure (Nocedal & Wright, 2006).
Despite each optimizer’s peak performance being evaluated, benchmarking under the best hyperparameters makes the comparison between optimizers unreliable and fails to reflect their practical performance. First, the assumption of knowing the best hyperparameter is unrealistic. In practice, it requires a lot of tuning efforts to find the best hyperparameter, and the tuning efficiency varies greatly for different optimizers. It is also tricky to define the “best hyperparameter”, which depends on the hyperparameter searching range and grids. Further, since many of these optimizers are sensitive to hyperparameters, some improvements reported for new optimizers may come from insufficient tuning for previous work.
Benchmarking performance with random hyperparameter search It has been pointed out in several papers that tuning hyperparameter needs to be considered in evaluating optimizers (Schneider et al., 2019; Asi & Duchi, 2019), but having a formal evaluation protocol on this topic is nontrivial. Only recently, two papers Choi et al. (2019) and Sivaprasad et al. (2020) take hyperparameter tuning time into account when comparing SGD with Adam/Adagrad. However, their comparisons among optimizers are conducted on random hyperparameter search. We argue that these comparisons could over-emphasize the role of hyperparameter tuning, which could lead to a pessimistic and impractical performance benchmarking for optimizers. This is due to the following reasons: First, in the random search comparison, each bad hyperparameter has to run fully (e.g., 200 epochs). In practice, a user can always stop the program early for bad hyperparameters if having a limited time budget. For instance, if the learning rate for SGD is too large, a user can easily observe that SGD diverges in a few iterations and directly stops the current job. Therefore, the random search hypothesis will over-emphasize the role of hyperparameter tuning and does not align with a real user’s practical efficiency. Second, the performance of the best hyperparameter is crucial for many applications. For example, in many real-world applications, we need to re-train the model every day or every week with newly added data. So the best hyperparameter selected in the beginning might benefit all these re-train tasks rather than searching parameters from scratch. In addition, due to the expensive random search, random search based evaluation often focuses on the low-accuracy region2, while practically we care about the performance for getting reasonably good accuracy.
Our contributions Given that hyperparameter tuning is either under-emphasized (assuming the best hyperparameters) or over-emphasized (assuming random search) in existing benchmarking protocols and comparisons, we develop new evaluation protocols to compare optimizers in a way that better reflects real use cases. Our evaluation framework includes two protocols. First, to evaluate the end-to-end training efficiency for a user to train the best model from scratch, we develop an efficient evaluation protocol to compare the accuracy obtained under various time budgets, including the hyperparameter tuning time. Instead of using the random search algorithm, we adopt the Hyperband (Li et al., 2017) algorithm for hyperparameter tuning since it can stop early for bad configurations and better reflects the real running time required by a user. Further, we also propose to evaluate the data-addition training efficiency for a user to re-train the model with some newly added training data, with the knowledge of the best hyperparameters tuned on the previous training set. We also conduct human studies to examine how machine learning researchers tune parameters in optimizers and how that aligns with our proposed protocols.
Based on the proposed evaluation protocols, we study how much progress recently proposed algorithms have made compared with SGD and Adam. Note that most recently proposed optimizers have been shown to outperform SGD and Adam under the best hyperparameters on some particular tasks, but it is not clear whether the improvements remain significant when hyperparameter tuning is taken into account, and across various tasks. To this end, we conduct comprehensive experiments comparing state-of-the-art training algorithms, including SGD (Robbins & Monro, 1951), Adam (Kingma & Ba, 2014), RAdam (Liu et al., 2019), Yogi (Zaheer et al., 2018), LARS (You et al., 2017), LAMB (You et al., 2019), and Lookahead (Zhang et al., 2019), on a variety of training tasks including image classification, generative adversarial networks (GANs), sentence classification (BERT fine-tuning), reinforcement learning, and graph neural network training. Our main conclusions are: 1) On CIFAR-10 and CIFAR-100, all the optimizers including SGD are competitive. 2) Adaptive methods are generally better on more complex tasks (NLP, GCN, RL). 3) There is no clear winner among adaptive methods. Although RAdam is more stable than Adam across tasks, Adam is still a very competitive baseline even compared with recently proposed methods.
2For instance, Sivaprasad et al. (2020) only reaches < 50% accuracy in their CIFAR-100 comparisons.
2 RELATED WORK
Optimizers. Properties of deep learning make it natural to apply stochastic first-order methods, such as Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Severe issues such as zig-zag training trajectories and a uniform learning rate have been exposed, and researchers have devoted extensive attention to modifying SGD for improvement. Along this line of work, tremendous progress has been made, including SGDM (Qian, 1999), Adagrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014). These methods utilize momentum to stabilize and speed up training. In particular, Adam is regarded as the default algorithm due to its outstanding compatibility. Variants such as Amsgrad (Reddi et al., 2019), Adabound (Luo et al., 2019), Yogi (Zaheer et al., 2018), and RAdam (Liu et al., 2019) have since been proposed to resolve different drawbacks of Adam. Meanwhile, the requirement of large-batch training has inspired the development of LARS (You et al., 2017) and LAMB (You et al., 2019). Moreover, Zhang et al. (2019) put forward a framework called Lookahead to boost optimization performance by iteratively updating two sets of weights.
Hyperparameter tuning methods. Random search and grid search (Bergstra & Bengio, 2012) are basic hyperparameter tuning methods in the literature. However, the inefficiency of these methods stimulated the development of more advanced search strategies. Bayesian optimization methods, including Bergstra et al. (2011) and Hutter et al. (2011), accelerate random search by fitting a black-box function from hyperparameters to the expected objective in order to adaptively guide the search direction. Parallel to this line of work, Hyperband (Li et al., 2017) focuses on reducing the evaluation cost of each configuration and early-terminates relatively poor trials. Falkner et al. (2018) proposes BOHB to combine the benefits of both Bayesian optimization and Hyperband. All these methods still require huge computational resources. A recent work (Metz et al., 2020) tries to obtain a list of potential hyperparameters by meta-learning from thousands of representative tasks. We strike a balance between effectiveness and computing cost and leverage Hyperband in our evaluation protocol to compare a wider range of optimizers.
3 PROPOSED EVALUATION PROTOCOLS
In this section, we introduce the proposed evaluation framework for optimizers. We consider two evaluation protocols, each corresponding to an important training scenario:
• Scenario I (End-to-end training): This is the general training scenario, where a user is given an unfamiliar optimizer and task and the goal is to achieve the best validation performance after several rounds of trial and error. In this case, the evaluation needs to include hyperparameter tuning time. We develop an efficiency evaluation protocol to compare various optimizers in terms of CPE (Cumulative Performance-Early, defined in Section 3.1) and peak performance.
• Scenario II (Data-addition training): This is another useful scenario encountered in many applications, where the same model needs to be retrained regularly after collecting some fresh data. In this case, a naive solution is to reuse the previously optimal hyperparameters and retrain the model. However, since the data distribution has shifted, the result depends on each optimizer's sensitivity to that shift.
We describe the detailed evaluation protocol for each setting in the following subsections.
3.1 END-TO-END TRAINING EVALUATION PROTOCOL
Before introducing our evaluation protocol for Scenario I, we first formally define the concept of an optimizer and its hyperparameters. Definition 1. An optimizer is employed to solve a minimization problem minθ L(θ) and can be defined by a tuple o ∈ O = (U, Ω), where O contains all types of optimizers. U is a specific update rule and Ω = (ω1, . . . , ωN) ∈ R^N represents a vector of N hyperparameters. The search space of these hyperparameters is denoted by F. Given an initial parameter value θ0, together with a trajectory of the optimization procedure Ht = {θs, L(θs), ∇L(θs)}, the optimizer updates θ by
θt+1 = U(Ht,Ω).
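To make Definition 1 concrete, the following minimal sketch represents an optimizer as a pair of an update rule and a hyperparameter vector. The class and function names are illustrative choices of ours, not artifacts from the paper.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Optimizer:
    """A minimal sketch of Definition 1: an optimizer o = (U, Omega)."""
    update_rule: Callable[[List[Any], Dict[str, float]], Any]  # U: (H_t, Omega) -> theta_{t+1}
    hyperparams: Dict[str, float]                              # Omega = (omega_1, ..., omega_N)

    def step(self, history: List[Any]) -> Any:
        # history H_t holds past iterates theta_s, losses L(theta_s), and gradients
        return self.update_rule(history, self.hyperparams)

# Example: plain SGD expressed in this abstraction (illustrative only).
def sgd_update(history, hp):
    theta, _, grad = history[-1]          # latest (theta_t, loss, gradient)
    return theta - hp["lr"] * grad

sgd = Optimizer(update_rule=sgd_update, hyperparams={"lr": 0.1})
```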
We aim to evaluate the end-to-end time for a user to get the best model, including the hyperparameter tuning time. A recent work (Sivaprasad et al., 2020) assumes that a user conducts random search for finding the best hyperparameter setting. Still, we argue that the random search procedure will over-emphasize the importance of hyperparameters when tuning is considered — it assumes a user
never stops the training even if they observe divergence or bad results in the initial training phase, which is unrealistic.
Figure 1 illustrates why random search might not lead to a fair comparison of optimizers. In Figure 1, we are given two optimizers, A and B, and their corresponding loss as a function of the hyperparameter value. According to Sivaprasad et al. (2020), optimizer B is considered better than optimizer A under a constrained budget, since over most of the hyperparameter space B outperforms A. For instance, suppose we randomly sample the same hyperparameter setting for A and B. The final configuration ω∗r(B) found under this strategy can have a lower expected loss than that of ω∗r(A), as shown in Figure 1a. However, there exists a more practical search strategy which can invalidate this statement under the assumption of a limited search budget: a user can terminate a configuration trial early when it is trapped in bad results or diverging. Hence, we can observe in Figure 1b that for optimizer A, this strategy early-stops many configurations and only allows a limited number of trials to explore to the deeper stage. Therefore, the bad hyperparameters will not affect the overall efficiency of optimizer A too much. In contrast, for optimizer B, the performances of different hyperparameters are relatively satisfactory and hard to distinguish, resulting in similar and long termination times for each trial. Therefore, it may be easier for a practical search strategy p to find the best configuration ω∗p(A) of optimizer A than ω∗p(B), given the same constrained budget.
This example suggests that random search may over-emphasize parameter sensitivity when benchmarking optimizers. To better reflect a practical hyperparameter tuning scenario, our evaluation assumes a user applies Hyperband (Li et al., 2017), a simple but effective hyperparameter tuning scheme, to get the best model. Hyperband formulates hyperparameter optimization as a unique bandit problem. It accelerates random search through adaptive resource allocation and early stopping, as demonstrated in Figure 1b. Compared with its more complicated counterparts such as BOHB (Falkner et al., 2018), Hyperband requires fewer computing resources and performs similarly within a constrained budget. The algorithm is presented in Appendix A.
Despite different hyperparameter tuning algorithms, human tuning by experts is still regarded as the most effective. To verify and support that Hyperband is an effective method and even competitive with humans, we conduct a human study as follows: for image classification on CIFAR10, given 10 learning rate configurations of SGD in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10], participants are requested to search the best one at their discretion. Namely, they can stop or pause a trial any time and continue to evaluate a new configuration until they feel it has already reached the best performance. 10 participants are sampled randomly from Ph.D. students with computer science backgrounds.
We collect their tuning trajectories and average them as human performance, which is considered “optimal” in this human study. In Figure 2, we plot hyperparameter tuning curves for human tuning, Hyperband, random search, random search with the early stopping (ES) strategy of Sivaprasad et al. (2020), and Hyperband with ES. We find that Hyperband matches humans’ behavior better, while random search tends to get trapped in suboptimal configurations, although random search with early stopping can mitigate this issue to some extent. This finding shows the advantage of Hyperband over random
search regardless of early stopping, and justifies the use of Hyperband in optimizer benchmarking. More details of this human study can be found in Appendix B.
With Hyperband incorporated in end-to-end training, we assume that each configuration is run sequentially and record the best performance obtained at time step t as P_t. Specifically, P_t represents the evaluation metric for each task, e.g., accuracy for image classification and return for reinforcement learning. {P_t}_{t=1}^{T} forms a trajectory for plotting learning curves on the test set, as in Figure 3. Although it is intuitive to compare the performance of different optimizers from such figures, summarizing a learning curve into a quantifiable, scalar value can be more insightful for evaluation. Thus, as shown in Eq. 1, we use the λ-tunability defined in Sivaprasad et al. (2020) to further measure the performance of optimizers:
λ-tunability = Σ_{t=1}^{T} λ_t · P_t,   where Σ_t λ_t = 1 and λ_t > 0 for all t.   (1)
One intuitive way is to set λ_t = 1_{t=T} to determine which optimizer can reach the best model performance after the whole training procedure. However, merely considering the peak performance is not a good guide for the choice of optimizer. In practice, we tend to take the complete trajectory into account and put more emphasis on the early stage. Thus, we employ the Cumulative Performance-Early weighting scheme, where λ_t ∝ (T − t), to compute λ-tunability instead of the extreme assignment λ_t = 1_{t=T}. The resulting value is termed CPE for simplicity.
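As an illustration, the sketch below computes CPE from a recorded trajectory {P_t}. The weights follow λ_t ∝ (T − t + 1) so that every weight stays strictly positive, a small interpretation of Eq. 1 on our part, and the function name is our own.

```python
import numpy as np

def cpe(trajectory):
    """Cumulative Performance-Early (Eq. 1) for a best-so-far trajectory P_1..P_T.
    Weights decrease linearly with t, emphasizing performance reached early."""
    p = np.asarray(trajectory, dtype=float)
    T = len(p)
    weights = np.arange(T, 0, -1, dtype=float)   # T, T-1, ..., 1 (all positive)
    weights /= weights.sum()                     # the lambda_t sum to 1
    return float(np.sum(weights * p))

# A fast riser scores higher than a slow riser with a slightly better peak:
print(cpe([0.60, 0.80, 0.90, 0.91]))   # ~0.75
print(cpe([0.20, 0.50, 0.85, 0.92]))   # ~0.49
```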
We present our evaluation protocol in Algorithm 1. As we can see, end-to-end training with hyperparameter optimization is conducted for various optimizers on the given task. The trajectory {P_t}_{t=1}^{T} is recorded to compute the peak performance as well as the CPE value. Note that the procedure is repeated M times to obtain a reliable result. We use M = 3 in all experiments. More details on the time cost and acceleration of the algorithm can be found in Appendix E.
Algorithm 1 End-to-End Efficiency Evaluation Protocol
Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, feasible search space F
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search in F with the optimizer o using Hyperband on a
4:     Record the performance trajectory {P_t}_{t=1}^{T} explored by Hyperband
5:     Calculate the peak performance and CPE by Eq. 1
6:   end for
7:   Average peak and CPE values over M repetitions for the optimizer o
8: end for
9: Evaluate optimizers according to their peak and CPE values
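A compact sketch of Algorithm 1 as a driver loop is shown below; `run_hyperband` is an assumed callback that tunes one optimizer on the task and returns the best-so-far trajectory {P_t}, not a call into any specific library.

```python
import numpy as np

def evaluate_end_to_end(optimizers, run_hyperband, M=3):
    """Algorithm 1 as a sketch. `run_hyperband(optimizer)` is a hypothetical
    callable returning the best-so-far test metric P_1, ..., P_T, one value
    per unit of tuning budget."""
    results = {}
    for name, optimizer in optimizers.items():
        peaks, cpes = [], []
        for _ in range(M):                                # repeat for reliability
            p = np.asarray(run_hyperband(optimizer), dtype=float)
            T = len(p)
            w = np.arange(T, 0, -1, dtype=float)          # lambda_t decreasing in t
            peaks.append(float(p.max()))
            cpes.append(float(np.sum(w / w.sum() * p)))   # CPE as in Eq. 1
        results[name] = {"peak": float(np.mean(peaks)), "CPE": float(np.mean(cpes))}
    return results
```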
3.2 DATA-ADDITION TRAINING EVALUATION PROTOCOL
In Scenario II, we have a service (e.g., a search or recommendation engine) and we want to re-train the model every day or every week with some newly added training data. One may argue that an online learning algorithm should be used in this case, but in practice online learning is unstable, and industry still prefers this periodic retraining scheme, which is more stable.
In this scenario, once the best hyperparameters were chosen in the beginning, we can reuse them for every training, so no hyperparameter tuning is required and the performance (including both efficiency and test accuracy) under the best hyperparameter becomes important. However, an implicit assumption made in this process is that “the best hyperparameter will still work when the training task slightly changes”. This can be viewed as transferability of hyperparameters for a particular optimizer, and our second evaluation protocol aims to evaluate this practical scenario.
We simulate data-addition training with all classification tasks, and the evaluation protocol works as follows: 1) Extract a subset Dδ containing partial training data from the original full dataset D with a small ratio δ; 2) Conduct a hyperparameter search on Dδ to find the best setting under this scenario; 3) Use these hyperparameters to train the model on the complete dataset; 4) Observe the potential change of the ranking of various optimizers before and after data addition. For step 4) when comparing different optimizers, we will plot the training curve in the full-data training stage in Section 4, and also summarize the training curve using the CPE value. The detailed evaluation protocol is described in Algorithm 2.
Algorithm 2 Data-Addition Training Evaluation Protocol
Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A with a full dataset D, a split ratio δ
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search with the optimizer o using Hyperband on a with a partial dataset D_δ, and record the best hyperparameter setting Ω_partial found under this scenario
4:     Apply the optimizer with Ω_partial on D_δ and D, then save the training curves
5:   end for
6:   Average training curves of o over M repetitions to compute CPE
7: end for
8: Compare performance of different optimizers under data-addition training
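The sketch below mirrors Algorithm 2; `subsample`, `tune_on`, and `train_with` are assumed helper functions standing in for dataset splitting, Hyperband tuning, and full training, respectively.

```python
def evaluate_data_addition(optimizers, dataset, subsample, tune_on, train_with,
                           delta=0.3, M=3):
    """Algorithm 2 as a sketch. All helpers are hypothetical placeholders:
    - subsample(dataset, delta): returns the partial dataset D_delta
    - tune_on(optimizer, data): Hyperband search, returns the best hyperparameters
    - train_with(optimizer, hparams, data): returns a training curve
    """
    partial = subsample(dataset, delta)
    curves = {}
    for name, optimizer in optimizers.items():
        runs_partial, runs_full = [], []
        for _ in range(M):
            best_hparams = tune_on(optimizer, partial)        # tuned on D_delta only
            runs_partial.append(train_with(optimizer, best_hparams, partial))
            runs_full.append(train_with(optimizer, best_hparams, dataset))
        curves[name] = {"partial": runs_partial, "full": runs_full}
    return curves  # each averaged curve is then summarized with CPE as in Eq. 1
```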
4 EXPERIMENTAL RESULTS
Optimizers to be evaluated. As shown in Table 1, we consider 7 optimizers, including non-adaptive methods using only the first-order momentum and adaptive methods considering both first-order and second-order momentum. We also provide lists of tunable hyperparameters for the different optimizers in Table 1. Moreover, we consider the following two combinations of tunable hyperparameters to better investigate the performance of different optimizers: a) tuning only the initial learning rate with the others set to default values, and b) tuning the full list of hyperparameters. A detailed description of the optimizers as well as the default values and search ranges of these hyperparameters can be found in Appendix E. Note that we adopt a unified search space for a fair comparison, following Metz et al. (2020), to eliminate biases from specific ranges for different optimizers. The tuning budget of Hyperband is determined by three items: the maximum resource (in this paper we use epochs) per configuration R, the reduction factor η, and the number of configurations n_c. According to Li et al. (2017), a single Hyperband execution contains n_s = ⌊log_η(R)⌋ + 1 rounds of SuccessiveHalving, each referred to as a bracket. These brackets take strategies from least to most aggressive early stopping, and each one is designed to use approximately B = R · n_s resources, leading to a finite total budget. The number of randomly sampled configurations in one Hyperband run is also fixed and grows with R. Then, given R and η, n_c determines how many times Hyperband is repeated. We set η = 3, as this default value performs consistently well, and R to the number of epochs each task usually takes for a complete run. n_c is set to the number required for a single Hyperband execution for all tasks, except for BERT fine-tuning, where a larger number of configurations is necessary due to a relatively small R. In Appendix E, we give the assigned values of R, η, and n_c for each task.
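For a concrete sense of these quantities, the snippet below transcribes the formulas above into code; the function and variable names are our own.

```python
import math

def hyperband_budget(R, eta):
    """Bracket count and approximate epoch budget of one Hyperband execution."""
    s_max = math.floor(math.log(R, eta))
    n_s = s_max + 1                 # number of SuccessiveHalving brackets
    B = n_s * R                     # approximate resources used by each bracket
    total = n_s * B                 # rough total budget of one Hyperband run
    return n_s, B, total

# Example with the image-classification setting (R = 200 epochs, eta = 3):
print(hyperband_budget(200, 3))     # -> (5, 1000, 5000)
```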
Tasks for benchmarking. For a comprehensive and reliable assessment of optimizers, we consider a wide range of tasks in different domains. When evaluating end-to-end training efficiency, we implement our protocol on tasks covering several popular and promising applications in Table 2. Apart from common tasks in computer vision and natural language processing, we introduce two extra tasks in graph neural network training and reinforcement learning. For simplicity, we will use the dataset to represent each task in our subsequent tables of experimental results. (For the reinforcement learning task, we just use the environment name.) The detailed settings and parameters for each task can be found in Appendix D.
4.1 END-TO-END EFFICIENCY (SCENARIO I)
To evaluate end-to-end training efficiency, we adopt the protocol in Algorithm 1. Specifically, we record the average training trajectory under Hyperband, {P_t}_{t=1}^{T}, for each optimizer on the benchmarking tasks, where P_t is the evaluation metric for each task (e.g., accuracy, return). We visualize these trajectories in Figure 3 for CIFAR10 and CIFAR100, and report CPE and peak performance in Tables 3 and 9 respectively. More results for other tasks and the peak performance can be found in Appendix F. Besides, in Eq. 2 we compute the performance ratio r_{o,a} for each optimizer and each task, and then use the distribution function of this performance metric, called the performance profile ρ_o(τ), to summarize the performance of different optimizers over all the tasks. For tasks where a lower CPE is better, we instead use r_{o,a} = CPE_{o,a} / min_{o′}{CPE_{o′,a}} to guarantee r_{o,a} ≥ 1. The function ρ_o(τ) for all optimizers is presented in Figure 4. Based on the definition of the performance profile (Dolan & Moré, 2002), optimizers with large ρ_o(τ) are to be preferred. In particular, the value of ρ_o(1) is the probability that one optimizer will win over the rest and can be a reference for selecting the proper optimizer for an unknown task. We also provide a probabilistic performance profile summarizing the different optimizers in Figure 7 in Appendix F.
r_{o,a} = max{CPE_{o′,a} : o′ ∈ O} / CPE_{o,a},    ρ_o(τ) = (1/|A|) · |{a ∈ A : r_{o,a} ≤ τ}|.   (2)
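The sketch below computes the performance ratios r_{o,a} and the profile ρ_o(τ) from a table of CPE values; it assumes higher CPE is better, and the inverse ratio described above should be used for metrics where lower is better.

```python
import numpy as np

def performance_profile(cpe_table, taus):
    """cpe_table: dict {optimizer: {task: CPE}}, with higher CPE assumed better.
    Returns {optimizer: [rho_o(tau) for tau in taus]}, following Eq. 2."""
    optimizers = list(cpe_table)
    tasks = list(next(iter(cpe_table.values())))
    profiles = {}
    for o in optimizers:
        ratios = []
        for a in tasks:
            best = max(cpe_table[op][a] for op in optimizers)
            ratios.append(best / cpe_table[o][a])          # r_{o,a} >= 1
        profiles[o] = [float(np.mean([r <= tau for r in ratios])) for tau in taus]
    return profiles

# Toy usage with made-up CPE values on two tasks:
table = {"SGD": {"cifar10": 91.0, "mrpc": 80.0},
         "Adam": {"cifar10": 90.5, "mrpc": 84.0}}
print(performance_profile(table, taus=[1.0, 1.05]))
```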
Our findings are summarized below:
• It should be emphasized from Tables 3 and 9 that under our Hyperband-based protocol, SGD performs similarly to Adam in terms of efficiency as well as peak performance, and can even surpass it in some cases, such as training on CIFAR100. Under Hyperband, the best configuration of SGD is less tedious to find than under random search, because Hyperband early-stops bad runs so that they affect the search efficiency and final performance less.
• For image classification tasks all the methods are competitive, while adaptive methods tend to perform better on more complicated tasks (NLP, GCN, RL).
• There is no significant distinction among adaptive variants. The performance of adaptive optimizers tends to fall within 1% of the best result.
• According to the performance profile in Figure 4, RAdam reaches probability 1 at the smallest τ, and Adam is the second method to do so. This indicates that RAdam and Adam achieve relatively stable and consistent performance across these tasks.
4.2 DATA-ADDITION TRAINING (SCENARIO II)
We then conduct the evaluation of data-addition training based on the protocol in Algorithm 2. We choose four classification problems, on CIFAR10, CIFAR100, MRPC, and PPI, since the setting does not apply to RL. We search for the best hyperparameter configuration, denoted by Ω_partial, on the training subset with ratio δ = 0.3. Here we tune all hyperparameters. Then we directly apply Ω_partial on the full dataset for a complete training process. The training curves are shown in Figure 5, and we also summarize each training curve with CPE by Eq. 1 in Table 4. We have the following findings:
• There is no clear winner in data-addition training. RAdam outperforms the other optimizers on 2/4 tasks and is therefore slightly preferred, but the other optimizers except Lookahead are also competitive (within a 1% range) on at least 2/4 tasks.
• To investigate whether the optimizers' ranking changes when adding 70% more data, we compare the training curve on the original 30% of the data with the training curve on the full 100% in Figure 5. We observe that the ranking of optimizers changes slightly after data addition.
5 CONCLUSIONS AND DISCUSSIONS
In conclusion, we found no strong evidence that newly proposed optimizers consistently outperform Adam, while each of them may be good for some particular tasks. When choosing an optimizer for a specific task, people can refer to the results in Tables 3 and 9. If the task is contained in Table 2, they can directly choose the optimizer with the best CPE or best peak performance depending on their goal for the task (easy to tune or high final performance). On the other hand, even if the desired task is not covered, people can still gain some insights from the results of the most similar task in Table 2, or refer to the performance profile in Figure 4 and pick adaptive methods like Adam. Besides guiding the choice of optimizer, our protocol can also contribute to designing new optimizers: using it to evaluate a new optimizer shows whether the method brings an obvious improvement over existing optimizers, and it can serve as a routine for judging an optimizer's performance thoroughly.
In addition to the two proposed evaluation criteria, there could be other factors that affect the practical performance of an optimizer. First, memory consumption is becoming important for training large DNN models. For instance, although Lookahead performs well on certain tasks, it requires more memory than the other optimizers, restricting its practical use in some memory-constrained applications. Another important criterion is the scalability of optimizers. When training with a massively distributed system, optimizing performance in the large-batch regime (e.g., a 32K batch size for ImageNet) is important. The LARS and LAMB algorithms included in our study were developed for large-batch training. We believe this is another important dimension for comparing optimizers that is worth studying further.
A HYPERBAND
We present the full Hyperband algorithm in Algorithm 3; refer to Li et al. (2017) for more details.
Algorithm 3 Hyperband
Input: R, η
Initialization: s_max = ⌊log_η R⌋, B = (s_max + 1)R
1: for s ∈ {s_max, s_max − 1, . . . , 0} do
2:   n = ⌈(B/R) · η^s/(s + 1)⌉, r = R·η^{−s}
3:   // begin SuccessiveHalving with (n, r) inner loop
4:   T = random_get_configuration(n)
5:   for i ∈ {0, . . . , s} do
6:     n_i = ⌊n·η^{−i}⌋, r_i = r·η^{i}
7:     L = {run_then_return_val_loss(t, r_i) : t ∈ T}
8:     T = top_k(T, L, ⌊n_i/η⌋)
9:   end for
10: end for
return Hyperparameter configuration with the smallest loss seen so far
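A runnable sketch of the same procedure is given below; `sample_config` and `run_config` are assumed callbacks (drawing a random configuration and returning the validation loss after a given number of epochs), not functions from any specific library.

```python
import math

def hyperband(sample_config, run_config, R, eta=3):
    """Sketch of Algorithm 3. `sample_config()` draws one random hyperparameter
    setting; `run_config(cfg, epochs)` trains it for `epochs` and returns the
    validation loss. Both are hypothetical callbacks supplied by the user."""
    s_max = int(math.floor(math.log(R, eta)))
    B = (s_max + 1) * R
    best_cfg, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):
        n = int(math.ceil(B / R * eta**s / (s + 1)))
        r = R * eta**(-s)
        configs = [sample_config() for _ in range(n)]
        for i in range(s + 1):                       # SuccessiveHalving inner loop
            n_i = int(n * eta**(-i))
            r_i = int(round(r * eta**i))
            losses = [run_config(cfg, r_i) for cfg in configs]
            for cfg, loss in zip(configs, losses):
                if loss < best_loss:                 # track the best seen so far
                    best_cfg, best_loss = cfg, loss
            keep = max(int(n_i / eta), 1)
            ranked = sorted(zip(losses, configs), key=lambda p: p[0])
            configs = [cfg for _, cfg in ranked][:keep]
    return best_cfg, best_loss
```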
B DETAILS OF HUMAN STUDY
In this human study, 10 participants are sampled from Ph.D. students with computer science backgrounds (machine learning backgrounds specifically). They are recruited as follows: We first asked the administrators to distribute the program of this human study to Ph.D. students in machine learning labs in our institutions. Provided the population, we assumed that they had prior knowledge about some basic machine learning experiments, such as image classification on MNIST and CIFAR10. They were requested to conduct hyperparameter tuning of learning rate based on their knowledge. They were informed that the target experiment was image classification on CIFAR10 with SGD, and the search range of learning rate was in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10]. Each configuration was run for 200 epochs at most. Moreover, we also told them that they could pause any configuration if they wanted to evaluate others, and even stop the whole tuning process only when they thought the accuracy could not be further improved and the number of total tuning epochs exceeded 600 at the same time. We collected results from 17 people and we
determined validity by checking whether the length of each trajectory was greater than 600, removing 7 invalid trajectories. Finally, 10 trajectories remained, and we averaged them as the human performance shown in Figure 2.
C OPTIMIZERS
Notations. Given a vector of parameters θ ∈ R^d, we denote the sub-vector of its i-th layer’s parameters by θ^(i). {α_t}_{t=1}^{T} is the sequence of learning rates over an optimization procedure of horizon T. {φ_t, ψ_t}_{t=1}^{T} represents a sequence of functions used to calculate the first-order and second-order momentum of the gradient g_t, which are m_t and v_t respectively at time step t. Different optimization algorithms are usually specified by the choice of φ(·) and ψ(·). {r_t}_{t=1}^{T} is an additional sequence of adaptive terms that modify the magnitude of the learning rate in some methods. For algorithms using only the first-order momentum, µ is the momentum coefficient, while β1 and β2 are the coefficients used to compute the running averages m and v. ε is a small scalar (e.g., 1 × 10−8) used to prevent division by 0. Generic optimization framework. Based on these notations, we further develop a thorough generic optimization framework, including an extra adaptive term, in Algorithm 4. The debiasing term used in the original version of Adam is ignored for simplicity. Note that for {α_t}_{t=1}^{T}, different learning rate scheduling strategies can be adopted, and the choice of scheduler is also regarded as a tunable hyperparameter. Without loss of generality, in this paper we only consider a constant value and a linear decay (Shallue et al., 2018) in the following equation, introducing γ as a hyperparameter.
α_t = α_0 (constant);    α_t = α_0 − (1 − γ) α_0 · t/T (linear decay).
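As a concrete illustration, the schedule can be implemented as follows; the function name and signature are ours.

```python
def learning_rate(t, T, alpha0, gamma=1.0, schedule="constant"):
    """Constant or linear-decay learning rate as in the equation above.
    With gamma = 1.0 the linear schedule reduces to a constant; smaller gamma
    decays the rate toward gamma * alpha0 at step T."""
    if schedule == "constant":
        return alpha0
    return alpha0 - (1.0 - gamma) * alpha0 * t / T   # linear decay

# Example: decay from 0.1 to 0.01 over 200 epochs (gamma = 0.1).
print([round(learning_rate(t, 200, 0.1, gamma=0.1, schedule="linear"), 4)
       for t in (0, 100, 200)])    # -> [0.1, 0.055, 0.01]
```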
With this generic framework, we can summarize several popular optimization methods by explicitly specifying m_t, v_t, and r_t, as in Table 5. It should be clarified that Lookahead is an exception to the generic framework. In fact, it is more of a high-level mechanism that can be combined with any other optimizer. However, since, as stated in Zhang et al. (2019), this optimizer is robust to the inner optimization algorithm, k, and α_s in Algorithm 5, we still include Lookahead here, with Adam as the base optimizer, for a more convincing and comprehensive evaluation. We treat Lookahead as a special adaptive method and tune the same hyperparameters for it as for the other adaptive optimizers.
Algorithm 4 Generic framework of optimization methods
Input: parameter value θ_1, learning rate with scheduling {α_t}, sequence of functions {φ_t, ψ_t, χ_t}_{t=1}^{T} to compute m_t, v_t, and r_t respectively
1: for t = 1 to T do
2:   g_t = ∇f_t(θ_t)
3:   m_t = φ_t(g_1, · · · , g_t)
4:   v_t = ψ_t(g_1, · · · , g_t)
5:   r_t = χ_t(θ_t, m_t, v_t)
6:   θ_{t+1} = θ_t − α_t r_t m_t / √v_t
7: end for
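A minimal sketch of Algorithm 4 in code is shown below, instantiated with Adam-style moment estimates; the epsilon term and the particular φ/ψ/χ choices here are our own illustrative picks, not prescribed by the paper.

```python
import numpy as np

def generic_optimizer(grad_fn, theta, T, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Sketch of Algorithm 4 with Adam-style phi/psi and chi = 1 (no extra
    adaptive term), ignoring bias correction as the paper does."""
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for _ in range(T):
        g = grad_fn(theta)                  # g_t = grad f_t(theta_t)
        m = beta1 * m + (1 - beta1) * g     # m_t = phi_t(g_1, ..., g_t)
        v = beta2 * v + (1 - beta2) * g**2  # v_t = psi_t(g_1, ..., g_t)
        r = 1.0                             # r_t = chi_t(theta_t, m_t, v_t)
        theta = theta - alpha * r * m / (np.sqrt(v) + eps)
    return theta

# Example: minimize ||theta||^2 from a fixed starting point.
theta0 = np.array([1.0, -2.0])
print(generic_optimizer(lambda th: 2 * th, theta0, T=500, alpha=0.05))
```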
Algorithm 5 Lookahead Optimizer
Input: Initial parameters θ_0, objective function f, synchronization period k, slow-weights step size α_s, optimizer A
1: for t = 1, 2, . . . do
2:   Synchronize parameters θ̂_{t,0} ← θ_{t−1}
3:   for i = 1, 2, . . . , k do
4:     Sample minibatch of data d ∈ D
5:     θ̂_{t,i} ← θ̂_{t,i−1} + A(L, θ̂_{t,i−1}, d)
6:   end for
7:   Perform outer update θ_t ← θ_{t−1} + α_s(θ̂_{t,k} − θ_{t−1})
8: end for
return Parameters θ
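The wrapper structure of Lookahead can be sketched as below; `inner_step` is an assumed callback implementing one update of the inner optimizer A (e.g., an Adam step), and the fixed loop length is for illustration only.

```python
import numpy as np

def lookahead(inner_step, theta0, n_outer, k=5, alpha_s=0.5):
    """Sketch of Algorithm 5. `inner_step(theta)` is a hypothetical callback
    returning the parameters after one update of the inner optimizer A."""
    slow = np.array(theta0, dtype=float)
    for _ in range(n_outer):
        fast = slow.copy()
        for _ in range(k):                      # k fast updates with optimizer A
            fast = inner_step(fast)
        slow = slow + alpha_s * (fast - slow)   # outer (slow-weights) update
    return slow

# Example: the inner optimizer is plain gradient descent on f(theta) = ||theta||^2.
step = lambda th: th - 0.1 * (2 * th)
print(lookahead(step, [1.0, -2.0], n_outer=20))
```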
D TASK DESCRIPTION
We make a concrete description of tasks selected for our optimizer evaluation protocol:
• Image classification. For this task, we adopt a ResNet-50 (He et al., 2016) model on CIFAR10 and CIFAR100 with a batch size of 128 and a maximum of 200 epochs per trial.
• VAE. We use a vanilla variational autoencoder (Kingma & Welling, 2013) with five convolutional and five deconvolutional layers and a latent space of dimension 128 on CelebA. There are no dropout layers. Training uses a batch size of 144.
• GAN. We train SNGAN with the same network architecture and objective function with spectral normalization for CIFAR10 as in Miyato et al. (2018); the batch sizes of the generator and the discriminator are 128 and 64 respectively.
• Natural language processing. In this domain, we fine-tune RoBERTa-base on MRPC, one of the test suites in the GLUE benchmark. For each optimizer, we set the maximal exploration budget to 800 epochs. The batch size is 16 sentences.
• Graph learning. Among various graph learning problems, we choose node classification as a semi-supervised classification task. In GCN training, there are multiple ways to deal with the neighborhood explosion of stochastic optimizers. We choose Cluster-GCN (Chiang et al., 2019) as the backbone to handle neighborhood expansion, and PPI as the dataset.
• Reinforcement learning. We select Walker2d-v3 from OpenAI Gym (Brockman et al., 2016) as our training environment, and PPO (Schulman et al., 2017), implemented in OpenAI SpinningUp (Achiam, 2018), as the algorithm to be tuned. We use the same architecture for both the action-value network Q and the policy network π. We define 40,000 environment interactions as one epoch, with a batch size of 4,000. The return we report is the highest average test return of an epoch during training.
E IMPLEMENTATION DETAILS
Implementation details of our experiments are provided in this section. Specifically, we give the unified search space for all hyperparameters and their default values in Table 6. Note that we tune the learning rate decay factor for the image classification tasks when tuning every hyperparameter. For the task on MRPC, γ is tuned in all experiments. In the other cases, we only tune the original hyperparameters without a learning rate scheduler.
In addition, Hyperband parameter values for each task are listed in Table 7. These parameters are assigned based on properties of different tasks.
The time cost of our evaluation protocols depends on how much budget is available. Specifically, in our paper the unit of time budget is one epoch, so the total time is B_epoch · T_epoch, where B_epoch is the total available budget (in epochs) and T_epoch is the running time of one epoch. There is no additional computational cost, i.e., running our protocol once takes the same time as running one hyperparameter search with Hyperband. In our experiment on CIFAR10, we roughly evaluated 200 hyperparameter configurations in one Hyperband run, while the same time allows only about 50 configurations with random search.
Moreover, we can further accelerate our evaluation protocol by resampling, as shown in Algorithm 6. The basic idea is to keep a library of different hyperparameter settings. At the beginning, the library is empty. In each repetition, we sample the number of configurations required by one Hyperband run. During the simulation of Hyperband, we simply retrieve the value from the library if the desired epoch of the current configuration is already contained in it. Otherwise, we run this configuration as Hyperband dictates and store that piece of the trajectory in the library.
Algorithm 6 Alternative End-to-End Efficiency Evaluation Protocol
Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, search space F, library size S
1: for o ∈ O do
2:   Sample S configurations from F; initialize the library with an empty list for each setting
3:   for i = 1 to M do
4:     Simulate Hyperband with o on a using configurations re-sampled from the library
5:     if the desired accuracy is pre-computed in the library then
6:       Retrieve the value directly
7:     else
8:       Train normally, and store the values in the library
9:     end if
10:    Record the performance trajectory {P_t}_{t=1}^{T} explored by Hyperband
11:    Calculate the peak performance and CPE by Eq. 1
12:  end for
13:  Average peak and CPE values over M repetitions for the optimizer o
14: end for
15: Evaluate optimizers according to their peak and CPE values
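The library-based acceleration is essentially memoization of training curves; a sketch is given below, where `train_to_epoch` is an assumed callback that resumes a configuration and returns its metric at a given epoch.

```python
class CurveLibrary:
    """Sketch of the library in Algorithm 6: caches per-configuration training
    curves so repeated Hyperband simulations can reuse earlier computation.
    `train_to_epoch(cfg, epoch)` is a hypothetical callback."""

    def __init__(self, train_to_epoch):
        self.train_to_epoch = train_to_epoch
        self.curves = {}                      # config id -> {epoch: metric}

    def metric_at(self, cfg_id, cfg, epoch):
        cached = self.curves.setdefault(cfg_id, {})
        if epoch not in cached:               # only train when not pre-computed
            cached[epoch] = self.train_to_epoch(cfg, epoch)
        return cached[epoch]

# Usage: repeated Hyperband simulations query metric_at() instead of retraining,
# paying the training cost at most once per (configuration, epoch) pair.
```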
F ADDITIONAL RESULTS
More detailed experimental results are reported in this section.
F.1 IMPACT OF η
Since there is an extra hyperparameter, the reduction factor η in Hyperband, we conduct an experiment with different values (η = 2, 3, 4, 5) to observe the potential impact of this additional
hyperparameter on our evaluation. Specifically, we use Hyperband to tune the learning rate for three optimizers, SGD, Adam, and Lookahead, on CIFAR10, and the results are presented in Table 8. As we can see, although changing η may lead to different CPE values, the relative ranking among the three optimizers remains unchanged. Besides, they all achieve comparable peak performance at the end of training. Considering the efficiency of Hyperband, we choose η = 3, following the convention in Li et al. (2017), in all our experiments.
F.2 END-TO-END TRAINING
Table 9 shows the peak performance of the optimizers on each task. For GAN, due to time limits we only evaluate the optimizers when tuning the learning rate, and present the CPE and peak performance in Table 10. There is also an end-to-end training curve for GAN on CIFAR10 in Figure 6. End-to-end training curves for the remaining tasks are shown in Figures 9 and 10.
Optimizer     GAN-CIFAR10 ↓ (tune learning rate only)
SGD           113.25 (50.08)
Adam          77.04 (25.14)
RAdam         73.85 (19.61)
Yogi          76.80 (25.36)
LARS          157.71 (73.82)
LAMB          68.47 (25.55)
Lookahead     65.61 (20.40)
Besides the general performance profile, we present a probabilistic version described in Barreto et al. (2010) in Figure 7 to account for randomness. The probabilistic performance profile can be formulated as
ρ̄_o(τ) = (1/|A|) Σ_a P(r̄_{o,a} ≤ τ),    r̄_{o,a} ∼ N(µ_{o,a}/b_a, σ²_{o,a}/b_a²),   (3)
where µ_{o,a} and σ_{o,a} are the mean and standard deviation of the CPE of optimizer o on task a respectively, and b_a is the best expected CPE on a among all optimizers. It can be seen in Figure 7 that the probabilistic performance profile shows a similar trend to Figure 4.
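A sketch of Eq. 3, using the Gaussian CDF to evaluate P(r̄_{o,a} ≤ τ), is shown below; the means, standard deviations, and b_a values would come from repeated CPE measurements, and the choice of how "best" is selected per task is left as a parameter since it depends on whether lower or higher CPE is better.

```python
import numpy as np
from scipy.stats import norm

def probabilistic_profile(mu, sigma, taus, best=min):
    """Sketch of Eq. 3. `mu` and `sigma` are dicts {optimizer: {task: value}}
    with the mean and std of CPE over repetitions; `best` picks the best
    expected CPE per task (min when lower CPE is better, max otherwise)."""
    optimizers = list(mu)
    tasks = list(next(iter(mu.values())))
    b = {a: best(mu[o][a] for o in optimizers) for a in tasks}   # b_a per task
    return {
        o: [float(np.mean([norm.cdf(tau, loc=mu[o][a] / b[a], scale=sigma[o][a] / b[a])
                           for a in tasks]))
            for tau in taus]
        for o in optimizers
    }
```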
We also attach two end-to-end training trajectories on CIFAR10 with error bars in Figure 8. Since it is hard to distinguish the optimizers once the standard deviation is added, we instead report the std of CPE and peak performance in Tables 3 and 9.
F.3 DATA ADDITION TRAINING
We provide training curves for data-addition training on full MRPC and PPI dataset in Figure 11. | 1. What is the main contribution of the paper, and what are the two cases that the authors emphasize?
2. What are the strengths of the proposed evaluation framework, particularly in its ability to re-evaluate the role of hyperparameter tuning?
3. Do you have any concerns or questions about the algorithms used in the study, such as combining Algorithms 1 and 2?
4. How does the reviewer assess the effectiveness of the solution presented in the paper, and are there any suggestions for improving it?
5. What are the weaknesses of the paper, such as time cost and lack of clarity in certain explanations and formatting issues? | Review | Review
This paper mainly proposes an evaluation framework for optimizers in machine learning jobs. It points out that existing benchmarking often assumes either the best hyperparameters or random hyperparameter search. The proposed framework re-evaluates the role of hyperparameter tuning in machine learning. It mainly deals with two cases, end-to-end training efficiency and data-addition training efficiency. The major findings are as follows.
Random search might lead to unnecessary training when the loss does not converge. Therefore, given limited budgets, it is better to have a benchmark strategy for finding the best hyperparameter.
Training the same model repeatedly is necessary when new data arrive. However, the existing hyperparameters might not be optimal when the training data are updated. For end-to-end training efficiency, the authors assume users apply Hyperband and adopt \lambda-tunability to measure the performance of optimizers. For every optimizer, the framework computes the CPE value based on the complete trajectory and evaluates the optimizers. For data-addition training efficiency, the framework extracts a subset to tune the hyperparameters and then applies them to the entire dataset to evaluate the performance of the various optimizers.
Strengths:
The overall structure is clear with a detailed explanation of the limitation of the existing optimizer benchmark scheme. The two cases the authors emphasized are practical and common in real-life scenarios.
I think the idea of applying Hyperband instead of random search is quite reasonable based on the result in Figure 2.
The experiments are conducted on a variety of datasets and optimizers. The findings are well presented from different angles (task type, optimizer mechanism). Questions:
It seems Algorithms 1 and 2 are similar. Is it possible to combine them, e.g., run the model on the subset to find several good optimizers and then conduct the evaluation on the overall dataset?
I think overall the idea is good and easy to understand. However, I wonder if there is any way to support the effectiveness of the solution besides running different setups on the model.
Weaknesses:
I wonder about the time cost of adopting this evaluation framework, i.e., the time it takes to obtain a consolidated result on the optimizers' performance.
Some explanations regarding the details of the algorithms should be added beforehand. It would be better to explain all the variables mentioned in the algorithms for clear referencing. For example, in Eq. 1 the meaning of P is unclear; I assume it is accuracy. Similarly, for the variable M, it is not stated how it is chosen and how it affects the final performance.
Some minor suggestions. There are some spelling mistakes in the paper; for example, in Table 1 it should be "non-adaptive". The format and placement of figures and tables could also be improved, e.g., Figure 5 is mentioned on page 5 but located on page 8. It would be better to give a general picture of the results in Section 3 first and then the detailed experiment figures later.
ICLR | Title
How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers
Abstract
Many optimizers have been proposed for training deep neural networks, and they often have multiple hyperparameters, which make it tricky to benchmark their performance. In this work, we propose a new benchmarking protocol to evaluate both end-to-end efficiency (training a model from scratch without knowing the best hyperparameter) and data-addition training efficiency (the previously selected hyperparameters are used for periodically re-training the model with newly collected data). For end-to-end efficiency, unlike previous work that assumes random hyperparameter tuning, which over-emphasizes the tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy. A human study is conducted to show that our evaluation protocol matches human tuning behavior better than the random search. For data-addition training, we propose a new protocol for assessing the hyperparameter sensitivity to data shift. We then apply the proposed benchmarking framework to 7 optimizers and various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining. Our results show that there is no clear winner across all the tasks.
1 INTRODUCTION
Due to the enormous data size and non-convexity, stochastic optimization algorithms have become widely used in training deep neural networks. In addition to Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), many variations such as Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014) have been proposed. Unlike classical, hyperparameter free optimizers such as gradient descent and Newton’s method1, stochastic optimizers often hold multiple hyperparameters including learning rate and momentum coefficients. Those hyperparameters are critical not only to the speed, but also to the final performance, and are often hard to tune.
It is thus non-trivial to benchmark and compare optimizers in deep neural network training. And a benchmarking mechanism that focuses on the peak performance could lead to a false sense of improvement when developing new optimizers without considering tuning efforts. In this paper, we aim to rethink the role of hyperparameter tuning in benchmarking optimizers and develop new benchmarking protocols to reflect their performance in practical tasks better. We then benchmark seven recently proposed and widely used optimizers and study their performance on a wide range of tasks. In the following, we will first briefly review the two existing benchmarking protocols, discuss their pros and cons, and then introduce our contributions.
Benchmarking performance under the best hyperparameters A majority of previous benchmarks and comparisons on optimizers are based on the best hyperparameters. Wilson et al. (2017); Shah et al. (2018) made a comparison of SGD-based methods against adaptive ones under their best hyperparameter configurations. They found that SGD can outperform adaptive methods on several datasets under careful tuning. Most of the benchmarking frameworks for ML training also assume knowing the best hyperparameters for optimizers (Schneider et al., 2019; Coleman et al., 2017; Zhu et al., 2018). Also, the popular MLPerf benchmark evaluated the performance of optimizers under the best hyperparameter. It showed that ImageNet and BERT could be trained in 1 minute using the combination of good optimizers, good hyperparameters, and thousands of accelerators.
1The step sizes of gradient descent and Newton’s method can be automatically adjusted by a line search procedure (Nocedal & Wright, 2006).
Despite each optimizer’s peak performance being evaluated, benchmarking under the best hyperparameters makes the comparison between optimizers unreliable and fails to reflect their practical performance. First, the assumption of knowing the best hyperparameter is unrealistic. In practice, it requires a lot of tuning efforts to find the best hyperparameter, and the tuning efficiency varies greatly for different optimizers. It is also tricky to define the “best hyperparameter”, which depends on the hyperparameter searching range and grids. Further, since many of these optimizers are sensitive to hyperparameters, some improvements reported for new optimizers may come from insufficient tuning for previous work.
Benchmarking performance with random hyperparameter search It has been pointed out in several papers that tuning hyperparameter needs to be considered in evaluating optimizers (Schneider et al., 2019; Asi & Duchi, 2019), but having a formal evaluation protocol on this topic is nontrivial. Only recently, two papers Choi et al. (2019) and Sivaprasad et al. (2020) take hyperparameter tuning time into account when comparing SGD with Adam/Adagrad. However, their comparisons among optimizers are conducted on random hyperparameter search. We argue that these comparisons could over-emphasize the role of hyperparameter tuning, which could lead to a pessimistic and impractical performance benchmarking for optimizers. This is due to the following reasons: First, in the random search comparison, each bad hyperparameter has to run fully (e.g., 200 epochs). In practice, a user can always stop the program early for bad hyperparameters if having a limited time budget. For instance, if the learning rate for SGD is too large, a user can easily observe that SGD diverges in a few iterations and directly stops the current job. Therefore, the random search hypothesis will over-emphasize the role of hyperparameter tuning and does not align with a real user’s practical efficiency. Second, the performance of the best hyperparameter is crucial for many applications. For example, in many real-world applications, we need to re-train the model every day or every week with newly added data. So the best hyperparameter selected in the beginning might benefit all these re-train tasks rather than searching parameters from scratch. In addition, due to the expensive random search, random search based evaluation often focuses on the low-accuracy region2, while practically we care about the performance for getting reasonably good accuracy.
Our contributions Given that hyperparameter tuning is either under-emphasized (assuming the best hyperparameters) or over-emphasize (assuming random search) in existing benchmarking protocols and comparisons, we develop new evaluation protocols to compare optimizers to reflect the real use cases better. Our evaluation framework includes two protocols. First, to evaluate the end-to-end training efficiency for a user to train the best model from scratch, we develop an efficient evaluation protocol to compare the accuracy obtained under various time budgets, including the hyperparameter tuning time. Instead of using the random search algorithm, we adopt the Hyperband (Li et al., 2017) algorithm for hyperparameter tuning since it can stop early for bad configurations and better reflect the real running time required by a user. Further, we also propose to evaluate the data addition training efficiency for a user to re-train the model with some newly added training data, with the knowledge of the best hyperparameter tuned in the previous training set. We also conduct human studies to study how machine learning researchers are tuning parameters in optimizers and how that aligns with our proposed protocols.
Based on the proposed evaluation protocols, we study how much progress has recently proposed algorithms made compared with SGD or Adam. Note that most of the recent proposed optimizers have shown outperforming SGD and Adam under the best hyperparameters for some particular tasks, but it is not clear whether the improvements are still significant when considering hyper-parameter tuning, and across various tasks. To this end, we conduct comprehensive experiments comparing state-of-the-art training algorithms, including SGD (Robbins & Monro, 1951), Adam (Kingma & Ba, 2014), RAdam (Liu et al., 2019), Yogi (Zaheer et al., 2018), LARS (You et al., 2017), LAMB (You et al., 2019), and Lookahead (Zhang et al., 2019), on a variety of training tasks including image classification, generated adversarial networks (GANs), sentence classification (BERT fine-tuning), reinforcement learning and graph neural network training. Our main conclusions are: 1) On CIFAR-10 and CIFAR-100, all the optimizers including SGD are competitive. 2) Adaptive methods are generally better on more complex tasks (NLP, GCN, RL). 3) There is no clear winner among adaptive methods. Although RAdam is more stable than Adam across tasks, Adam is still a very competitive baseline even compared with recently proposed methods.
2For instance, Sivaprasad et al. (2020) only reaches < 50% accuracy in their CIFAR-100 comparisons.
2 RELATED WORK
Optimizers. Properties of deep learning make it natural to apply stochastic first order methods, such as Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Severe issues such as a zig-zag training trajectory and a uniform learning rate have been exposed, and researchers have then drawn extensive attention to modify the existing SGD for improvement. Along this line of work, tremendous progresses have been made including SGDM (Qian, 1999), Adagrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014). These methods utilize momentums to stabilize and speed up training procedures. Particularly, Adam is regarded as the default algorithm due to its outstanding compatibility. Then variants such as Amsgrad (Reddi et al., 2019), Adabound (Luo et al., 2019), Yogi (Zaheer et al., 2018), and RAdam (Liu et al., 2019) have been proposed to resolve different drawbacks of Adam. Meanwhile, the requirement of large batch training has inspired the development of LARS (You et al., 2017) and LAMB (You et al., 2019). Moreover, Zhang et al. (2019) has put forward a framework called Lookahead to boost optimization performance by iteratively updating two sets of weights.
Hyperparameter tuning methods. Random search and grid search (Bergstra & Bengio, 2012) can be a basic hyperparameter tuning method in the literature. However, the inefficiency of these methods stimulates the development of more advanced search strategies. Bayesian optimization methods including Bergstra et al. (2011) and Hutter et al. (2011) accelerate random search by fitting a black-box function of hyperparameter and the expected objective to adaptively guide the search direction. Parallel to this line of work, Hyperband (Li et al., 2017) focuses on reducing evaluation cost for each configuration and early terminates relatively worse trials. Falkner et al. (2018) proposes BOHB to combine the benefits of both Bayesian Optimization and Hyperband. All these methods still require huge computation resources. A recent work (Metz et al., 2020) has tried to obtain a list of potential hyperparameters by meta-learning from thousands of representative tasks. We strike a balance between effectiveness and computing cost and leverage Hyperband in our evaluation protocol to compare a wider range of optimizers.
3 PROPOSED EVALUATION PROTOCOLS
In this section, we introduce the proposed evaluation framework for optimizers. We consider two evaluation protocols, each corresponding to an important training scenario:
• Scenario I (End-to-end training): This is the general training scenario, where a user is given an unfamiliar optimizer and task, the goal is to achieve the best validation performance after several trials and errors. In this case, the evaluation needs to include hyperparameter tuning time. We develop an efficiency evaluation protocol to compare various optimizers in terms of CPE and peak performance. • Scenario II (Data-addition training): This is another useful scenario encountered in many applications, where the same model needs to be retrained regularly after collecting some fresh data. In this case, a naive solution is to reuse the previously optimal hyperparameters and retrain the model. However, since the distribution is shifted, the result depends on the sensitivity to that shift.
We describe the detailed evaluation protocol for each setting in the following subsections.
3.1 END-TO-END TRAINING EVALUATION PROTOCOL
Before introducing our evaluation protocol for Scenario I, we first formally define the concept of optimizer and its hyperparameters. Definition 1. An optimizer is employed to solve a minimization problem minθ L(θ) and can be defined by a tuple o ∈ O = (U ,Ω), where O contains all types of optimizers. U is a specific update rule and Ω = (ω1, . . . , ωN ) ∈ RN represents a vector ofN hyperparameters. Search space of these hyperparameters is denoted by F . Given an initial parameter value θ0, together with a trajectory of optimization procedure Ht = {θs,L(θs),∇L(θs)}, the optimizer updates θ by
θt+1 = U(Ht,Ω).
We aim to evaluate the end-to-end time for a user to get the best model, including the hyperparameter tuning time. A recent work (Sivaprasad et al., 2020) assumes that a user conducts random search for finding the best hyperparameter setting. Still, we argue that the random search procedure will over-emphasize the importance of hyperparameters when tuning is considered — it assumes a user
never stops the training even if they observe divergence or bad results in the initial training phase, which is unrealistic.
Figure 1 illustrates why random search might not lead to a fair comparison of optimizers. In Figure 1, we are given two optimizers, A and B, and their corresponding loss w.r.t. hyperparameter. According to Sivaprasad et al. (2020), optimizer B is considered better than optimizer A under a constrained budget since most regions of the hyperparameter space of A outperforms B. For instance, suppose we randomly sample the same hyperparamter setting for A and B. The final config ω∗r (B) found under this strategy can have a lower expected loss than that of ω∗r (A), as shown in Figure 1a. However, there exists a more practical search strategy which can invalidate this statement with the assumption of a limited searching budget: a user can early terminate a configuration trial when trapped in bad results or diverging. Hence, we can observe in Figure 1b that for optimizer A, this strategy earlystops many configurations and only allow a limited number of trials to explore to the deeper stage. Therefore, the bad hyperparameters will not affect the overall efficiency of Optimizer A too much. In contrast, for optimizer B, performances of different hyperparameters are relatively satisfactory and hard to distinguish, resulting in similar and long termination time for each trial. Therefore, it may be easier for a practical search strategy p to find the best configuration ω∗p(A) of optimizer A than ω∗p(B), given the same constrained budget.
This example suggests that random search may over-emphasize hyperparameter sensitivity when benchmarking optimizers. To better reflect a practical hyperparameter tuning scenario, our evaluation assumes a user applies Hyperband (Li et al., 2017), a simple but effective hyperparameter tuning scheme, to get the best model. Hyperband formulates hyperparameter optimization as a bandit problem and accelerates random search through adaptive resource allocation and early-stopping, as demonstrated in Figure 1b. Compared with its more complicated counterparts such as BOHB (Falkner et al., 2018), Hyperband requires less computing resources and performs similarly within a constrained budget. The algorithm is presented in Appendix A.
Although automated hyperparameter tuning algorithms exist, tuning by human experts is still regarded as the most effective. To verify that Hyperband is an effective method and even competitive with humans, we conduct a human study as follows: for image classification on CIFAR10, given 10 learning rate configurations of SGD in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10], participants are requested to search for the best one at their discretion. Namely, they can stop or pause a trial at any time and continue to evaluate new configurations until they feel the best performance has been reached. The 10 participants are sampled randomly from Ph.D. students with computer science backgrounds.
We collect their tuning trajectories and average them as the human performance, which is considered “optimal” in this human study. In Figure 2, we plot hyperparameter tuning curves for the human participants, Hyperband, random search, random search with the early stopping (ES) strategy of Sivaprasad et al. (2020), and Hyperband with ES. We find that Hyperband matches the humans' behavior better, while random search tends to get trapped in suboptimal configurations, although random search with early stopping can mitigate this issue to some extent. This finding shows the advantage of Hyperband over random search regardless of early stopping, and justifies the use of Hyperband in optimizer benchmarking. More details of this human study can be found in Appendix B.
With Hyperband incorporated in end-to-end training, we assume that each configuration is run sequentially and record the best performance obtained at time step t as P_t. Specifically, P_t represents the evaluation metric for each task, e.g., accuracy for image classification and return for reinforcement learning. {P_t}_{t=1}^{T} forms a trajectory for plotting learning curves on the test set, as in Figure 3. Although it is intuitive to compare the performance of different optimizers according to such figures, summarizing a learning curve into a quantifiable, scalar value can be more insightful for evaluation. Thus, as shown in Eq. 1, we use the λ-tunability defined in Sivaprasad et al. (2020) to further measure the performance of optimizers:
λ-tunability = ∑_{t=1}^{T} λ_t · P_t, where ∑_{t=1}^{T} λ_t = 1 and λ_t > 0 for all t.   (1)
One intuitive choice is to set λ_t = 1_{t=T}, which determines which optimizer reaches the best model performance at the end of the whole training procedure. However, merely considering the peak performance is not a good guide for the choice of optimizers. In practice, we tend to take into account the complete trajectory and place more emphasis on the early stage. Thus, we employ the Cumulative Performance-Early weighting scheme, where λ_t ∝ (T − t), to compute the λ-tunability instead of the extreme assignment λ_t = 1_{t=T}. The value obtained is termed CPE for simplicity.
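For reference, CPE can be computed from a recorded best-so-far trajectory as in the short sketch below (our own illustration of Eq. 1 with λ_t ∝ (T − t); the trajectory values are made up).

def cpe(trajectory):
    # Cumulative Performance-Early: weights decrease linearly with t,
    # so early progress contributes more than late progress.
    T = len(trajectory)
    weights = [T - t for t in range(1, T + 1)]
    return sum(w * p for w, p in zip(weights, trajectory)) / sum(weights)

print(cpe([0.10, 0.45, 0.60, 0.72, 0.74]))  # example accuracy trajectory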
We present our evaluation protocol in Algorithm 1. As we can see, end-to-end training with hyperparameter optimization is conducted for various optimizers on the given task. The trajectory {Pt}Tt=1 is recorded to compute the peak performance as well as CPE value. Note that the procedure is repeated M times to obtain a reliable result. We use M = 3 in all experiments. More details of time cost and acceleration of the algorithm can be found in Appendix E.
Algorithm 1 End-to-End Efficiency Evaluation Protocol
Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, feasible search space F
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search in F with the optimizer o using Hyperband on a
4:     Record the performance trajectory {P_t}_{t=1}^{T} explored by Hyperband
5:     Calculate the peak performance and CPE by Eq. 1
6:   end for
7:   Average peak and CPE values over M repetitions for the optimizer o
8: end for
9: Evaluate optimizers according to their peak and CPE values
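A compact Python rendering of Algorithm 1 is sketched below; run_hyperband is a hypothetical helper returning the best-so-far trajectory {P_t}, and cpe implements the weighting in Eq. 1.

def end_to_end_protocol(optimizers, task, search_space, run_hyperband, cpe, M=3):
    # Algorithm 1: average CPE and peak performance over M Hyperband runs.
    results = {}
    for opt in optimizers:
        cpes, peaks = [], []
        for _ in range(M):
            traj = run_hyperband(opt, task, search_space)  # {P_t}_{t=1..T}
            cpes.append(cpe(traj))
            peaks.append(max(traj))
        results[opt] = {"CPE": sum(cpes) / M, "peak": sum(peaks) / M}
    return results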
3.2 DATA-ADDITION TRAINING EVALUATION PROTOCOL
In Scenario II, we have a service (e.g., a search or recommendation engine) and we want to re-train the model every day or every week with some newly added training data. One may argue that an online learning algorithm should be used in this case, but in practice online learning is unstable, and industry still prefers this periodic retraining scheme, which is more stable.
In this scenario, once the best hyperparameters are chosen in the beginning, we can reuse them for every retraining, so no hyperparameter tuning is required, and the performance (including both efficiency and test accuracy) under the best hyperparameters becomes important. However, an implicit assumption made in this process is that “the best hyperparameters will still work when the training task slightly changes”. This can be viewed as the transferability of hyperparameters for a particular optimizer, and our second evaluation protocol aims to evaluate this practical scenario.
We simulate data-addition training with all classification tasks, and the evaluation protocol works as follows: 1) Extract a subset Dδ containing partial training data from the original full dataset D with a small ratio δ; 2) Conduct a hyperparameter search on Dδ to find the best setting under this scenario; 3) Use these hyperparameters to train the model on the complete dataset; 4) Observe the potential change of the ranking of various optimizers before and after data addition. For step 4) when comparing different optimizers, we will plot the training curve in the full-data training stage in Section 4, and also summarize the training curve using the CPE value. The detailed evaluation protocol is described in Algorithm 2.
Algorithm 2 Data-Addition Training Evaluation Protocol
Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A with a full dataset D, a split ratio δ
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search with the optimizer o using Hyperband on a with a partial dataset Dδ, and record the best hyperparameter setting Ωpartial found under this scenario
4:     Apply the optimizer with Ωpartial on Dδ and D, then save the training curves
5:   end for
6:   Average training curves of o over M repetitions to compute CPE
7: end for
8: Compare performance of different optimizers under data-addition training
4 EXPERIMENTAL RESULTS
Optimizers to be evaluated. As shown in Table 1, we consider 7 optimizers, including non-adaptive methods using only the first-order momentum and adaptive methods considering both first-order and second-order momentum. We also list the tunable hyperparameters for the different optimizers in Table 1. Moreover, we consider the following two combinations of tunable hyperparameters to better investigate the performance of different optimizers: a) tuning only the initial learning rate with the others set to default values, and b) tuning the full list of hyperparameters. A detailed description of the optimizers as well as the default values and search ranges of these hyperparameters can be found in Appendix E. Note that we adopt a unified search space for a fair comparison, following Metz et al. (2020), to eliminate biases from specific ranges for different optimizers. The tuning budget of Hyperband is determined by three items: the maximum resource (in this paper we use epochs) per configuration R, the reduction factor η, and the number of configurations nc. According to Li et al. (2017), a single Hyperband execution contains n_s = ⌊log_η(R)⌋ + 1 runs of SuccessiveHalving, each referred to as a bracket. These brackets take strategies from least to most aggressive early-stopping, and each one is designed to use approximately B = R · n_s resources, leading to a finite total budget. The number of randomly sampled configurations in one Hyperband run is also fixed and grows with R. Then, given R and η, nc determines the number of repetitions of Hyperband. We set η = 3, as this default value performs consistently well, and R to a value which each task usually takes for a complete run. For nc, it is assigned as what is required for a single Hyperband execution for all tasks, except for BERT fine-tuning, where a larger number of configurations is necessary due to a relatively small R. In Appendix E, we give the assigned values of R, η, and nc for each task.
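As a concrete illustration of the budget computation (our own arithmetic; the actual per-task values of R, η and nc are listed in Table 7), taking R = 200 epochs and η = 3 gives:

import math
R, eta = 200, 3
n_s = math.floor(math.log(R, eta)) + 1   # = 5 brackets of SuccessiveHalving
B = R * n_s                              # ~1000 epochs per bracket
print(n_s, B, n_s * B)                   # one Hyperband execution costs on the order of 5000 epochs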
Tasks for benchmarking. For a comprehensive and reliable assessment of optimizers, we consider a wide range of tasks in different domains. When evaluating end-to-end training efficiency, we implement our protocol on tasks covering several popular and promising applications in Table 2. Apart from common tasks in computer vision and natural language processing, we introduce two extra tasks in graph neural network training and reinforcement learning. For simplicity, we will use the dataset to represent each task in our subsequent tables of experimental results. (For the reinforcement learning task, we just use the environment name.) The detailed settings and parameters for each task can be found in Appendix D.
4.1 END-TO-END EFFICIENCY (SCENARIO I)
To evaluate end-to-end training efficiency, we adopt the protocol in Algorithm 1. Specifically, we record the average training trajectory with Hyperband, {P_t}_{t=1}^{T}, for each optimizer on the benchmarking tasks, where P_t is the evaluation metric for each task (e.g., accuracy, return). We visualize these trajectories in Figure 3 for CIFAR10 and CIFAR100, and report CPE and peak performance in Tables 3 and 9 respectively. More results for other tasks and the peak performance can be found in Appendix F. Besides, in Eq. 2 we compute the performance ratio r_{o,a} for each optimizer and each task, and then utilize the distribution function of a performance metric, called the performance profile ρ_o(τ), to summarize the performance of different optimizers over all the tasks. For tasks where a lower CPE is better, we instead use r_{o,a} = CPE_{o,a} / min{CPE_{o,a} : o ∈ O} to guarantee r_{o,a} ≥ 1. The function ρ_o(τ) for all optimizers is presented in Figure 4. Based on the definition of the performance profile (Dolan & Moré, 2002), optimizers with large probability ρ_o(τ) are to be preferred. In particular, the value of ρ_o(1) is the probability that one optimizer will win over the rest and can be a reference for selecting the proper optimizer for an unknown task. We also provide a probabilistic performance profile to summarize different optimizers in Figure 7 in Appendix F.
r_{o,a} = max{CPE_{o,a} : o ∈ O} / CPE_{o,a},   ρ_o(τ) = (1/|A|) · |{ a ∈ A : r_{o,a} ≤ τ }|.   (2)
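The quantities in Eq. 2 are straightforward to compute from a table of CPE values, as in the sketch below (our own illustration; for lower-is-better metrics the ratio is inverted as described above).

def performance_ratios(cpe, higher_is_better=True):
    # cpe[o][a] is the CPE of optimizer o on task a; returns the ratios r_{o,a}.
    tasks = next(iter(cpe.values())).keys()
    ratios = {o: {} for o in cpe}
    for a in tasks:
        vals = [cpe[o][a] for o in cpe]
        best = max(vals) if higher_is_better else min(vals)
        for o in cpe:
            ratios[o][a] = (best / cpe[o][a]) if higher_is_better else (cpe[o][a] / best)
    return ratios

def rho(ratios_o, tau):
    # Fraction of tasks on which optimizer o is within a factor tau of the best.
    return sum(r <= tau for r in ratios_o.values()) / len(ratios_o)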
Our findings are summarized below:
• It should be emphasized, from Tables 3 and 9, that under our protocol based on Hyperband, SGD performs similarly to Adam in terms of efficiency as well as peak performance, and can even surpass it in some cases, such as training on CIFAR100. Under Hyperband, the best configuration of SGD is less tedious to find than with random search, because Hyperband can early-stop bad runs, so they have less effect on the search efficiency and final performance.
• For image classification tasks, all the methods are competitive, while adaptive methods tend to perform better on more complicated tasks (NLP, GCN, RL).
• There is no significant distinction among the adaptive variants. The performance of adaptive optimizers tends to fall within 1% of the best result.
• According to the performance profile in Figure 4, RAdam reaches probability 1 with the smallest τ, and Adam is the second method to do so. This indicates that RAdam and Adam achieve relatively stable and consistent performance across these tasks.
4.2 DATA-ADDITION TRAINING (SCENARIO II)
We then conduct an evaluation of data-addition training based on the protocol in Algorithm 2. We choose four classification problems, on CIFAR10, CIFAR100, MRPC and PPI, since this setting does not apply to RL. We search for the best hyperparameter configuration, denoted by Ωpartial, on the sub-training set with ratio δ = 0.3. Here we tune all hyperparameters. Then we directly apply Ωpartial on the full dataset for a complete training process. The training curves are shown in Figure 5, and we also summarize each training curve with CPE by Eq. 1 in Table 4. We have the following findings:
• There is no clear winner in data-addition training. RAdam outperforms the other optimizers on 2 of the 4 tasks, so it is slightly preferred, but the other optimizers except Lookahead are also competitive (within a 1% range) on at least 2 of the 4 tasks.
• To investigate whether the optimizers' ranking changes when adding 70% data, we compare the training curve on the original 30% of the data versus the training curve on the full 100% of the data in Figure 5. We observe that the ranking of optimizers changes only slightly after data addition.
5 CONCLUSIONS AND DISCUSSIONS
In conclusion, we find no strong evidence that newly proposed optimizers consistently outperform Adam, while each of them may be good for some particular tasks. When deciding on the optimizer for a specific task, people can refer to the results in Tables 3 and 9. If the task is contained in Table 2, they can directly choose the one with the best CPE or best peak performance, depending on their goal for the task (easy to tune or high final performance). On the other hand, even if the desired task is not covered, people can still gain some insights from the results of the most similar task in Table 2, or refer to the performance profile in Figure 4 to pick adaptive methods like Adam. Besides guiding the choice of optimizer, our protocol can also contribute to designing new optimizers: using it to evaluate a new optimizer can show whether it has an obvious improvement over existing optimizers, and can serve as a routine for judging the performance of the optimizer thoroughly.
In addition to the two proposed evaluation criteria, there could be other factors that affect the practical performance of an optimizer. First, memory consumption is becoming important for training large DNN models. For instance, although Lookahead performs well on certain tasks, it requires more memory than the other optimizers, restricting its practical use in some memory-constrained applications. Another important criterion is the scalability of optimizers. When training with a massively distributed system, optimizing the performance of the large-batch regime (e.g., a 32K batch size for ImageNet) is important. The LARS and LAMB algorithms included in our study were developed for large-batch training. We believe this is another important metric for comparing optimizers that is worth studying further.
A HYPERBAND
We present the whole algorithm for Hyperband in Algorithm 3, and you can refer to Li et al. (2017) for more details.
Algorithm 3 Hyperband
Input: R, η
Initialization: s_max = ⌊log_η R⌋, B = (s_max + 1)R
1: for s ∈ {s_max, s_max − 1, . . . , 0} do
2:   n = ⌈(B/R) · η^s/(s + 1)⌉, r = R · η^{−s}
3:   // begin SuccessiveHalving with (n, r) inner loop
4:   T = random_get_configuration(n)
5:   for i ∈ {0, . . . , s} do
6:     n_i = ⌊n · η^{−i}⌋, r_i = r · η^{i}
7:     L = {run_then_return_val_loss(t, r_i) : t ∈ T}
8:     T = top_k(T, L, ⌊n_i/η⌋)
9:   end for
10: end for
return Hyperparameter configuration with the smallest loss seen so far
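For readers who prefer code, a minimal Python sketch of Algorithm 3 follows; get_random_configuration and run_then_return_val_loss stand in for the task-specific sampling and (partial) training routines, so this is an illustration rather than the exact implementation we used.

import math

def hyperband(get_random_configuration, run_then_return_val_loss, R, eta=3):
    # Returns the (loss, configuration) pair with the smallest loss seen so far.
    s_max = int(math.floor(math.log(R, eta)))
    B = (s_max + 1) * R
    best_loss, best_cfg = float("inf"), None
    for s in range(s_max, -1, -1):
        n = int(math.ceil(B / R * eta ** s / (s + 1)))  # configs in this bracket
        r = R * eta ** (-s)                             # initial resource per config
        T = [get_random_configuration() for _ in range(n)]
        for i in range(s + 1):                          # SuccessiveHalving inner loop
            n_i = int(n * eta ** (-i))
            r_i = r * eta ** i
            losses = [run_then_return_val_loss(cfg, r_i) for cfg in T]
            for loss, cfg in zip(losses, T):
                if loss < best_loss:
                    best_loss, best_cfg = loss, cfg
            keep = max(int(n_i / eta), 1)               # top_k(T, L, floor(n_i / eta))
            T = [cfg for _, cfg in sorted(zip(losses, T), key=lambda x: x[0])[:keep]]
    return best_loss, best_cfg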
B DETAILS OF HUMAN STUDY
In this human study, 10 participants are sampled from Ph.D. students with computer science backgrounds (machine learning backgrounds specifically). They were recruited as follows: we first asked the administrators to distribute the program of this human study to Ph.D. students in machine learning labs at our institutions. Given this population, we assumed that they had prior knowledge of some basic machine learning experiments, such as image classification on MNIST and CIFAR10. They were requested to conduct hyperparameter tuning of the learning rate based on their knowledge. They were informed that the target experiment was image classification on CIFAR10 with SGD, and that the search range of the learning rate was the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10]. Each configuration was run for 200 epochs at most. Moreover, we also told them that they could pause any configuration if they wanted to evaluate others, and could stop the whole tuning process only when they thought the accuracy could not be further improved and the number of total tuning epochs exceeded 600. We collected results from 17 people, determined validity by checking whether the length of each trajectory was greater than 600, and removed 7 invalid trajectories. The remaining 10 trajectories were averaged to give the human performance in Figure 2.
C OPTIMIZERS
Notations. Given a vector of parameters θ ∈ R^d, we denote the sub-vector of its i-th layer's parameters by θ^(i). {α_t}_{t=1}^{T} is a sequence of learning rates over an optimization procedure with horizon T. {φ_t, ψ_t}_{t=1}^{T} represents a sequence of functions to calculate the first-order and second-order momentum of the gradient g_t, which are m_t and v_t respectively at time step t. Different optimization algorithms are usually specified by the choice of φ(·) and ψ(·). {r_t}_{t=1}^{T} is an additional sequence of adaptive terms that modify the magnitude of the learning rate in some methods. For algorithms using only the first-order momentum, µ is the momentum coefficient, while β1 and β2 are the coefficients used to compute the running averages m and v. ε is a small scalar (e.g., 1 × 10−8) used to prevent division by 0. Generic optimization framework. Building on these notations, we further develop a generic optimization framework that includes an extra adaptive term, given in Algorithm 4. The debiasing term used in the original version of Adam is ignored for simplicity. Note that for {α_t}_{t=1}^{T}, different learning rate scheduling strategies can be adopted, and the choice of scheduler is also regarded as a tunable hyperparameter. Without loss of generality, in this paper we only consider a constant value and a linear decay (Shallue et al., 2018), as in the following equation, introducing γ as a hyperparameter.
α_t = α_0 (constant), or α_t = α_0 − (1 − γ) α_0 · t/T (linear decay).
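In code, the two schedules amount to the small helper below (γ and α_0 as defined above; γ = 1 recovers the constant schedule).

def learning_rate(t, T, alpha0, gamma=1.0):
    # alpha_t = alpha0 for a constant schedule (gamma = 1),
    # otherwise a linear decay from alpha0 down to gamma * alpha0 at t = T.
    return alpha0 - (1.0 - gamma) * alpha0 * t / T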
With this generic framework, we can summarize several popular optimization methods by explicitly specifying m_t, v_t and r_t in Table 5. It should be clarified that Lookahead is an exception to the generic framework; in fact, it is more of a high-level mechanism that can be combined with any other optimizer. However, since, as stated in Zhang et al. (2019), this optimizer is robust to the inner optimization algorithm, k, and α_s in Algorithm 5, we still include Lookahead here, with Adam as the base, for a more convincing and comprehensive evaluation. We consider Lookahead as a special adaptive method, and tune the same hyperparameters for it as for the other adaptive optimizers.
Algorithm 4 Generic framework of optimization methods
Input: parameter value θ1, learning rate with scheduling {α_t}, sequence of functions {φ_t, ψ_t, χ_t}_{t=1}^{T} to compute m_t, v_t, and r_t respectively.
1: for t = 1 to T do
2:   g_t = ∇f_t(θ_t)
3:   m_t = φ_t(g_1, · · · , g_t)
4:   v_t = ψ_t(g_1, · · · , g_t)
5:   r_t = χ_t(θ_t, m_t, v_t)
6:   θ_{t+1} = θ_t − α_t r_t m_t/√v_t
7: end for
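The loop in Algorithm 4 can be written compactly as below (a sketch with Adam-style choices for φ_t and ψ_t and the trivial r_t = 1; as in the text, the debiasing terms are omitted).

import numpy as np

def generic_step(theta, grad, state, alpha, beta1=0.9, beta2=0.999, eps=1e-8):
    # phi_t and psi_t: exponential moving averages of the gradient and its square.
    m = beta1 * state.get("m", np.zeros_like(theta)) + (1 - beta1) * grad
    v = beta2 * state.get("v", np.zeros_like(theta)) + (1 - beta2) * grad ** 2
    r = 1.0                                   # chi_t: no extra adaptive term
    state["m"], state["v"] = m, v
    return theta - alpha * r * m / (np.sqrt(v) + eps), state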
Algorithm 5 Lookahead Optimizer
Input: Initial parameters θ0, objective function f, synchronization period k, slow-weights step size α_s, optimizer A
1: for t = 1, 2, . . . do
2:   Synchronize parameters θ̂_{t,0} ← θ_{t−1}
3:   for i = 1, 2, . . . , k do
4:     Sample minibatch of data d ∈ D
5:     θ̂_{t,i} ← θ̂_{t,i−1} + A(L, θ̂_{t,i−1}, d)
6:   end for
7:   Perform outer update θ_t ← θ_{t−1} + α_s(θ̂_{t,k} − θ_{t−1})
8: end for
return Parameters θ
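A compact sketch of Algorithm 5, with the parameters stored in a single NumPy array and inner_step standing in for one step of the base optimizer A (e.g., the generic_step above, with its state held internally), is:

def lookahead(theta, inner_step, data_stream, k=5, alpha_s=0.5, n_outer=100):
    # Run k fast steps with the inner optimizer, then interpolate the slow weights.
    for _ in range(n_outer):
        fast = theta.copy()
        for _ in range(k):
            fast = inner_step(fast, next(data_stream))
        theta = theta + alpha_s * (fast - theta)   # slow-weight update
    return theta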
D TASK DESCRIPTION
We give a concrete description of the tasks selected for our optimizer evaluation protocol:
• Image classification. For this task, we adopt a ResNet-50 (He et al., 2016) model on CIFAR10 and CIFAR100 with a batch size of 128 and a maximum of 200 epochs per trial.
• VAE. We use a vanilla variational autoencoder (Kingma & Welling, 2013) with five convolutional and five deconvolutional layers and a latent space of dimension 128 on CelebA. There are no dropout layers. The model is trained with a batch size of 144.
• GAN. We train SNGAN with the same network architecture and objective function with spectral normalization for CIFAR10 as in Miyato et al. (2018); the batch sizes of the generator and the discriminator are 128 and 64 respectively.
• Natural language processing. In this domain, we fine-tune RoBERTa-base on MRPC, one of the test suites in the GLUE benchmark. For each optimizer, we set the maximal exploration budget to 800 epochs. The batch size is 16 sentences.
• Graph learning. Among various graph learning problems, we choose semi-supervised node classification. In GCN training, there are multiple ways to deal with the neighborhood explosion of stochastic optimizers; we choose Cluster-GCN (Chiang et al., 2019) as the backbone to handle neighborhood expansion, and PPI as the dataset.
• Reinforcement learning. We select Walker2d-v3 from OpenAI Gym (Brockman et al., 2016) as our training environment, and PPO (Schulman et al., 2017), implemented in OpenAI SpinningUp (Achiam, 2018), as the algorithm to be tuned. We use the same architecture for both the action value network Q and the policy network π. We define 40,000 environment interactions as one epoch, with a batch size of 4,000. The return we report is the highest average test return of an epoch during training.
E IMPLEMENTATION DETAILS
Implementation details of our experiments are provided in this section. Specifically, we give the unified search space for all hyperparameters and their default values in Table 6. Note that we tune the learning rate decay factor for the image classification tasks when tuning every hyperparameter. For the task on MRPC, γ is tuned in all experiments. In the other cases, we only tune the original hyperparameters without a learning rate scheduler.
In addition, Hyperband parameter values for each task are listed in Table 7. These parameters are assigned based on properties of different tasks.
Regarding the time cost of our evaluation protocols, it depends on how much budget is available. Specifically, in our paper, the unit of time budget is one epoch, so the total time is B_epoch · T_epoch, where B_epoch is the total available budget and T_epoch is the running time of one epoch. There is no additional computational cost, i.e., running our protocol once takes the same time as running one hyperparameter search with Hyperband. In our experiment on CIFAR10, we roughly evaluated 200 hyperparameter configurations in one Hyperband run, while the same time would only allow about 50 configurations with random search.
Moreover, we can further accelerate our evaluation protocol by resampling, as shown in Algorithm 6. The basic idea is that we keep a library of different hyperparameter settings. At the beginning, the library is empty. In each repetition, we sample the number of configurations required for running Hyperband once. During the simulation of Hyperband, we retrieve the value from the library if the desired epoch of the current configuration is already contained in the library. Otherwise, we run this configuration as in Hyperband and store the piece of the trajectory in the library.
Algorithm 6 Alternative End-to-End Efficiency Evaluation Protocol
Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, search space F, library size S
1: for o ∈ O do
2:   Sample S configurations from F; initialize the library with an empty list for each setting
3:   for i = 1 to M do
4:     Simulate Hyperband with o on a, using configurations re-sampled from the library
5:     if the desired accuracy is pre-computed in the library then
6:       Retrieve the value directly
7:     else
8:       Train normally, and store the values to the library
9:     end if
10:    Record the performance trajectory {P_t}_{t=1}^{T} explored by Hyperband
11:    Calculate the peak performance and CPE by Eq. 1
12:   end for
13:   Average peak and CPE values over M repetitions for the optimizer o
14: end for
15: Evaluate optimizers according to their peak and CPE values
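The resampling trick of Algorithm 6 amounts to memoizing partial training trajectories per configuration, as in the sketch below; train_for_epochs is a hypothetical routine that resumes training of a given configuration and returns its per-epoch validation metrics.

def run_with_library(config_id, n_epochs, library, train_for_epochs):
    # Return the metric after n_epochs, training only the epochs not yet cached.
    curve = library.setdefault(config_id, [])
    if len(curve) < n_epochs:
        curve.extend(train_for_epochs(config_id, start=len(curve), end=n_epochs))
    return curve[n_epochs - 1]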
F ADDITIONAL RESULTS
More detailed experimental results are reported in this section.
F.1 IMPACT OF η
Since there is an extra hyperparameter in Hyperband, the reduction factor η, we conduct an experiment with different values (η = 2, 3, 4, 5) to observe the potential impact of this additional hyperparameter on our evaluation. Specifically, we use Hyperband to tune the learning rate for three optimizers, SGD, Adam, and Lookahead, on CIFAR10, and the results are presented in Table 8. As we can see, although the change of η may lead to different CPE values, the relative ranking among the three optimizers remains unchanged. Besides, they all achieve comparable peak performance at the end of training. Considering the efficiency of Hyperband, we choose η = 3, following the convention in Li et al. (2017), in all our experiments.
F.2 END-TO-END TRAINING
Table 9 shows the peak performance of the optimizers on each task. For GAN, we only evaluate the optimizers with the learning rate tuned, due to the time limit, and present the CPE and peak performance in Table 10. There is also an end-to-end training curve for GAN on CIFAR10 in Figure 6. Figures for the end-to-end training curves of the remaining tasks are shown in Figures 9 and 10.
Optimizer     GAN-CIFAR10 ↓
Tune learning rate only:
SGD           113.25 (50.08)
Adam           77.04 (25.14)
RAdam          73.85 (19.61)
Yogi           76.80 (25.36)
LARS          157.71 (73.82)
LAMB           68.47 (25.55)
Lookahead      65.61 (20.40)
Besides the general performance profile, we present a probabilistic version described in Barreto et al. (2010) in Figure 7 to account for randomness. The probabilistic performance profile can be formulated as
ρ̄_o(τ) = (1/|A|) ∑_{a∈A} P(r̄_{o,a} ≤ τ),   r̄_{o,a} ∼ N(µ_{o,a}/b_a, σ²_{o,a}/b_a²),   (3)
where µ_{o,a} and σ_{o,a} are the mean and standard deviation of the CPE of optimizer o on task a respectively, and b_a is the best expected CPE on a among all optimizers. It can be seen in Figure 7 that the probabilistic performance profile shows a similar trend to Figure 4.
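Evaluating Eq. 3 only requires the Gaussian CDF; a sketch for one optimizer, given per-task CPE statistics, is shown below (mu, sigma and best are dictionaries keyed by task; illustrative only).

from statistics import NormalDist

def prob_performance_profile(mu, sigma, best, tau):
    # rho_bar_o(tau): average over tasks of P(rbar_{o,a} <= tau), with
    # rbar_{o,a} ~ N(mu_{o,a} / b_a, (sigma_{o,a} / b_a)^2).
    probs = [NormalDist(mu[a] / best[a], sigma[a] / best[a]).cdf(tau) for a in mu]
    return sum(probs) / len(probs)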
We also attach two end-to-end training trajectories on CIFAR10 with error bars in Figure 8. Since it is hard to distinguish the optimizers once the standard deviation is added, we instead report the std of CPE and peak performance in Tables 3 and 9.
F.3 DATA ADDITION TRAINING
We provide training curves for data-addition training on the full MRPC and PPI datasets in Figure 11. | 1. What are the main contributions and key ideas presented in the reviewed paper?
2. How does the reviewer assess the significance and quality of the proposed protocols for evaluating and comparing optimizers?
3. What are the concerns or limitations mentioned by the reviewer regarding the choice of Hyperband as a guiding algorithm for hyperparameter search?
4. How does the reviewer suggest improving the discussion on the effectiveness of Hyperband or other alternatives in terminating bad runs early?
5. What additional information does the reviewer request regarding the practical usage and applicability of the proposed procedures for choosing an optimizer?
6. Why does the reviewer consider CPE a suitable metric for evaluation, and what potential issues might arise from focusing solely on peak performance? | Review | Review
This work proposes two protocols for evaluating and comparing the quality of different optimizers. It points out that some existing, commonly used ways of comparing optimizers may over- or under-represent the amount of time that hyperparameter search can take. To fix this, it proposes using the Hyperband algorithm to guide the hyperparameter search.
I think the authors make a convincing argument that existing approaches for comparing optimizers can downplay the role of hyperparameter search, which can be significant and can vary greatly across optimizers. I think the paper presents a fairly convincing approach for comparing optimizers, and thus for evaluating new ones against existing ones.
I find the argument that Hyperband is a good choice because it more closely resembles human behaviour somewhat weak. Instead I would be more convinced by something showing that Hyperband (or whatever alternative) does a good job of terminating bad runs early, since this is the point of using a bandit algorithm over random search.
I would also like to see some discussion of how others could use the proposed procedures when deciding which optimizer to choose for their task.
Finally, I think the authors could do a better job of explaining why CPE is the right metric to use (i.e., why is considering peak performance not a good choice?)
In general I like this work and recommend accepting it. However, I think it could be strengthened by more discussion of how this could be of use to the community in the future. This is especially important given that no one optimizer seems to be universally best.
ICLR | Title
How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers
Abstract
Many optimizers have been proposed for training deep neural networks, and they often have multiple hyperparameters, which make it tricky to benchmark their performance. In this work, we propose a new benchmarking protocol to evaluate both end-to-end efficiency (training a model from scratch without knowing the best hyperparameter) and data-addition training efficiency (the previously selected hyperparameters are used for periodically re-training the model with newly collected data). For end-to-end efficiency, unlike previous work that assumes random hyperparameter tuning, which over-emphasizes the tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy. A human study is conducted to show that our evaluation protocol matches human tuning behavior better than the random search. For data-addition training, we propose a new protocol for assessing the hyperparameter sensitivity to data shift. We then apply the proposed benchmarking framework to 7 optimizers and various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining. Our results show that there is no clear winner across all the tasks.
1 INTRODUCTION
Due to the enormous data size and non-convexity, stochastic optimization algorithms have become widely used in training deep neural networks. In addition to Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), many variations such as Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014) have been proposed. Unlike classical, hyperparameter free optimizers such as gradient descent and Newton’s method1, stochastic optimizers often hold multiple hyperparameters including learning rate and momentum coefficients. Those hyperparameters are critical not only to the speed, but also to the final performance, and are often hard to tune.
It is thus non-trivial to benchmark and compare optimizers in deep neural network training. And a benchmarking mechanism that focuses on the peak performance could lead to a false sense of improvement when developing new optimizers without considering tuning efforts. In this paper, we aim to rethink the role of hyperparameter tuning in benchmarking optimizers and develop new benchmarking protocols to reflect their performance in practical tasks better. We then benchmark seven recently proposed and widely used optimizers and study their performance on a wide range of tasks. In the following, we will first briefly review the two existing benchmarking protocols, discuss their pros and cons, and then introduce our contributions.
Benchmarking performance under the best hyperparameters A majority of previous benchmarks and comparisons on optimizers are based on the best hyperparameters. Wilson et al. (2017); Shah et al. (2018) made a comparison of SGD-based methods against adaptive ones under their best hyperparameter configurations. They found that SGD can outperform adaptive methods on several datasets under careful tuning. Most of the benchmarking frameworks for ML training also assume knowing the best hyperparameters for optimizers (Schneider et al., 2019; Coleman et al., 2017; Zhu et al., 2018). Also, the popular MLPerf benchmark evaluated the performance of optimizers under the best hyperparameter. It showed that ImageNet and BERT could be trained in 1 minute using the combination of good optimizers, good hyperparameters, and thousands of accelerators.
1The step sizes of gradient descent and Newton’s method can be automatically adjusted by a line search procedure (Nocedal & Wright, 2006).
Despite each optimizer’s peak performance being evaluated, benchmarking under the best hyperparameters makes the comparison between optimizers unreliable and fails to reflect their practical performance. First, the assumption of knowing the best hyperparameter is unrealistic. In practice, it requires a lot of tuning efforts to find the best hyperparameter, and the tuning efficiency varies greatly for different optimizers. It is also tricky to define the “best hyperparameter”, which depends on the hyperparameter searching range and grids. Further, since many of these optimizers are sensitive to hyperparameters, some improvements reported for new optimizers may come from insufficient tuning for previous work.
Benchmarking performance with random hyperparameter search It has been pointed out in several papers that tuning hyperparameter needs to be considered in evaluating optimizers (Schneider et al., 2019; Asi & Duchi, 2019), but having a formal evaluation protocol on this topic is nontrivial. Only recently, two papers Choi et al. (2019) and Sivaprasad et al. (2020) take hyperparameter tuning time into account when comparing SGD with Adam/Adagrad. However, their comparisons among optimizers are conducted on random hyperparameter search. We argue that these comparisons could over-emphasize the role of hyperparameter tuning, which could lead to a pessimistic and impractical performance benchmarking for optimizers. This is due to the following reasons: First, in the random search comparison, each bad hyperparameter has to run fully (e.g., 200 epochs). In practice, a user can always stop the program early for bad hyperparameters if having a limited time budget. For instance, if the learning rate for SGD is too large, a user can easily observe that SGD diverges in a few iterations and directly stops the current job. Therefore, the random search hypothesis will over-emphasize the role of hyperparameter tuning and does not align with a real user’s practical efficiency. Second, the performance of the best hyperparameter is crucial for many applications. For example, in many real-world applications, we need to re-train the model every day or every week with newly added data. So the best hyperparameter selected in the beginning might benefit all these re-train tasks rather than searching parameters from scratch. In addition, due to the expensive random search, random search based evaluation often focuses on the low-accuracy region2, while practically we care about the performance for getting reasonably good accuracy.
Our contributions Given that hyperparameter tuning is either under-emphasized (assuming the best hyperparameters) or over-emphasized (assuming random search) in existing benchmarking protocols and comparisons, we develop new evaluation protocols to compare optimizers in a way that better reflects real use cases. Our evaluation framework includes two protocols. First, to evaluate the end-to-end training efficiency for a user to train the best model from scratch, we develop an efficient evaluation protocol to compare the accuracy obtained under various time budgets, including the hyperparameter tuning time. Instead of using the random search algorithm, we adopt the Hyperband (Li et al., 2017) algorithm for hyperparameter tuning, since it can stop early for bad configurations and better reflects the real running time required by a user. Further, we also propose to evaluate the data-addition training efficiency for a user to re-train the model with some newly added training data, with the knowledge of the best hyperparameters tuned on the previous training set. We also conduct human studies to examine how machine learning researchers tune the parameters of optimizers and how that aligns with our proposed protocols.
Based on the proposed evaluation protocols, we study how much progress recently proposed algorithms have made compared with SGD or Adam. Note that most of the recently proposed optimizers have been shown to outperform SGD and Adam under the best hyperparameters for some particular tasks, but it is not clear whether the improvements remain significant when hyperparameter tuning is considered, and across various tasks. To this end, we conduct comprehensive experiments comparing state-of-the-art training algorithms, including SGD (Robbins & Monro, 1951), Adam (Kingma & Ba, 2014), RAdam (Liu et al., 2019), Yogi (Zaheer et al., 2018), LARS (You et al., 2017), LAMB (You et al., 2019), and Lookahead (Zhang et al., 2019), on a variety of training tasks including image classification, generative adversarial networks (GANs), sentence classification (BERT fine-tuning), reinforcement learning and graph neural network training. Our main conclusions are: 1) On CIFAR-10 and CIFAR-100, all the optimizers including SGD are competitive. 2) Adaptive methods are generally better on more complex tasks (NLP, GCN, RL). 3) There is no clear winner among adaptive methods. Although RAdam is more stable than Adam across tasks, Adam is still a very competitive baseline even compared with recently proposed methods.
2For instance, Sivaprasad et al. (2020) only reaches < 50% accuracy in their CIFAR-100 comparisons.
2 RELATED WORK
Optimizers. The properties of deep learning make it natural to apply stochastic first-order methods, such as Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Severe issues such as a zig-zag training trajectory and a uniform learning rate have been exposed, and researchers have since devoted extensive attention to modifying SGD for improvement. Along this line of work, tremendous progress has been made, including SGDM (Qian, 1999), Adagrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014). These methods utilize momentum to stabilize and speed up training. In particular, Adam is regarded as the default algorithm due to its outstanding compatibility. Variants such as Amsgrad (Reddi et al., 2019), Adabound (Luo et al., 2019), Yogi (Zaheer et al., 2018), and RAdam (Liu et al., 2019) have then been proposed to resolve different drawbacks of Adam. Meanwhile, the requirement of large-batch training has inspired the development of LARS (You et al., 2017) and LAMB (You et al., 2019). Moreover, Zhang et al. (2019) put forward a framework called Lookahead to boost optimization performance by iteratively updating two sets of weights.
Hyperparameter tuning methods. Random search and grid search (Bergstra & Bengio, 2012) are the basic hyperparameter tuning methods in the literature. However, the inefficiency of these methods has stimulated the development of more advanced search strategies. Bayesian optimization methods, including Bergstra et al. (2011) and Hutter et al. (2011), accelerate random search by fitting a black-box model from hyperparameters to the expected objective to adaptively guide the search direction. Parallel to this line of work, Hyperband (Li et al., 2017) focuses on reducing the evaluation cost of each configuration and terminates relatively worse trials early. Falkner et al. (2018) propose BOHB to combine the benefits of both Bayesian optimization and Hyperband. All these methods still require huge computational resources. A recent work (Metz et al., 2020) tries to obtain a list of potential hyperparameters by meta-learning from thousands of representative tasks. We strike a balance between effectiveness and computing cost and leverage Hyperband in our evaluation protocol to compare a wider range of optimizers.
3 PROPOSED EVALUATION PROTOCOLS
In this section, we introduce the proposed evaluation framework for optimizers. We consider two evaluation protocols, each corresponding to an important training scenario:
• Scenario I (End-to-end training): This is the general training scenario, where a user is given an unfamiliar optimizer and task, and the goal is to achieve the best validation performance after several rounds of trial and error. In this case, the evaluation needs to include hyperparameter tuning time. We develop an efficiency evaluation protocol to compare various optimizers in terms of CPE and peak performance.
• Scenario II (Data-addition training): This is another useful scenario encountered in many applications, where the same model needs to be retrained regularly after collecting some fresh data. In this case, a naive solution is to reuse the previously optimal hyperparameters and retrain the model. However, since the data distribution has shifted, the result depends on the optimizer's sensitivity to that shift.
We describe the detailed evaluation protocol for each setting in the following subsections.
3.1 END-TO-END TRAINING EVALUATION PROTOCOL
Before introducing our evaluation protocol for Scenario I, we first formally define the concept of an optimizer and its hyperparameters. Definition 1. An optimizer is employed to solve a minimization problem minθ L(θ) and can be defined by a tuple o = (U, Ω) ∈ O, where O contains all types of optimizers, U is a specific update rule, and Ω = (ω1, . . . , ωN) ∈ R^N represents a vector of N hyperparameters. The search space of these hyperparameters is denoted by F. Given an initial parameter value θ0, together with the trajectory of the optimization procedure Ht = {θs, L(θs), ∇L(θs)}s≤t, the optimizer updates θ by
θt+1 = U(Ht,Ω).
We aim to evaluate the end-to-end time for a user to obtain the best model, including the hyperparameter tuning time. A recent work (Sivaprasad et al., 2020) assumes that a user conducts random search for finding the best hyperparameter setting. However, we argue that the random search procedure over-emphasizes the importance of hyperparameters when tuning is considered: it assumes a user never stops the training even if they observe divergence or bad results in the initial training phase, which is unrealistic.
Figure 1 illustrates why random search might not lead to a fair comparison of optimizers. In Figure 1, we are given two optimizers, A and B, and their corresponding loss w.r.t. hyperparameter values. According to Sivaprasad et al. (2020), optimizer B is considered better than optimizer A under a constrained budget, since B outperforms A over most regions of the hyperparameter space. For instance, suppose we randomly sample the same hyperparameter settings for A and B. The final configuration ω∗r(B) found under this strategy can have a lower expected loss than that of ω∗r(A), as shown in Figure 1a. However, there exists a more practical search strategy which can invalidate this statement under the assumption of a limited search budget: a user can terminate a configuration trial early when it is trapped in bad results or diverging. Hence, we can observe in Figure 1b that for optimizer A, this strategy early-stops many configurations and only allows a limited number of trials to explore the deeper stages. Therefore, the bad hyperparameters do not hurt the overall efficiency of optimizer A much. In contrast, for optimizer B, the performances of different hyperparameters are relatively satisfactory and hard to distinguish, resulting in similar and long termination times for each trial. Therefore, it may be easier for a practical search strategy p to find the best configuration ω∗p(A) of optimizer A than ω∗p(B), given the same constrained budget.
This example suggests that random search may over-emphasize hyperparameter sensitivity when benchmarking optimizers. To better reflect a practical hyperparameter tuning scenario, our evaluation assumes a user applies Hyperband (Li et al., 2017), a simple but effective hyperparameter tuning scheme, to get the best model. Hyperband formulates hyperparameter optimization as a bandit problem and accelerates random search through adaptive resource allocation and early-stopping, as demonstrated in Figure 1b. Compared with its more complicated counterparts such as BOHB (Falkner et al., 2018), Hyperband requires less computing resources and performs similarly within a constrained budget. The algorithm is presented in Appendix A.
Although automated hyperparameter tuning algorithms exist, tuning by human experts is still regarded as the most effective. To verify that Hyperband is an effective method and even competitive with humans, we conduct a human study as follows: for image classification on CIFAR10, given 10 learning rate configurations of SGD in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10], participants are requested to search for the best one at their discretion. Namely, they can stop or pause a trial at any time and continue to evaluate new configurations until they feel the best performance has been reached. The 10 participants are sampled randomly from Ph.D. students with computer science backgrounds.
We collect their tuning trajectories and average them as the human performance, which is considered “optimal” in this human study. In Figure 2, we plot hyperparameter tuning curves for the human participants, Hyperband, random search, random search with the early stopping (ES) strategy of Sivaprasad et al. (2020), and Hyperband with ES. We find that Hyperband matches the humans' behavior better, while random search tends to get trapped in suboptimal configurations, although random search with early stopping can mitigate this issue to some extent. This finding shows the advantage of Hyperband over random search regardless of early stopping, and justifies the use of Hyperband in optimizer benchmarking. More details of this human study can be found in Appendix B.
With Hyperband incorporated in end-to-end training, we assume that each configuration is run sequentially and record the best performance obtained at time step t as P_t. Specifically, P_t represents the evaluation metric for each task, e.g., accuracy for image classification and return for reinforcement learning. {P_t}_{t=1}^{T} forms a trajectory for plotting learning curves on the test set, as in Figure 3. Although it is intuitive to compare the performance of different optimizers according to such figures, summarizing a learning curve into a quantifiable, scalar value can be more insightful for evaluation. Thus, as shown in Eq. 1, we use the λ-tunability defined in Sivaprasad et al. (2020) to further measure the performance of optimizers:
λ-tunability = ∑_{t=1}^{T} λ_t · P_t, where ∑_{t=1}^{T} λ_t = 1 and λ_t > 0 for all t.   (1)
One intuitive choice is to set λ_t = 1_{t=T}, which determines which optimizer reaches the best model performance at the end of the whole training procedure. However, merely considering the peak performance is not a good guide for the choice of optimizers. In practice, we tend to take into account the complete trajectory and place more emphasis on the early stage. Thus, we employ the Cumulative Performance-Early weighting scheme, where λ_t ∝ (T − t), to compute the λ-tunability instead of the extreme assignment λ_t = 1_{t=T}. The value obtained is termed CPE for simplicity.
We present our evaluation protocol in Algorithm 1. As we can see, end-to-end training with hyperparameter optimization is conducted for various optimizers on the given task. The trajectory {Pt}Tt=1 is recorded to compute the peak performance as well as CPE value. Note that the procedure is repeated M times to obtain a reliable result. We use M = 3 in all experiments. More details of time cost and acceleration of the algorithm can be found in Appendix E.
Algorithm 1 End-to-End Efficiency Evaluation Protocol
Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, feasible search space F
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search in F with the optimizer o using Hyperband on a
4:     Record the performance trajectory {P_t}_{t=1}^{T} explored by Hyperband
5:     Calculate the peak performance and CPE by Eq. 1
6:   end for
7:   Average peak and CPE values over M repetitions for the optimizer o
8: end for
9: Evaluate optimizers according to their peak and CPE values
3.2 DATA-ADDITION TRAINING EVALUATION PROTOCOL
In Scenario II, we have a service (e.g., a search or recommendation engine) and we want to re-train the model every day or every week with some newly added training data. One may argue that an online learning algorithm should be used in this case, but in practice online learning is unstable, and industry still prefers this periodic retraining scheme, which is more stable.
In this scenario, once the best hyperparameters are chosen in the beginning, we can reuse them for every retraining, so no hyperparameter tuning is required, and the performance (including both efficiency and test accuracy) under the best hyperparameters becomes important. However, an implicit assumption made in this process is that “the best hyperparameters will still work when the training task slightly changes”. This can be viewed as the transferability of hyperparameters for a particular optimizer, and our second evaluation protocol aims to evaluate this practical scenario.
We simulate data-addition training with all classification tasks, and the evaluation protocol works as follows: 1) Extract a subset Dδ containing partial training data from the original full dataset D with a small ratio δ; 2) Conduct a hyperparameter search on Dδ to find the best setting under this scenario; 3) Use these hyperparameters to train the model on the complete dataset; 4) Observe the potential change of the ranking of various optimizers before and after data addition. For step 4) when comparing different optimizers, we will plot the training curve in the full-data training stage in Section 4, and also summarize the training curve using the CPE value. The detailed evaluation protocol is described in Algorithm 2.
Algorithm 2 Data-Addition Training Evaluation Protocol
Input: A set of optimizers O = {o : o = (U, Ω)}, task a ∈ A with a full dataset D, a split ratio δ
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search with the optimizer o using Hyperband on a with a partial dataset Dδ, and record the best hyperparameter setting Ωpartial found under this scenario
4:     Apply the optimizer with Ωpartial on Dδ and D, then save the training curves
5:   end for
6:   Average training curves of o over M repetitions to compute CPE
7: end for
8: Compare performance of different optimizers under data-addition training
4 EXPERIMENTAL RESULTS
Optimizers to be evaluated. As shown in Table 1, we consider 7 optimizers, including non-adaptive methods using only the first-order momentum and adaptive methods considering both first-order and second-order momentum. We also list the tunable hyperparameters for the different optimizers in Table 1. Moreover, we consider the following two combinations of tunable hyperparameters to better investigate the performance of different optimizers: a) tuning only the initial learning rate with the others set to default values, and b) tuning the full list of hyperparameters. A detailed description of the optimizers as well as the default values and search ranges of these hyperparameters can be found in Appendix E. Note that we adopt a unified search space for a fair comparison, following Metz et al. (2020), to eliminate biases from specific ranges for different optimizers. The tuning budget of Hyperband is determined by three items: the maximum resource (in this paper we use epochs) per configuration R, the reduction factor η, and the number of configurations nc. According to Li et al. (2017), a single Hyperband execution contains n_s = ⌊log_η(R)⌋ + 1 runs of SuccessiveHalving, each referred to as a bracket. These brackets take strategies from least to most aggressive early-stopping, and each one is designed to use approximately B = R · n_s resources, leading to a finite total budget. The number of randomly sampled configurations in one Hyperband run is also fixed and grows with R. Then, given R and η, nc determines the number of repetitions of Hyperband. We set η = 3, as this default value performs consistently well, and R to a value which each task usually takes for a complete run. For nc, it is assigned as what is required for a single Hyperband execution for all tasks, except for BERT fine-tuning, where a larger number of configurations is necessary due to a relatively small R. In Appendix E, we give the assigned values of R, η, and nc for each task.
Tasks for benchmarking. For a comprehensive and reliable assessment of optimizers, we consider a wide range of tasks in different domains. When evaluating end-to-end training efficiency, we implement our protocol on tasks covering several popular and promising applications in Table 2. Apart from common tasks in computer vision and natural language processing, we introduce two extra tasks in graph neural network training and reinforcement learning. For simplicity, we will use the dataset to represent each task in our subsequent tables of experimental results. (For the reinforcement learning task, we just use the environment name.) The detailed settings and parameters for each task can be found in Appendix D.
4.1 END-TO-END EFFICIENCY (SCENARIO I)
To evaluate end-to-end training efficiency, we adopt the protocol in Algorithm 1. Specifically, we record the average training trajectory with Hyperband, {P_t}_{t=1}^{T}, for each optimizer on the benchmarking tasks, where P_t is the evaluation metric for each task (e.g., accuracy, return). We visualize these trajectories in Figure 3 for CIFAR10 and CIFAR100, and report CPE and peak performance in Tables 3 and 9 respectively. More results for other tasks and the peak performance can be found in Appendix F. Besides, in Eq. 2 we compute the performance ratio r_{o,a} for each optimizer and each task, and then utilize the distribution function of a performance metric, called the performance profile ρ_o(τ), to summarize the performance of different optimizers over all the tasks. For tasks where a lower CPE is better, we instead use r_{o,a} = CPE_{o,a} / min{CPE_{o,a} : o ∈ O} to guarantee r_{o,a} ≥ 1. The function ρ_o(τ) for all optimizers is presented in Figure 4. Based on the definition of the performance profile (Dolan & Moré, 2002), optimizers with large probability ρ_o(τ) are to be preferred. In particular, the value of ρ_o(1) is the probability that one optimizer will win over the rest and can be a reference for selecting the proper optimizer for an unknown task. We also provide a probabilistic performance profile to summarize different optimizers in Figure 7 in Appendix F.
r_{o,a} = max{CPE_{o,a} : o ∈ O} / CPE_{o,a},   ρ_o(τ) = (1/|A|) · |{ a ∈ A : r_{o,a} ≤ τ }|.   (2)
Our findings are summarized below:
• It should be emphasized, from Tables 3 and 9, that under our protocol based on Hyperband, SGD performs similarly to Adam in terms of efficiency as well as peak performance, and can even surpass it in some cases, such as training on CIFAR100. Under Hyperband, the best configuration of SGD is less tedious to find than with random search, because Hyperband can early-stop bad runs, so they have less effect on the search efficiency and final performance.
• For image classification tasks, all the methods are competitive, while adaptive methods tend to perform better on more complicated tasks (NLP, GCN, RL).
• There is no significant distinction among the adaptive variants. The performance of adaptive optimizers tends to fall within 1% of the best result.
• According to the performance profile in Figure 4, RAdam reaches probability 1 with the smallest τ, and Adam is the second method to do so. This indicates that RAdam and Adam achieve relatively stable and consistent performance across these tasks.
4.2 DATA-ADDITION TRAINING (SCENARIO II)
We then conduct an evaluation of data-addition training based on the protocol in Algorithm 2. We choose four classification problems, on CIFAR10, CIFAR100, MRPC and PPI, since this setting does not apply to RL. We search for the best hyperparameter configuration, denoted by Ωpartial, on the sub-training set with ratio δ = 0.3. Here we tune all hyperparameters. Then we directly apply Ωpartial on the full dataset for a complete training process. The training curves are shown in Figure 5, and we also summarize each training curve with CPE by Eq. 1 in Table 4. We have the following findings:
• There is no clear winner in data-addition training. RAdam outperforms the other optimizers on 2 of the 4 tasks, so it is slightly preferred, but the other optimizers except Lookahead are also competitive (within a 1% range) on at least 2 of the 4 tasks.
• To investigate whether the optimizers' ranking changes when adding 70% data, we compare the training curve on the original 30% of the data versus the training curve on the full 100% of the data in Figure 5. We observe that the ranking of optimizers changes only slightly after data addition.
5 CONCLUSIONS AND DISCUSSIONS
In conclusion, we find no strong evidence that newly proposed optimizers consistently outperform Adam, although each of them may be good for particular tasks. When deciding on an optimizer for a specific task, practitioners can refer to the results in Tables 3 and 9. If the task appears in Table 2, they can directly choose the optimizer with the best CPE or the best peak performance, depending on whether the goal is ease of tuning or high final performance. Even if the desired task is not covered, one can still gain insights from the results of the most similar task in Table 2, or refer to the performance profile in Figure 4 and pick an adaptive method such as Adam. Beyond choosing among existing optimizers, the protocol also helps in designing new ones: evaluating a new optimizer with our protocol shows whether it offers a clear improvement over existing optimizers and can serve as a routine for judging its performance thoroughly.
In addition to the two proposed evaluation criteria, there could be other factors that affect the practical performance of an optimizer. First, memory consumption is becoming important for training large DNN models. For instance, although Lookahead performs well on certain tasks, it requires more memory than other optimizers, restricting its use in memory-constrained applications. Another important criterion is the scalability of optimizers: when training on a massively distributed system, performance in the large-batch regime (e.g., a 32K batch size for ImageNet) matters, and the LARS and LAMB algorithms included in our study were developed for large-batch training. We believe this is another important dimension for comparing optimizers that is worth studying further.
A HYPERBAND
We present the whole algorithm for Hyperband in Algorithm 3, and you can refer to Li et al. (2017) for more details.
Algorithm 3 Hyperband
Input: R, η
Initialization: s_max = ⌊log_η R⌋, B = (s_max + 1)R
1: for s ∈ {s_max, s_max − 1, . . . , 0} do
2:   n = ⌈(B/R) · η^s/(s + 1)⌉, r = R·η^{−s}
3:   // begin SuccessiveHalving with (n, r) inner loop
4:   T = random_get_configuration(n)
5:   for i ∈ {0, . . . , s} do
6:     n_i = ⌊n·η^{−i}⌋, r_i = r·η^i
7:     L = {run_then_return_val_loss(t, r_i) : t ∈ T}
8:     T = top_k(T, L, ⌊n_i/η⌋)
9:   end for
10: end for
return Hyperparameter configuration with the smallest loss seen so far
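For readers who prefer code, a minimal Python sketch of the loop above is given below. The functions `sample_config` and `run_then_return_val_loss` are stand-ins that a user of the protocol would supply, and the toy usage at the bottom is purely illustrative.

```python
import math, random

def hyperband(sample_config, run_then_return_val_loss, R=81, eta=3):
    """Minimal Hyperband sketch: returns the configuration with the lowest loss seen.

    sample_config() -> a random hyperparameter configuration.
    run_then_return_val_loss(config, resource) -> validation loss after training
        `config` for `resource` units (here: epochs).
    """
    s_max = int(math.log(R, eta) + 1e-9)
    B = (s_max + 1) * R
    best_config, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):                  # brackets, most to least aggressive early-stopping
        n = math.ceil(B / R * eta ** s / (s + 1))   # initial number of configurations
        r = R * eta ** (-s)                         # initial resource per configuration
        T = [sample_config() for _ in range(n)]
        for i in range(s + 1):                      # SuccessiveHalving inner loop
            n_i = math.floor(n * eta ** (-i))
            r_i = r * eta ** i
            losses = [run_then_return_val_loss(t, r_i) for t in T]
            for t, l in zip(T, losses):
                if l < best_loss:
                    best_loss, best_config = l, t
            keep = max(1, math.floor(n_i / eta))    # keep the top-k configurations
            T = [t for _, t in sorted(zip(losses, T), key=lambda p: p[0])][:keep]
    return best_config, best_loss

# Toy usage: the "loss" is a noisy quadratic in log10(learning rate).
cfg = lambda: {"lr": 10 ** random.uniform(-8, 1)}
loss = lambda c, r: (math.log10(c["lr"]) + 2) ** 2 + random.random() / r
print(hyperband(cfg, loss, R=81, eta=3))
```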
B DETAILS OF HUMAN STUDY
In this human study, 10 participants are sampled from Ph.D. students with computer science backgrounds (machine learning backgrounds specifically). They are recruited as follows: We first asked the administrators to distribute the program of this human study to Ph.D. students in machine learning labs in our institutions. Provided the population, we assumed that they had prior knowledge about some basic machine learning experiments, such as image classification on MNIST and CIFAR10. They were requested to conduct hyperparameter tuning of learning rate based on their knowledge. They were informed that the target experiment was image classification on CIFAR10 with SGD, and the search range of learning rate was in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10]. Each configuration was run for 200 epochs at most. Moreover, we also told them that they could pause any configuration if they wanted to evaluate others, and even stop the whole tuning process only when they thought the accuracy could not be further improved and the number of total tuning epochs exceeded 600 at the same time. We collected results from 17 people and we
determined their validity by checking whether the length of each trajectory exceeded 600 epochs, and removed 7 invalid trajectories. The remaining 10 trajectories were averaged to obtain the human performance shown in Figure 2.
C OPTIMIZERS
Notation. Given a vector of parameters θ ∈ R^d, we denote the sub-vector of its i-th layer's parameters by θ^(i). {α_t}_{t=1}^T is the sequence of learning rates over an optimization horizon T. {φ_t, ψ_t}_{t=1}^T is a sequence of functions that compute the first-order and second-order momentum of the gradient g_t, denoted m_t and v_t respectively at time step t; different optimization algorithms are usually specified by the choice of φ(·) and ψ(·). {r_t}_{t=1}^T is an additional sequence of adaptive terms that modify the magnitude of the learning rate in some methods. For algorithms using only the first-order momentum, µ is the momentum coefficient, while β1 and β2 are the coefficients used to compute the running averages m and v. ε is a small scalar (e.g., 1 × 10^−8) used to prevent division by 0. Generic optimization framework. Building on this notation, we develop a generic optimization framework that includes an extra adaptive term, shown in Algorithm 4. The debiasing term used in the original version of Adam is omitted for simplicity. Note that for {α_t}_{t=1}^T, different learning rate scheduling strategies can be adopted, and the choice of scheduler is also regarded as a tunable hyperparameter. Without loss of generality, in this paper we only consider a constant value and a linear decay (Shallue et al., 2018), given by the following equation, which introduces γ as a hyperparameter.
\alpha_t = \begin{cases} \alpha_0, & \text{constant;} \\ \alpha_0 - (1-\gamma)\,\alpha_0\,\frac{t}{T}, & \text{linear decay.} \end{cases}
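A small sketch of this schedule (assuming the hyperparameters α0, γ, and the horizon T are given) is:

```python
def lr_schedule(alpha0, t, T, gamma=None):
    """Learning rate at step t: constant if gamma is None, otherwise linear decay
    from alpha0 down to gamma * alpha0 over the horizon T."""
    if gamma is None:
        return alpha0
    return alpha0 - (1 - gamma) * alpha0 * t / T

print([round(lr_schedule(0.1, t, 10, gamma=0.1), 4) for t in range(11)])
```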
With this generic framework, we can summarize several popular optimization methods by explicitly specifying m_t, v_t and r_t in Table 5. It should be clarified that Lookahead is an exception to the generic framework: it is better viewed as a high-level mechanism that can be combined with any other optimizer. However, since Zhang et al. (2019) report that this optimizer is robust to the choice of inner optimization algorithm, k, and α_s in Algorithm 5, we still include Lookahead here with Adam as its base optimizer for a more convincing and comprehensive evaluation. We treat Lookahead as a special adaptive method and tune the same hyperparameters for it as for the other adaptive optimizers.
Algorithm 4 Generic framework of optimization methods
Input: parameter value θ_1, learning rate with scheduling {α_t}, sequence of functions {φ_t, ψ_t, χ_t}_{t=1}^T to compute m_t, v_t, and r_t respectively.
1: for t = 1 to T do
2:   g_t = ∇f_t(θ_t)
3:   m_t = φ_t(g_1, · · · , g_t)
4:   v_t = ψ_t(g_1, · · · , g_t)
5:   r_t = χ_t(θ_t, m_t, v_t)
6:   θ_{t+1} = θ_t − α_t · r_t · m_t / √v_t
7: end for
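The following is a minimal sketch of Algorithm 4 in Python, with SGD-with-momentum and un-debiased Adam expressed as particular choices of φ, ψ, and χ; it is meant to illustrate the framework rather than reproduce the implementations used in our experiments. Layer-wise methods such as LARS and LAMB would mainly differ in the choice of χ.

```python
import numpy as np

def generic_step(theta, grad, state, phi, psi, chi, lr, eps=1e-8):
    """One step of Algorithm 4: theta <- theta - lr * r_t * m_t / (sqrt(v_t) + eps)."""
    state["m"] = phi(state.get("m"), grad)     # first-order momentum m_t
    state["v"] = psi(state.get("v"), grad)     # second-order momentum v_t
    r = chi(theta, state["m"], state["v"])     # extra adaptive term r_t
    return theta - lr * r * state["m"] / (np.sqrt(state["v"]) + eps), state

# SGD with momentum mu: m_t = mu * m + g, v_t = 1, r_t = 1.
sgdm = dict(
    phi=lambda m, g, mu=0.9: g if m is None else mu * m + g,
    psi=lambda v, g: np.ones_like(g),
    chi=lambda theta, m, v: 1.0,
)

# Adam without debiasing: m_t = b1*m + (1-b1)*g, v_t = b2*v + (1-b2)*g^2, r_t = 1.
adam = dict(
    phi=lambda m, g, b1=0.9: (1 - b1) * g if m is None else b1 * m + (1 - b1) * g,
    psi=lambda v, g, b2=0.999: (1 - b2) * g ** 2 if v is None else b2 * v + (1 - b2) * g ** 2,
    chi=lambda theta, m, v: 1.0,
)

# Toy usage: minimize f(theta) = ||theta||^2 / 2, whose gradient is theta itself.
theta, state = np.ones(3), {}
for _ in range(100):
    theta, state = generic_step(theta, theta, state, lr=0.1, **adam)
print(theta)   # the parameters should have moved close to zero
```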
Algorithm 5 Lookahead Optimizer
Input: initial parameters θ_0, objective function f, synchronization period k, slow-weights step size α_s, inner optimizer A
1: for t = 1, 2, . . . do
2:   Synchronize parameters θ̂_{t,0} ← θ_{t−1}
3:   for i = 1, 2, . . . , k do
4:     Sample minibatch of data d ∈ D
5:     θ̂_{t,i} ← θ̂_{t,i−1} + A(L, θ̂_{t,i−1}, d)
6:   end for
7:   Perform outer update θ_t ← θ_{t−1} + α_s(θ̂_{t,k} − θ_{t−1})
8: end for
return parameters θ
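A compact sketch of the Lookahead wrapper, with plain SGD as a stand-in inner optimizer A, could look as follows; it illustrates the update rule rather than the implementation used in our experiments.

```python
import numpy as np

def lookahead(theta0, grad_fn, base_step, k=5, alpha_s=0.5, outer_steps=50):
    """Sketch of Algorithm 5: wrap any base optimizer `base_step` with Lookahead.

    base_step(theta, grad) -> fast weights after one step of the inner optimizer A.
    """
    theta = theta0.copy()                          # slow weights
    for _ in range(outer_steps):
        fast = theta.copy()
        for _ in range(k):                         # k fast-weight updates
            fast = base_step(fast, grad_fn(fast))
        theta = theta + alpha_s * (fast - theta)   # slow-weight interpolation
    return theta

# Toy usage: plain SGD as the inner optimizer on f(theta) = ||theta||^2 / 2.
sgd = lambda theta, g, lr=0.1: theta - lr * g
print(lookahead(np.ones(3), grad_fn=lambda th: th, base_step=sgd))
```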
D TASK DESCRIPTION
We make a concrete description of tasks selected for our optimizer evaluation protocol:
• Image classification. For this task, we adopt a ResNet-50 (He et al., 2016) model on CIFAR10 and CIFAR100, with a batch size of 128 and a maximum of 200 epochs per trial.
• VAE. We use a vanilla variational autoencoder (Kingma & Welling, 2013) with five convolutional and five deconvolutional layers and a latent space of dimension 128 on CelebA. There are no dropout layers, and the model is trained with a batch size of 144.
• GAN. We train SNGAN with the same network architecture and objective function with spectral normalization as in Miyato et al. (2018) on CIFAR10; the batch sizes of the generator and the discriminator are 128 and 64 respectively.
• Natural language processing. In this domain, we fine-tune RoBERTa-base on MRPC, one of the test suites in the GLUE benchmark. For each optimizer, we set the maximum exploration budget to 800 epochs, with a batch size of 16 sentences.
• Graph learning. Among various graph learning problems, we choose node classification as a semi-supervised classification task. In GCN training, there are multiple ways to deal with the neighborhood explosion of stochastic optimizers; we choose Cluster-GCN (Chiang et al., 2019) as the backbone to handle neighborhood expansion, with PPI as the dataset.
• Reinforcement learning. We select Walker2d-v3 from OpenAI Gym (Brockman et al., 2016) as the training environment and PPO (Schulman et al., 2017), as implemented in OpenAI SpinningUp (Achiam, 2018), as the algorithm to be tuned. We use the same architecture for both the action-value network Q and the policy network π. We define 40,000 environment interactions as one epoch, with a batch size of 4,000. The reported return is the highest average test return of an epoch during training.
E IMPLEMENTATION DETAILS
Implementation details of our experiments are provided in this section. Specifically, we give the unified search space for all hyperparameters and their default values in Table 6. Note that we tune the learning rate decay factor for the image classification tasks when tuning every hyperparameter. For the task on MRPC, γ is tuned in all experiments. In the other cases, we only tune the original hyperparameters, without a learning rate scheduler.
In addition, Hyperband parameter values for each task are listed in Table 7. These parameters are assigned based on properties of different tasks.
The time cost of our evaluation protocols depends on the available budget. Specifically, in our paper the unit of time budget is one epoch, so the total time is B_epoch · T_epoch, where B_epoch is the total available budget and T_epoch is the running time of one epoch. There is no additional computational cost, i.e., running our protocol once takes the same time as running one hyperparameter search with Hyperband. In our experiment on CIFAR10, we evaluated roughly 200 hyperparameter configurations in one Hyperband run, while the same time allows only about 50 configurations with random search.
Moreover, we can further accelerate our evaluation protocol by resampling, shown in Algorithm 6. The basic idea is that we keep a library of different hyperparameter settings. At the beginning, the library is empty. And in each repetition, we sample a number of configurations required by running Hyperband once. During the simulation of Hyperband, we just retrieve the value from the library if the desired epoch of current configuration is contained in the library. Otherwise, we run this configuration based on Hyperband, and store the piece of the trajectory to the library.
Algorithm 6 Alternative End-to-End Efficiency Evaluation Protocol
Input: a set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, search space F, library size S
1: for o ∈ O do
2:   Sample S configurations from F; initialize the library with an empty list for each setting
3:   for i = 1 to M do
4:     Simulate Hyperband with o on a, using configurations re-sampled from the library
5:     if the desired accuracy is pre-computed in the library then
6:       Retrieve the value directly
7:     else
8:       Train normally, and store the values in the library
9:     end if
10:    Record the performance trajectory {P_t}_{t=1}^T explored by Hyperband
11:    Calculate the peak performance and CPE by Eq. 1
12:   end for
13:   Average peak and CPE values over the M repetitions for the optimizer o
14: end for
15: Evaluate optimizers according to their peak and CPE values
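A rough sketch of the library idea is shown below; `train_one_epoch` is a stand-in for one epoch of real training, and the toy curve at the bottom is only for illustration.

```python
import math, random

class TrajectoryLibrary:
    """Cache of per-configuration learning curves, as used in Algorithm 6.

    train_one_epoch(config, epoch_index) is a stand-in for one epoch of real training
    and returns the validation metric reached after that epoch.
    """
    def __init__(self, configs, train_one_epoch):
        self.configs = configs
        self.train_one_epoch = train_one_epoch
        self.curves = {i: [] for i in range(len(configs))}    # config id -> cached metrics

    def evaluate(self, config_id, epochs):
        """Metric of `config_id` after `epochs` epochs, training only the missing part."""
        curve = self.curves[config_id]
        while len(curve) < epochs:                            # extend the cached curve if needed
            curve.append(self.train_one_epoch(self.configs[config_id], len(curve)))
        return curve[epochs - 1]                              # otherwise a pure cache lookup

# Toy usage: each "training run" is a saturating curve whose plateau depends on the learning rate.
configs = [{"lr": 10 ** random.uniform(-5, 0)} for _ in range(8)]
fake_epoch = lambda c, e: (1 - 1 / (e + 2)) * (1 - abs(math.log10(c["lr"]) + 2) / 5)
lib = TrajectoryLibrary(configs, fake_epoch)
print(lib.evaluate(0, 5), lib.evaluate(0, 10))   # the first 5 epochs are reused from the cache
```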
F ADDITIONAL RESULTS
More detailed experimental results are reported in this section.
F.1 IMPACT OF η
Since there is an extra hyperparameter, the reduction factor η in Hyperband, we conduct an experiment with different values (η = 2, 3, 4, 5) to observe the potential impact of this additional
hyperparameter on our evaluation. Specifically, we use Hyperband to tune the learning rate for three optimizers, SGD, Adam, and Lookahead on CIFAR10, and results are presented in Table 8. As we can see, although the change of η may lead to different CPE values, the relative ranking among three optimizers remains unchanged. Besides, they all achieve comparable peak performance at the end of training. Considering the efficiency of Hyperband, we choose η = 3 based on the convention in Li et al. (2017) in all our experiments.
F.2 END-TO-END TRAINING
Table 9 shows peak performance for optimizers on each task. For GAN, we only conduct evaluation on optimizers tuning learning rate due to time limit, and present its CPE and peak performance in Table 10. There is also an end-to-end training curve for GAN on CIFAR10 in Figure 6. Figures for end-to-end training curves on the rest of tasks are shown in Figure 9 and 10.
Optimizer    GAN-CIFAR10 ↓ (tune learning rate only)
SGD          113.25 (50.08)
Adam         77.04 (25.14)
RAdam        73.85 (19.61)
Yogi         76.80 (25.36)
LARS         157.71 (73.82)
LAMB         68.47 (25.55)
Lookahead    65.61 (20.40)
Besides the general performance profile, we present a probabilistic version described in Barreto et al. (2010) in Figure 7 to account for randomness. The probabilistic performance profile can be formulated as
\bar{\rho}_o(\tau) = \frac{1}{|\mathcal{A}|} \sum_{a} P(\bar{r}_{o,a} \le \tau), \qquad \bar{r}_{o,a} \sim \mathcal{N}\!\left( \frac{\mu_{o,a}}{b_a}, \frac{\sigma^2_{o,a}}{b_a^2} \right), \qquad (3)
where µ_{o,a} and σ_{o,a} are the mean and standard deviation of the CPE of optimizer o on task a respectively, and b_a is the best expected CPE on a among all optimizers. As can be seen in Figure 7, the probabilistic performance profiles show a similar trend to Figure 4.
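A small sketch of how Eq. 3 could be evaluated with a normal CDF is given below; the CPE means and standard deviations are placeholders rather than values measured in our experiments, and we assume a lower-is-better metric so that b_a is the smallest expected CPE.

```python
import numpy as np
from scipy.stats import norm

# Illustrative CPE means/stds over repetitions for a lower-is-better metric
# (placeholder numbers, not the values reported in the paper).
mu    = {"Adam": {"GAN": 80.0}, "RAdam": {"GAN": 75.0}, "LARS": {"GAN": 150.0}}
sigma = {"Adam": {"GAN": 25.0}, "RAdam": {"GAN": 20.0}, "LARS": {"GAN": 70.0}}
tasks = ["GAN"]

def prob_performance_profile(o, tau):
    """Eq. 3: average over tasks of P(r_bar <= tau), with r_bar ~ N(mu/b, sigma^2/b^2),
    where b is the best (here: smallest) expected CPE on the task."""
    probs = []
    for a in tasks:
        b = min(mu[q][a] for q in mu)     # lower CPE is better in this example
        probs.append(norm.cdf(tau, loc=mu[o][a] / b, scale=sigma[o][a] / b))
    return float(np.mean(probs))

for o in mu:
    print(o, [round(prob_performance_profile(o, t), 3) for t in (1.0, 1.5, 2.0)])
```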
We also attach two end-to-end training trajectories on CIFAR10 with error bars in Figure 8. Since it is hard to distinguish the optimizers once the standard deviation is added to the plot, we instead report the standard deviations of CPE and peak performance in Tables 3 and 9.
F.3 DATA ADDITION TRAINING
We provide training curves for data-addition training on full MRPC and PPI dataset in Figure 11. | 1. What are the strengths and weaknesses of the proposed benchmarking protocol for deep learning optimizers?
2. How does HyperBand compare to random search in terms of its ability to resemble human tuning behavior?
3. What are some potential biases introduced by using HyperBand instead of random search?
4. How do the results of the human study provided in the paper impact the confidence in the central claim of the paper?
5. How does the cost of HyperBand compare to the cost of random search with early stopping?
6. Is there any direct evidence that HyperBand is better than the protocol used by Sivaprasad et al. (2020)?
7. How does the choice of default values for HyperBand's hyperparameters impact the evaluation process?
8. Can we ensure that the unified search space used in the paper doesn't contain biases toward certain optimizers?
9. What is the variance of the average CPE and peak performance values?
10. What is meant by "expected performance under different time budgets" in the description of Scenario I? | Review | Review
The score does not represent the initial review. It was updated following the discussions below.
The paper proposes a new benchmarking protocol for optimizers in deep learning. The main argument is that previous papers have either neglected hyperparameter tuning or have employed hyperparameter search method that is far from how humans tune hyperparameters in practice. To mitigate the latter, the paper proposes to use HyperBand for automatic hyperparameter search. They show through a human study that HyperBand resembles human tuning performance more closely than random search. They then evaluate several optimizers on a multitude of tasks, including many recently proposed methods, under two scenarios: 1. Given an unfamiliar task, the effectiveness of each optimizer is measured through a metric that is cognizant of the hyperparameter tuning time. 2. After initially tuning on a subset of the dataset, how well does the obtained best hyperparameter configuration transfer to the full dataset? The main result is that the recently proposed optimizers are not substantially better than Adam.
By independently comparing recently proposed optimizers in two realistic scenarios, the paper would constitute a valuable contribution to the machine learning community. However, listed below, I take several issues that negatively impact my confidence in the protocol. If these issues could be resolved by answering my questions, I am inclined to update my rating upwards.
Major questions:
The main argument for replacing random search with HyperBand is that HyperBand resembles human tuning behavior much more closely. You provide the results of a human study as evidence for this central claim. However, the paper provides little detail on the nature of the study. How many participants did you have? How were they sampled? What was the expertise of the participants? If they happen to be familiar with computer vision, there is a good chance that they have trained on CIFAR10 before, thus already knowing good hyperparameter values. This would contradict the scenario that you assume in your benchmark, which is unfamiliarity with the task. Non-experts in computer vision would likely take a lot longer to tune CIFAR10 to good performance, perhaps more closely resembling the curve of random search. In its current state, I have little confidence in this human study.
You claim that in the evaluation protocol of Sivaprasad et al. (2020) each bad hyperparameter has to run fully. This is not true, because the protocol incorporates early stopping after two successive epochs in which the validation performance doesn't improve, thereby preventing at least very bad configurations. It is obvious that HyperBand is a more sophisticated solution, but there is no direct evidence that it is so much better that it justifies replacing Sivaprasad et al. (2020)'s protocol. Perhaps you could include random search with a simple early stopping criterion in Figure 2?
In Algorithm 1, the performance trajectory is computed M times, and you average the peak and CPE values over all repetitions, which is good to account for stochasticity. But you never mention how large M is. Is the cost of HyperBand low enough to allow for a sufficiently large M? Would it instead be possible to compute expected validation performance as is done in Sivaprasad et al. (2020) to decrease this cost significantly? If I understand HyperBand correctly, the hyperparameter configurations are still drawn independently via random search so that the expected validation performance could be computed, but I am not entirely sure.
One of Sivaprasad et al. (2020)'s motivations for using random search is that it requires no hyperparameters (except for the search space, which is however assumed to be given by optimizer designers), which could otherwise inject some human bias into the evaluation process. In contrast, HyperBand does have additional hyperparameters that could introduce human bias. E.g., you state "We set η = 3 as this default value performs consistently well, [...]". Can we be sure that this choice is not biased towards some optimizers?
Choi et al. (2019) make the case for choosing hyperparameter search spaces independently for each optimizer, noting that a unified search space may contain biases towards certain optimizers. You consider a unified search space, making the opposite argument by citing Metz et al. (2020). I could not find that argument in Metz et al. (2020). Could you please elaborate or point directly to their argument in their paper?
Among your summarized findings you state that Sivaprasad et al. (2020) find Adam to usually outperform SGD. This may be misleading, since they only suggest Adam to be more likely to yield good performance than SGD if nothing is known about the task. This is a very similar result to what the performance profile shows in your Figure 4. On CIFAR10 and CIFAR100, Sivaprasad et al. (2020) also find an SGD variant to perform as well or better than Adam.
Suggestions for improving the paper:
It would be good to provide not only average CPE and peak performance values, but also their variance.
In the description of Scenario I you state that you compute the expected performance under different time budgets. It would be good to clarify what you mean (I suspect peak performance vs. CPE?).
Minor issues:
In the related work on hyperparameter tuning methods, Sivaprasad et al. (2020) is falsely cited as a Bayesian optimization method.
The paper claims that Sivaprasad et al. (2020) consider optimizer B in Figure 1 as better than optimizer A. This is not true; towards the end of Section 2 in Sivaprasad et al. (2020) acknowledge that the value of each optimizer depends on the available budget. |
ICLR | Title
How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers
Abstract
Many optimizers have been proposed for training deep neural networks, and they often have multiple hyperparameters, which make it tricky to benchmark their performance. In this work, we propose a new benchmarking protocol to evaluate both end-to-end efficiency (training a model from scratch without knowing the best hyperparameter) and data-addition training efficiency (the previously selected hyperparameters are used for periodically re-training the model with newly collected data). For end-to-end efficiency, unlike previous work that assumes random hyperparameter tuning, which over-emphasizes the tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy. A human study is conducted to show that our evaluation protocol matches human tuning behavior better than the random search. For data-addition training, we propose a new protocol for assessing the hyperparameter sensitivity to data shift. We then apply the proposed benchmarking framework to 7 optimizers and various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining. Our results show that there is no clear winner across all the tasks.
1 INTRODUCTION
Due to the enormous data size and non-convexity, stochastic optimization algorithms have become widely used in training deep neural networks. In addition to Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), many variations such as Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014) have been proposed. Unlike classical, hyperparameter free optimizers such as gradient descent and Newton’s method1, stochastic optimizers often hold multiple hyperparameters including learning rate and momentum coefficients. Those hyperparameters are critical not only to the speed, but also to the final performance, and are often hard to tune.
It is thus non-trivial to benchmark and compare optimizers in deep neural network training. And a benchmarking mechanism that focuses on the peak performance could lead to a false sense of improvement when developing new optimizers without considering tuning efforts. In this paper, we aim to rethink the role of hyperparameter tuning in benchmarking optimizers and develop new benchmarking protocols to reflect their performance in practical tasks better. We then benchmark seven recently proposed and widely used optimizers and study their performance on a wide range of tasks. In the following, we will first briefly review the two existing benchmarking protocols, discuss their pros and cons, and then introduce our contributions.
Benchmarking performance under the best hyperparameters A majority of previous benchmarks and comparisons on optimizers are based on the best hyperparameters. Wilson et al. (2017); Shah et al. (2018) made a comparison of SGD-based methods against adaptive ones under their best hyperparameter configurations. They found that SGD can outperform adaptive methods on several datasets under careful tuning. Most of the benchmarking frameworks for ML training also assume knowing the best hyperparameters for optimizers (Schneider et al., 2019; Coleman et al., 2017; Zhu et al., 2018). Also, the popular MLPerf benchmark evaluated the performance of optimizers under the best hyperparameter. It showed that ImageNet and BERT could be trained in 1 minute using the combination of good optimizers, good hyperparameters, and thousands of accelerators.
1The step sizes of gradient descent and Newton’s method can be automatically adjusted by a line search procedure (Nocedal & Wright, 2006).
Despite each optimizer’s peak performance being evaluated, benchmarking under the best hyperparameters makes the comparison between optimizers unreliable and fails to reflect their practical performance. First, the assumption of knowing the best hyperparameter is unrealistic. In practice, it requires a lot of tuning efforts to find the best hyperparameter, and the tuning efficiency varies greatly for different optimizers. It is also tricky to define the “best hyperparameter”, which depends on the hyperparameter searching range and grids. Further, since many of these optimizers are sensitive to hyperparameters, some improvements reported for new optimizers may come from insufficient tuning for previous work.
Benchmarking performance with random hyperparameter search It has been pointed out in several papers that tuning hyperparameter needs to be considered in evaluating optimizers (Schneider et al., 2019; Asi & Duchi, 2019), but having a formal evaluation protocol on this topic is nontrivial. Only recently, two papers Choi et al. (2019) and Sivaprasad et al. (2020) take hyperparameter tuning time into account when comparing SGD with Adam/Adagrad. However, their comparisons among optimizers are conducted on random hyperparameter search. We argue that these comparisons could over-emphasize the role of hyperparameter tuning, which could lead to a pessimistic and impractical performance benchmarking for optimizers. This is due to the following reasons: First, in the random search comparison, each bad hyperparameter has to run fully (e.g., 200 epochs). In practice, a user can always stop the program early for bad hyperparameters if having a limited time budget. For instance, if the learning rate for SGD is too large, a user can easily observe that SGD diverges in a few iterations and directly stops the current job. Therefore, the random search hypothesis will over-emphasize the role of hyperparameter tuning and does not align with a real user’s practical efficiency. Second, the performance of the best hyperparameter is crucial for many applications. For example, in many real-world applications, we need to re-train the model every day or every week with newly added data. So the best hyperparameter selected in the beginning might benefit all these re-train tasks rather than searching parameters from scratch. In addition, due to the expensive random search, random search based evaluation often focuses on the low-accuracy region2, while practically we care about the performance for getting reasonably good accuracy.
Our contributions Given that hyperparameter tuning is either under-emphasized (assuming the best hyperparameters) or over-emphasize (assuming random search) in existing benchmarking protocols and comparisons, we develop new evaluation protocols to compare optimizers to reflect the real use cases better. Our evaluation framework includes two protocols. First, to evaluate the end-to-end training efficiency for a user to train the best model from scratch, we develop an efficient evaluation protocol to compare the accuracy obtained under various time budgets, including the hyperparameter tuning time. Instead of using the random search algorithm, we adopt the Hyperband (Li et al., 2017) algorithm for hyperparameter tuning since it can stop early for bad configurations and better reflect the real running time required by a user. Further, we also propose to evaluate the data addition training efficiency for a user to re-train the model with some newly added training data, with the knowledge of the best hyperparameter tuned in the previous training set. We also conduct human studies to study how machine learning researchers are tuning parameters in optimizers and how that aligns with our proposed protocols.
Based on the proposed evaluation protocols, we study how much progress has recently proposed algorithms made compared with SGD or Adam. Note that most of the recent proposed optimizers have shown outperforming SGD and Adam under the best hyperparameters for some particular tasks, but it is not clear whether the improvements are still significant when considering hyper-parameter tuning, and across various tasks. To this end, we conduct comprehensive experiments comparing state-of-the-art training algorithms, including SGD (Robbins & Monro, 1951), Adam (Kingma & Ba, 2014), RAdam (Liu et al., 2019), Yogi (Zaheer et al., 2018), LARS (You et al., 2017), LAMB (You et al., 2019), and Lookahead (Zhang et al., 2019), on a variety of training tasks including image classification, generated adversarial networks (GANs), sentence classification (BERT fine-tuning), reinforcement learning and graph neural network training. Our main conclusions are: 1) On CIFAR-10 and CIFAR-100, all the optimizers including SGD are competitive. 2) Adaptive methods are generally better on more complex tasks (NLP, GCN, RL). 3) There is no clear winner among adaptive methods. Although RAdam is more stable than Adam across tasks, Adam is still a very competitive baseline even compared with recently proposed methods.
2For instance, Sivaprasad et al. (2020) only reaches < 50% accuracy in their CIFAR-100 comparisons.
2 RELATED WORK
Optimizers. Properties of deep learning make it natural to apply stochastic first order methods, such as Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Severe issues such as a zig-zag training trajectory and a uniform learning rate have been exposed, and researchers have then drawn extensive attention to modify the existing SGD for improvement. Along this line of work, tremendous progresses have been made including SGDM (Qian, 1999), Adagrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014). These methods utilize momentums to stabilize and speed up training procedures. Particularly, Adam is regarded as the default algorithm due to its outstanding compatibility. Then variants such as Amsgrad (Reddi et al., 2019), Adabound (Luo et al., 2019), Yogi (Zaheer et al., 2018), and RAdam (Liu et al., 2019) have been proposed to resolve different drawbacks of Adam. Meanwhile, the requirement of large batch training has inspired the development of LARS (You et al., 2017) and LAMB (You et al., 2019). Moreover, Zhang et al. (2019) has put forward a framework called Lookahead to boost optimization performance by iteratively updating two sets of weights.
Hyperparameter tuning methods. Random search and grid search (Bergstra & Bengio, 2012) can be a basic hyperparameter tuning method in the literature. However, the inefficiency of these methods stimulates the development of more advanced search strategies. Bayesian optimization methods including Bergstra et al. (2011) and Hutter et al. (2011) accelerate random search by fitting a black-box function of hyperparameter and the expected objective to adaptively guide the search direction. Parallel to this line of work, Hyperband (Li et al., 2017) focuses on reducing evaluation cost for each configuration and early terminates relatively worse trials. Falkner et al. (2018) proposes BOHB to combine the benefits of both Bayesian Optimization and Hyperband. All these methods still require huge computation resources. A recent work (Metz et al., 2020) has tried to obtain a list of potential hyperparameters by meta-learning from thousands of representative tasks. We strike a balance between effectiveness and computing cost and leverage Hyperband in our evaluation protocol to compare a wider range of optimizers.
3 PROPOSED EVALUATION PROTOCOLS
In this section, we introduce the proposed evaluation framework for optimizers. We consider two evaluation protocols, each corresponding to an important training scenario:
• Scenario I (End-to-end training): This is the general training scenario, where a user is given an unfamiliar optimizer and task, and the goal is to achieve the best validation performance after several trials and errors. In this case, the evaluation needs to include hyperparameter tuning time. We develop an efficiency evaluation protocol to compare various optimizers in terms of CPE and peak performance.
• Scenario II (Data-addition training): This is another useful scenario encountered in many applications, where the same model needs to be retrained regularly after collecting some fresh data. In this case, a naive solution is to reuse the previously optimal hyperparameters and retrain the model. However, since the data distribution has shifted, the result depends on the sensitivity of the optimizer to that shift.
We describe the detailed evaluation protocol for each setting in the following subsections.
3.1 END-TO-END TRAINING EVALUATION PROTOCOL
Before introducing our evaluation protocol for Scenario I, we first formally define the concept of optimizer and its hyperparameters. Definition 1. An optimizer is employed to solve a minimization problem minθ L(θ) and can be defined by a tuple o ∈ O = (U ,Ω), where O contains all types of optimizers. U is a specific update rule and Ω = (ω1, . . . , ωN ) ∈ RN represents a vector ofN hyperparameters. Search space of these hyperparameters is denoted by F . Given an initial parameter value θ0, together with a trajectory of optimization procedure Ht = {θs,L(θs),∇L(θs)}, the optimizer updates θ by
θt+1 = U(Ht,Ω).
We aim to evaluate the end-to-end time for a user to get the best model, including the hyperparameter tuning time. A recent work (Sivaprasad et al., 2020) assumes that a user conducts random search for finding the best hyperparameter setting. Still, we argue that the random search procedure will over-emphasize the importance of hyperparameters when tuning is considered — it assumes a user
never stops the training even if they observe divergence or bad results in the initial training phase, which is unrealistic.
Figure 1 illustrates why random search might not lead to a fair comparison of optimizers. In Figure 1, we are given two optimizers, A and B, and their corresponding loss w.r.t. hyperparameter. According to Sivaprasad et al. (2020), optimizer B is considered better than optimizer A under a constrained budget since most regions of the hyperparameter space of A outperforms B. For instance, suppose we randomly sample the same hyperparamter setting for A and B. The final config ω∗r (B) found under this strategy can have a lower expected loss than that of ω∗r (A), as shown in Figure 1a. However, there exists a more practical search strategy which can invalidate this statement with the assumption of a limited searching budget: a user can early terminate a configuration trial when trapped in bad results or diverging. Hence, we can observe in Figure 1b that for optimizer A, this strategy earlystops many configurations and only allow a limited number of trials to explore to the deeper stage. Therefore, the bad hyperparameters will not affect the overall efficiency of Optimizer A too much. In contrast, for optimizer B, performances of different hyperparameters are relatively satisfactory and hard to distinguish, resulting in similar and long termination time for each trial. Therefore, it may be easier for a practical search strategy p to find the best configuration ω∗p(A) of optimizer A than ω∗p(B), given the same constrained budget.
This example suggests that random search may over-emphasize the parameter sensitivity when benchmarking optimizers. To better reflect a practical hyperparameter tuning scenario, our evaluation assumes a user applies Hyperband (Li et al., 2017), a simple but effective hyperparameter tuning scheme to get the best model. Hyperband formulates hyperparameter optimization as a unique bandit problem. It accelerates random search through adaptive resource allocation and earlystopping, as demonstrated in Figure 1b. Compared with its more complicated counterparts such as BOHB (Falkner et al., 2018), Hyperband requires less computing resources and performs similarly within a constrained budget. The algorithm is presented in Appendix A.
Despite different hyperparameter tuning algorithms, human tuning by experts is still regarded as the most effective. To verify and support that Hyperband is an effective method and even competitive with humans, we conduct a human study as follows: for image classification on CIFAR10, given 10 learning rate configurations of SGD in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10], participants are requested to search the best one at their discretion. Namely, they can stop or pause a trial any time and continue to evaluate a new configuration until they feel it has already reached the best performance. 10 participants are sampled randomly from Ph.D. students with computer science backgrounds.
We collect their tuning trajectories and average them as human performance, which is considered “optimal” in this human study. In Figure 2, we plot curves for hyperparameter tuning of human, Hyperband, random search, random search with an early stopping (ES) strategy in Sivaprasad et al. (2020), and Hyperband with ES. We find that Hyperband matches humans’ behavior better, while random search tends to trap in suboptimal configurations although random with early stopping can mitigate this issue to some extent. This finding shows the advantage of Hyperband over random
search regardless of early stopping, and justifies the use of Hyperband in optimizer benchmarking. More details of this human study can be found in Appedix B.
With Hyperband incorporated in end-to-end training, we assume that each configuration is run sequentially and record the best performance obtained at time step t as Pt. Specifically, Pt represents the evaluation metric for each task, e.g., accuracy for image classification and return for reinforcement learning. {Pt}Tt=1 forms a trajectory for plotting learning curves on test set like Figure 3. Although it is intuitive to observe the performance of different optimizers according to such figures, summarizing a learning curve into a quantifiable, scalar value can be more insightful for evaluation. Thus, as shown in Eq. 1, we use λ-tunability defined in Sivaprasad et al. (2020) to further measure the performance of optimizers:
\lambda\text{-tunability} = \sum_{t=1}^{T} \lambda_t \cdot P_t, \quad \text{where } \sum_{t} \lambda_t = 1 \text{ and } \forall t\; \lambda_t > 0. \qquad (1)
One intuitive choice is to set λ_t = 1_{t=T}, which determines which optimizer reaches the best model performance at the end of the whole training procedure. However, considering only the peak performance is not good guidance for the choice of optimizer. In practice, we tend to take the complete trajectory into account and put more emphasis on the early stage. Thus, we employ the Cumulative Performance-Early weighting scheme, where λ_t ∝ (T − t), to compute λ-tunability instead of the extreme assignment λ_t = 1_{t=T}. The resulting value is termed CPE for simplicity.
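A minimal sketch of how CPE could be computed from a recorded trajectory {P_t} is shown below; the two toy trajectories are placeholders used only to illustrate that an optimizer which improves early receives a higher CPE.

```python
import numpy as np

def lambda_tunability(trajectory, weights):
    """Eq. 1: weighted sum of the best-so-far performance P_t, with the weights normalized to 1."""
    P = np.asarray(trajectory, dtype=float)
    lam = np.asarray(weights, dtype=float)
    lam = lam / lam.sum()
    return float((lam * P).sum())

def cpe(trajectory):
    """Cumulative Performance-Early: linearly decreasing weights that emphasize early progress."""
    T = len(trajectory)
    return lambda_tunability(trajectory, weights=np.arange(T, 0, -1))

# Toy usage: two trajectories with the same peak; the one that improves early gets a higher CPE.
fast = [0.80, 0.88, 0.90, 0.91, 0.91]
slow = [0.50, 0.60, 0.75, 0.85, 0.91]
print(cpe(fast), cpe(slow))
```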
We present our evaluation protocol in Algorithm 1. As we can see, end-to-end training with hyperparameter optimization is conducted for various optimizers on the given task. The trajectory {Pt}Tt=1 is recorded to compute the peak performance as well as CPE value. Note that the procedure is repeated M times to obtain a reliable result. We use M = 3 in all experiments. More details of time cost and acceleration of the algorithm can be found in Appendix E.
Algorithm 1 End-to-End Efficiency Evaluation Protocol
Input: a set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, feasible search space F
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search in F with the optimizer o using Hyperband on a
4:     Record the performance trajectory {P_t}_{t=1}^T explored by Hyperband
5:     Calculate the peak performance and CPE by Eq. 1
6:   end for
7:   Average peak and CPE values over the M repetitions for the optimizer o
8: end for
9: Evaluate optimizers according to their peak and CPE values
3.2 DATA-ADDITION TRAINING EVALUATION PROTOCOL
In Scenario II, we have a service (e.g., a search or recommendation engine) and we want to re-train the model every day or every week with some newly added training data. One may argue that an online learning algorithm should be used in this case, but in practice online learning is unstable and industries still prefer this periodically retraining scheme which is more stable.
In this scenario, once the best hyperparameters were chosen in the beginning, we can reuse them for every training, so no hyperparameter tuning is required and the performance (including both efficiency and test accuracy) under the best hyperparameter becomes important. However, an implicit assumption made in this process is that “the best hyperparameter will still work when the training task slightly changes”. This can be viewed as transferability of hyperparameters for a particular optimizer, and our second evaluation protocol aims to evaluate this practical scenario.
We simulate data-addition training with all classification tasks, and the evaluation protocol works as follows: 1) Extract a subset Dδ containing partial training data from the original full dataset D with a small ratio δ; 2) Conduct a hyperparameter search on Dδ to find the best setting under this scenario; 3) Use these hyperparameters to train the model on the complete dataset; 4) Observe the potential change of the ranking of various optimizers before and after data addition. For step 4) when comparing different optimizers, we will plot the training curve in the full-data training stage in Section 4, and also summarize the training curve using the CPE value. The detailed evaluation protocol is described in Algorithm 2.
Algorithm 2 Data-Addition Training Evaluation Protocol
Input: a set of optimizers O = {o : o = (U, Ω)}, task a ∈ A with a full dataset D, a split ratio δ
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search with the optimizer o using Hyperband on a with the partial dataset D_δ, and record the best hyperparameter setting Ω_partial found under this scenario
4:     Apply the optimizer with Ω_partial on D_δ and D, then save the training curves
5:   end for
6:   Average the training curves of o over the M repetitions to compute CPE
7: end for
8: Compare the performance of different optimizers under data-addition training
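A rough sketch of this protocol is given below; `tune_with_hyperband` and `train` are stand-ins for the tuning and training routines, and the toy usage at the bottom is only illustrative.

```python
import random

def data_addition_eval(dataset, tune_with_hyperband, train, delta=0.3, seed=0):
    """Sketch of Algorithm 2: tune on a delta-fraction of the data, then reuse the selected
    hyperparameters on the full dataset without re-tuning, and return both training curves.

    tune_with_hyperband(data) -> best hyperparameter configuration found on `data`.
    train(data, config)       -> learning curve (list of per-epoch metrics).
    """
    rng = random.Random(seed)
    subset = rng.sample(dataset, int(delta * len(dataset)))   # the partial dataset D_delta
    omega_partial = tune_with_hyperband(subset)               # hyperparameters tuned on D_delta only
    curve_partial = train(subset, omega_partial)
    curve_full = train(dataset, omega_partial)                # no re-tuning after data addition
    return omega_partial, curve_partial, curve_full

# Toy usage with stand-in tuning and training routines.
data = list(range(1000))
tune = lambda d: {"lr": 0.1}
fit = lambda d, cfg: [min(1.0, cfg["lr"] * (e + 1) * len(d) / 1000) for e in range(5)]
print(data_addition_eval(data, tune, fit))
```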
4 EXPERIMENTAL RESULTS
Optimizers to be evaluated. As shown in Table 1, we consider 7 optimizers including nonadaptive methods using only the first-order momentum, and adaptive methods considering both firstorder and second-order momentum. We also provide lists of tunable hyperparameters for different optimizers in Table 1. Moreover, we consider following two combinations of tunable hyperparameters to better investigate the performance of different optimizers: a) only tuning initial learning rate with the others set to default values and b) tuning a full list of hyperparameters. A detailed description of optimizers as well as default values and search range of these hyperparameters can be found in Appendix E. Note that we adopt a unified search space for a fair comparison following Metz et al. (2020), to eliminate biases of specific ranges for different optimizers. The tuning budget of Hyperband is determined by three items: maximum resource (in this paper we use epoch) per configuration R, reduction factor η, and number of configurations nc. According to Li et al. (2017), a single Hyperband execution contains ns = blogη(R)c + 1 of SuccessiveHalving, each referred to as a bracket. These brackets take strategies from least to most aggressive early-stopping, and each one is designed to use approximately B = R · ns resources, leading to a finite total budget. The number of randomly sampled configurations in one Hyperband run is also fixed and grows with R. Then given R and η, nc determines the repetition times of Hyperband. We set η = 3 as this default value performs consistently well, and R to a value which each task usually takes for a complete run. For nc, it is assigned as what is required for a single Hyperband execution for all tasks, except for BERT fine-tuning, where a larger number of configurations is necessary due to a relatively small R. In Appendix E, we give assigned values of R, η, and nc for each task.
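As a rough illustration of how the tuning budget scales, the sketch below computes the number of brackets, the per-bracket budget B, and the number of sampled configurations for given R and η; the printed numbers are illustrative, and the actual values used for each task are listed in Table 7.

```python
import math

def hyperband_budget(R, eta=3):
    """Number of SuccessiveHalving brackets, approximate per-bracket budget B, and the
    total number of sampled configurations for one Hyperband execution."""
    s_max = int(math.log(R, eta) + 1e-9)
    n_s = s_max + 1                     # number of brackets
    B = n_s * R                         # approximate budget per bracket (in epochs)
    n_configs = sum(math.ceil(B / R * eta ** s / (s + 1)) for s in range(s_max, -1, -1))
    return n_s, B, n_configs

print(hyperband_budget(R=200, eta=3))   # illustrative values only
```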
Tasks for benchmarking. For a comprehensive and reliable assessment of optimizers, we consider a wide range of tasks in different domains. When evaluating end-to-end training efficiency, we implement our protocol on tasks covering several popular and promising applications in Table 2. Apart from common tasks in computer vision and natural language processing, we introduce two extra tasks in graph neural network training and reinforcement learning. For simplicity, we will use the dataset to represent each task in our subsequent tables of experimental results. (For the reinforcement learning task, we just use the environment name.) The detailed settings and parameters for each task can be found in Appendix D.
4.1 END-TO-END EFFICIENCY (SECNARIO I)
To evaluate end-to-end training efficiency, we adopt the protocol in Algorithm 1. Specifically, we record the average training trajectory with Hyperband {Pt}Tt=1 for each optimizer on benchmarking tasks, where Pt is the evaluation metric for each task (e.g., accuracy, return). We visualize these trajectories in Figure 3 for CIFAR10 and CIFAR100, and calculate CPE and peak performance in Table 3 and 9 respectively. More results for other tasks and the peak performance can be found in Appendix F. Besides, in Eq. 2 we compute performance ratio ro,a for each optimizer and each task, and then utilize the distribution function of a performance metric called performance profile ρo(τ) to summarize the performance of different optimizers over all the tasks. For tasks where a lower CPE is better, we just use ro,a = CPEo,a/min{CPE o,a} instead to guarantee ro,a ≥ 1. The function ρo(τ) for all optimizers is presented in Figure 4. Based on the definition of performance profile (Dolan & Moré, 2002), the optimizers with large probability ρo(τ) are to be preferred. In particular, the value of ρo(1) is the probability that one optimizer will win over the rest and can be a reference for selecting the proper optimizer for an unknown task. We also provided a probabilistic performance profile to summarize different optimizers in Figure 7 in Appendix F.
r_{o,a} = \frac{\max\{\mathrm{CPE}_{o,a} : o \in \mathcal{O}\}}{\mathrm{CPE}_{o,a}}, \qquad \rho_o(\tau) = \frac{1}{|\mathcal{A}|}\,\bigl|\{\, a \in \mathcal{A} : r_{o,a} \le \tau \,\}\bigr|. \qquad (2)
Our findings are summarized below:
• It should be emphasized from Tables 3 and 9 that, under our Hyperband-based protocol, SGD performs similarly to Adam in terms of efficiency as well as peak performance, and can even surpass it in some cases, such as training on CIFAR100. Under Hyperband, the best configuration of SGD is less tedious to find than with random search, because Hyperband can early-stop bad runs, so they have less impact on search efficiency and final performance.
• For image classification tasks all the methods are competitive, while adaptive methods tend to perform better on more complicated tasks (NLP, GCN, RL).
• There is no significant distinction among the adaptive variants: their performance tends to fall within 1% of the best result.
• According to the performance profile in Figure 4, RAdam reaches probability 1 with the smallest τ, and Adam is the second method to do so. This indicates that RAdam and Adam achieve relatively stable and consistent performance across these tasks.
4.2 DATA-ADDITION TRAINING (SCENARIO II)
We then conduct the evaluation of data-addition training based on the protocol in Algorithm 2. We choose four classification problems on CIFAR10, CIFAR100, MRPC and PPI, since this setting does not apply to RL. We search for the best hyperparameter configuration, denoted by Ωpartial, on the sub training set with ratio δ = 0.3; here we tune all hyperparameters. Then we directly apply Ωpartial to the full dataset for a complete training process. The training curves are shown in Figure 5, and we also summarize each training curve as a CPE value via Eq. 1 in Table 4. We have the following findings:
• There is no clear winner in data-addition training. RAdam outperforms the other optimizers on 2 of the 4 tasks and is thus slightly preferred, but the other optimizers except Lookahead are also competitive (within a 1% range) on at least 2 of the 4 tasks.
• To investigate whether the ranking of optimizers changes when adding 70% more data, we compare the training curve on the original 30% of the data with the training curve on the full 100% in Figure 5. We observe that the ranking of optimizers changes slightly after data addition.
5 CONCLUSIONS AND DISCUSSIONS
In conclusion, we find no strong evidence that newly proposed optimizers consistently outperform Adam, although each of them may be good for particular tasks. When deciding on an optimizer for a specific task, practitioners can refer to the results in Tables 3 and 9. If the task appears in Table 2, they can directly choose the optimizer with the best CPE or the best peak performance, depending on whether the goal is ease of tuning or high final performance. Even if the desired task is not covered, one can still gain insights from the results of the most similar task in Table 2, or refer to the performance profile in Figure 4 and pick an adaptive method such as Adam. Beyond choosing among existing optimizers, the protocol also helps in designing new ones: evaluating a new optimizer with our protocol shows whether it offers a clear improvement over existing optimizers and can serve as a routine for judging its performance thoroughly.
In addition to the two proposed evaluation criteria, there could be other factors that affect the practical performance of an optimizer. First, memory consumption is becoming important for training large DNN models. For instance, although Lookahead performs well on certain tasks, it requires more memory than other optimizers, restricting its use in memory-constrained applications. Another important criterion is the scalability of optimizers: when training on a massively distributed system, performance in the large-batch regime (e.g., a 32K batch size for ImageNet) matters, and the LARS and LAMB algorithms included in our study were developed for large-batch training. We believe this is another important dimension for comparing optimizers that is worth studying further.
A HYPERBAND
We present the whole algorithm for Hyperband in Algorithm 3, and you can refer to Li et al. (2017) for more details.
Algorithm 3 Hyperband
Input: R, η
Initialization: s_max = ⌊log_η R⌋, B = (s_max + 1)R
1: for s ∈ {s_max, s_max − 1, . . . , 0} do
2:   n = ⌈(B/R) · η^s/(s + 1)⌉, r = R·η^{−s}
3:   // begin SuccessiveHalving with (n, r) inner loop
4:   T = random_get_configuration(n)
5:   for i ∈ {0, . . . , s} do
6:     n_i = ⌊n·η^{−i}⌋, r_i = r·η^i
7:     L = {run_then_return_val_loss(t, r_i) : t ∈ T}
8:     T = top_k(T, L, ⌊n_i/η⌋)
9:   end for
10: end for
return Hyperparameter configuration with the smallest loss seen so far
B DETAILS OF HUMAN STUDY
In this human study, 10 participants are sampled from Ph.D. students with computer science backgrounds (machine learning backgrounds specifically). They are recruited as follows: We first asked the administrators to distribute the program of this human study to Ph.D. students in machine learning labs in our institutions. Provided the population, we assumed that they had prior knowledge about some basic machine learning experiments, such as image classification on MNIST and CIFAR10. They were requested to conduct hyperparameter tuning of learning rate based on their knowledge. They were informed that the target experiment was image classification on CIFAR10 with SGD, and the search range of learning rate was in the grid [1.0 × 10−8, 1.0 × 10−7, 1.0 × 10−6, . . . , 10]. Each configuration was run for 200 epochs at most. Moreover, we also told them that they could pause any configuration if they wanted to evaluate others, and even stop the whole tuning process only when they thought the accuracy could not be further improved and the number of total tuning epochs exceeded 600 at the same time. We collected results from 17 people and we
determined their validity by checking whether the length of each trajectory exceeded 600 epochs, and removed 7 invalid trajectories. The remaining 10 trajectories were averaged to obtain the human performance shown in Figure 2.
C OPTIMIZERS
Notation. Given a vector of parameters θ ∈ R^d, we denote the sub-vector of its i-th layer's parameters by θ^(i). {α_t}_{t=1}^T is the sequence of learning rates over an optimization horizon T. {φ_t, ψ_t}_{t=1}^T is a sequence of functions that compute the first-order and second-order momentum of the gradient g_t, denoted m_t and v_t respectively at time step t; different optimization algorithms are usually specified by the choice of φ(·) and ψ(·). {r_t}_{t=1}^T is an additional sequence of adaptive terms that modify the magnitude of the learning rate in some methods. For algorithms using only the first-order momentum, µ is the momentum coefficient, while β1 and β2 are the coefficients used to compute the running averages m and v. ε is a small scalar (e.g., 1 × 10^−8) used to prevent division by 0. Generic optimization framework. Building on this notation, we develop a generic optimization framework that includes an extra adaptive term, shown in Algorithm 4. The debiasing term used in the original version of Adam is omitted for simplicity. Note that for {α_t}_{t=1}^T, different learning rate scheduling strategies can be adopted, and the choice of scheduler is also regarded as a tunable hyperparameter. Without loss of generality, in this paper we only consider a constant value and a linear decay (Shallue et al., 2018), given by the following equation, which introduces γ as a hyperparameter.
\alpha_t = \begin{cases} \alpha_0, & \text{constant;} \\ \alpha_0 - (1-\gamma)\,\alpha_0\,\frac{t}{T}, & \text{linear decay.} \end{cases}
With this generic framework, we can summarize several popular optimization methods by explicitly specifying m_t, v_t and r_t in Table 5. It should be clarified that Lookahead is an exception to the generic framework: it is better viewed as a high-level mechanism that can be combined with any other optimizer. However, since Zhang et al. (2019) report that this optimizer is robust to the choice of inner optimization algorithm, k, and α_s in Algorithm 5, we still include Lookahead here with Adam as its base optimizer for a more convincing and comprehensive evaluation. We treat Lookahead as a special adaptive method and tune the same hyperparameters for it as for the other adaptive optimizers.
Algorithm 4 Generic framework of optimization methods
Input: parameter value θ_1, learning rate with scheduling {α_t}, sequence of functions {φ_t, ψ_t, χ_t}_{t=1}^T to compute m_t, v_t, and r_t respectively.
1: for t = 1 to T do
2:   g_t = ∇f_t(θ_t)
3:   m_t = φ_t(g_1, · · · , g_t)
4:   v_t = ψ_t(g_1, · · · , g_t)
5:   r_t = χ_t(θ_t, m_t, v_t)
6:   θ_{t+1} = θ_t − α_t · r_t · m_t / √v_t
7: end for
Algorithm 5 Lookahead Optimizer
Input: initial parameters θ_0, objective function f, synchronization period k, slow-weights step size α_s, inner optimizer A
1: for t = 1, 2, . . . do
2:   Synchronize parameters θ̂_{t,0} ← θ_{t−1}
3:   for i = 1, 2, . . . , k do
4:     Sample minibatch of data d ∈ D
5:     θ̂_{t,i} ← θ̂_{t,i−1} + A(L, θ̂_{t,i−1}, d)
6:   end for
7:   Perform outer update θ_t ← θ_{t−1} + α_s(θ̂_{t,k} − θ_{t−1})
8: end for
return parameters θ
D TASK DESCRIPTION
We make a concrete description of tasks selected for our optimizer evaluation protocol:
• Image classification. For this task, we adopt a ResNet-50 (He et al., 2016) model on CIFAR10 and CIFAR100, with a batch size of 128 and a maximum of 200 epochs per trial.
• VAE. We use a vanilla variational autoencoder (Kingma & Welling, 2013) with five convolutional and five deconvolutional layers and a latent space of dimension 128 on CelebA. There are no dropout layers, and the model is trained with a batch size of 144.
• GAN. We train SNGAN with the same network architecture and objective function with spectral normalization as in Miyato et al. (2018) on CIFAR10; the batch sizes of the generator and the discriminator are 128 and 64 respectively.
• Natural language processing. In this domain, we fine-tune RoBERTa-base on MRPC, one of the test suites in the GLUE benchmark. For each optimizer, we set the maximum exploration budget to 800 epochs, with a batch size of 16 sentences.
• Graph learning. Among various graph learning problems, we choose node classification as a semi-supervised classification task. In GCN training, there are multiple ways to deal with the neighborhood explosion of stochastic optimizers; we choose Cluster-GCN (Chiang et al., 2019) as the backbone to handle neighborhood expansion, with PPI as the dataset.
• Reinforcement learning. We select Walker2d-v3 from OpenAI Gym (Brockman et al., 2016) as the training environment and PPO (Schulman et al., 2017), as implemented in OpenAI SpinningUp (Achiam, 2018), as the algorithm to be tuned. We use the same architecture for both the action-value network Q and the policy network π. We define 40,000 environment interactions as one epoch, with a batch size of 4,000. The reported return is the highest average test return of an epoch during training.
E IMPLEMENTATION DETAILS
Implementation details of our experiments are provided in this section. Specifically, we give the unified search space for all hyperparamters and their default values in Table 6. Note that we tune the learning rate decay factor for image classification tasks when tuning every hyperparamter. For the task on MRPC, γ is tuned for all experiments. In other cases, we only tune original hyperparamters without a learning rate scheduler.
In addition, Hyperband parameter values for each task are listed in Table 7. These parameters are assigned based on properties of different tasks.
For time cost of our evaluation protocols, it depends on how many budgets are available. Specifically, in our paper, the unit of time budget is one epoch, then the total time will be Bepoch ∗ Tepoch, where Bepoch is the total available budget and Tepoch is the running time for one epoch. There is no additional computational cost, i.e., running our protocol once takes the same time as running one hyperparameter search with Hyperband. In our experiment on CIFAR10, we roughly evaluated 200 hyperparameter configurations in one Hyperband running, while the same time can only allow about 50 configurations in random search.
Moreover, we can further accelerate our evaluation protocol by resampling, shown in Algorithm 6. The basic idea is that we keep a library of different hyperparameter settings. At the beginning, the library is empty. And in each repetition, we sample a number of configurations required by running Hyperband once. During the simulation of Hyperband, we just retrieve the value from the library if the desired epoch of current configuration is contained in the library. Otherwise, we run this configuration based on Hyperband, and store the piece of the trajectory to the library.
Algorithm 6 Alternative End-to-End Efficiency Evaluation Protocol Input: A set of optimizers O = {o : o = (U ,Ω)}, task a ∈ A, search space F , library size S
1: for o ∈ O do 2: Sample S configurations from F , initialize the library with an empty list for each setting 3: for i = 1 to M do 4: Simulate Hyperband with o using configurations re-sampled from the library on a 5: if the desired accuracy is pre-computed in the library then 6: Retrieve the value directly 7: else 8: Training normally, and store values to the library 9: end if
10: Record the performance trajectory {Pt}Tt=1 explored by HyperBand 11: Calculate the peak performance and CPE by Eq. 1 12: end for 13: Average peak and CPE values over M repetitions for the optimizer o 14: end for 15: Evaluate optimizers according to their peak and CPE values
F ADDITIONAL RESULTS
More detailed experimental results are reported in this section.
F.1 IMPACT OF η
Since there is an extra hyperparameter, the reduction factor η in Hyperband, we conduct an experiment with different values (η = 2, 3, 4, 5) to observe the potential impact of this additional
hyperparameter on our evaluation. Specifically, we use Hyperband to tune the learning rate for three optimizers, SGD, Adam, and Lookahead on CIFAR10, and results are presented in Table 8. As we can see, although the change of η may lead to different CPE values, the relative ranking among three optimizers remains unchanged. Besides, they all achieve comparable peak performance at the end of training. Considering the efficiency of Hyperband, we choose η = 3 based on the convention in Li et al. (2017) in all our experiments.
F.2 END-TO-END TRAINING
Table 9 shows peak performance for optimizers on each task. For GAN, we only conduct evaluation on optimizers tuning learning rate due to time limit, and present its CPE and peak performance in Table 10. There is also an end-to-end training curve for GAN on CIFAR10 in Figure 6. Figures for end-to-end training curves on the rest of tasks are shown in Figure 9 and 10.
Optimizer GAN-CIFAR10↓ Tune learning rate only:
SGD 113.25 (50.08) Adam 77.04 (25.14)
RAdam 73.85 (19.61) Yogi 76.80 (25.36) LARS 157.71 (73.82) LAMB 68.47 (25.55)
Lookahead 65.61 (20.40)
Besides the general performance profile, we present a probabilistic version described in Barreto et al. (2010) in Figure 7 to account for randomness. The probabilistic performance profile can be formulated as
ρ̄o(τ) = 1 |A| ∑ a P(r̄o,a ≤ τ), r̄o,a ∼ N ( µo,a ba , σ2o,a b2a ), (3)
where µo,a and σo,a are the mean and standard deviation of CPE of the optimizer o on the task a respetively, and ba is just the best expectation of CPE on a among all optimizers. It can be seen in Figure 7 that the probabilistic performance profiles shows a similar trend to Figure 4.
We also attach two end-to-end training trajectories on CIFAR10 with the error in Figure 8. Since it is hard to distinguish optimizers with the standard deviation added, we just instead report the std of CPE and peak performance in Table 3 and 9.
F.3 DATA ADDITION TRAINING
We provide training curves for data-addition training on full MRPC and PPI dataset in Figure 11. | 1. What are the strengths and weaknesses of the proposed evaluation procedure for optimizers in deep learning?
2. How does the paper characterize the performances of optimization algorithms, and are these presentations accurate?
3. What are some concerns regarding the randomness in the experiments, and how can uncertainty be better accounted for in the results?
4. Can the authors provide more details about the human experiment that supports the use of Hyperband, such as how it was conducted and whether an institutional review board approved it?
5. Are there any issues with the arguments made in the paper, and do they accurately reflect the goals of the evaluation procedure?
6. How does the proposed evaluation procedure better capture practitioners' interests than existing procedures, and what are its limitations? | Review | Review
This paper studies the topic of evaluating the performance of optimizers for neural networks. The paper makes the argument that existing evaluation procedures either over emphasize the finding of optimal hyperparameters or under-evaluate the performance of an algorithm by randomly sampling hyperparameters. This paper's primary objective is to propose an evaluation procedure that better aligns with a practitioner's goal than existing evaluation procedures. The proposed procedure evaluates optimization algorithms by using the hyperband hyperparameter optimization algorithm to tune hyperparameters and then score the algorithm using a weighted combination of validation performance scores over regularly sampled training intervals. The aggregate performances of algorithms are then ranked using performance profiles.
The paper's main contributions are an evaluation procedure and a new problem setup where an algorithm is evaluated over its long term use as new data is added. The presented evaluation procedure better captures practitioners' interest in that they tend to care about how much "effort" is required to find a near-optimal solution when allowed to tune the algorithm's parameters. The second evaluation procedure best captures the practitioners' interest by considering performance over retraining the model as new data is added. This evaluation setup is a good direction for evaluating optimizers. I think the paper accomplishes its goals and could be a useful evaluation procedure for the community.
Despite what this paper does well, I cannot recommend it for acceptance because there are issues with the paper's arguments and some gaps in the evaluation procedure.
I believe the paper mischaracterizes the performances being reported. The performance of the algorithms being reported is not the performance of an optimization algorithm but a meta-algorithm that combines the optimization algorithm and hyperband. Furthermore, the performance depends directly on the hyperband algorithm's hyperparameters, but these are not accounted for in the evaluation. I think it is ok to evaluate these meta algorithms, but their performance should not be presented as representing the underlying optimization algorithm. Another way to view these meta algorithms is that they are performing a global search using successive applications of local search algorithms (the optimizers). In this view, it is evident that the random hyperparameter search method is inferior to hyperband. However, it also becomes clear that one should use whatever global search method is best and not rely solely on hyperband.
Can the authors specify an exact research question this procedure is designed to answer? It should be evident directly from this question what the right way to evaluate the performance is.
Why is being similar in performance to a human's ability to tune hyperparameters desirable? Shouldn't it be better? How was the study using humans conducted? Did an institutional review board approve it? The primary support for using Hyperband is that it is similar to human performance. The details of this human experiment are needed to establish how and why they are similar.
The performance of all of these experiments are random. How is randomness accounted for in the results? How many trials are needed to ensure a statistically significant result? Quantifying uncertainty is needed at both the per task level and the aggregate measure, similar to that shown by Jordan et al. (2020). The author’s may also be interested in probabilistic performance profiles (Barreto et al., 2010). Quantification of uncertainty is a necessary component for a scientifically rigorous evaluation procedure.
Minor notes: The second paragraph in the intro says a "biased benchmark." What does it mean for a benchmark to be biased? Every benchmark is biased to favor one method or another.
Page 4: "Still, we argue that the random search procedure will overemphasize the importance of hyperparameters" - this depends on what question is being answered. For example, one could ask a question about an algorithm's performance without hyperparameter tuning. Random hyperparameter search is then a good route.
In the RL experiments, it is said that the reward is the metric used. This is incorrect. The metric for that environment is the return or cumulative reward.
Barreto, A. M., Bernardino, H. S., & Barbosa, H. J. (2010, July). Probabilistic performance profiles for the experimental evaluation of stochastic algorithms. In Proceedings of the 12th annual conference on Genetic and evolutionary computation (pp. 751-758).
Jordan, S. M., Chandak, Y., Cohen, D., Zhang, M., & Thomas, P. S. (2020). Evaluating the Performance of Reinforcement Learning Algorithms. In Proceedings of the 37th International Conference on Machine Learning.
update
After the discussions below I have changed my score from a 5 to a 6. |
ICLR | Title
On Dynamic Noise Influence in Differential Private Learning
Abstract
Protecting privacy in learning while maintaining the model performance has become increasingly critical in many applications that involve sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which noises gradients based on the Differential Privacy protocol. Recent studies show that dynamic privacy schedules of decreasing noise magnitudes can improve loss at the final iteration, and yet theoretical understandings of the effectiveness of such schedules and their connections to optimization algorithms remain limited. In this paper, we provide comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions. We first present a dynamic noise schedule minimizing the utility upper bound of PGD, and show how the noise influence from each optimization step collectively impacts utility of the final model. Our study also reveals how impacts from dynamic noise influence change when momentum is used. We empirically show the connection exists for general non-convex losses, and the influence is greatly impacted by the loss curvature.
1 INTRODUCTION
In the era of big data, privacy protection in machine learning systems is becoming a crucial topic as increasing personal data involved in training models (Dwork et al., 2020) and the presence of malicious attackers (Shokri et al., 2017; Fredrikson et al., 2015). In response to the growing demand, differential-private (DP) machine learning (Dwork et al., 2006) provides a computational framework for privacy protection and has been widely studied in various settings, including both convex and non-convex optimization (Wang et al., 2017; 2019; Jain et al., 2019).
One widely used procedure for privacy-preserving learning is the (Differentially) Private Gradient Descent (PGD) (Bassily et al., 2014; Abadi et al., 2016). A typical gradient descent procedure updates its model by the gradients of losses evaluated on the training data. When the data is sensitive, the gradients should be privatized to prevent excess privacy leakage. The PGD privatizes a gradient by adding controlled noise. As such, the models from PGD is expected to have a lower utility as compared to those from unprotected algorithms. In the cases where strict privacy control is exercised, or equivalently, a tight privacy budget, accumulating effects from highly-noised gradients may lead to unacceptable model performance. It is thus critical to design effective privatization procedures for PGD to maintain a great balance between utility and privacy.
Recent years witnessed a promising privatization direction that studies how to dynamically adjust the privacy-protecting noise during the learning process, i.e., dynamic privacy schedules, to boost utility under a specific privacy budget. One example is (Lee & Kifer, 2018), which reduced the noise magnitude when the loss does not decrease, due to the observation that the gradients become very small when approaching convergence, and a static noise scale will overwhelm these gradients. Another example is (Yu et al., 2019), which periodically decreased the magnitude following a predefined strategy, e.g., exponential decaying or step decaying. Both approaches confirmed the empirically advantages of decreasing noise magnitudes. Intuitively, the dynamic mechanism may coordinate with certain properties of the learning task, e.g., training data and loss surface. Yet there is no theoretical analysis available and two important questions remain unanswered: 1) What is the form of utility-preferred noise schedules? 2) When and to what extent such schedules improve utility?
To answer these questions, in this paper we develop a principled approach to construct dynamic schedules and quantify their utility bounds in different learning algorithms. Our contributions
are summarized as follows. 1) For the class of loss functions satisfying the Polyak-Lojasiewicz condition (Polyak, 1963), we show that a dynamic schedule improving the utility upper bound is shaped by the influence of per-iteration noise on the final loss. As the influence is tightly connected to the loss curvature, the advantage of using dynamic schedule depends on the loss function consequently. 2) Beyond gradient descent, our results show the gradient methods with momentum implicitly introduce a dynamic schedule and result in an improved utility bound. 3) We empirically validate our results on convex and non-convex (no need to satisfy the PL condition) loss functions. Our results suggest that the preferred dynamic schedule admits the exponentially decaying form, and works better when learning with high-curvature loss functions. Moreover, dynamic schedules give more utility under stricter privacy conditions (e.g., smaller sample size and less privacy budget).
2 RELATED WORK
Differentially Private Learning. Differential privacy (DP) characterizes the chance of an algorithm output (e.g., a learned model) to leak private information in its training data when the output distribution is known. Since outputs of many learning algorithms have undetermined distributions, the probability of their privacy leakages is hard to measure. A common approach to tackle this issue is to inject randomness with known probability distribution to privatize the learning procedures. Classical methods include output perturbation (Chaudhuri et al., 2011), objective perturbation (Chaudhuri et al., 2011) and gradient perturbation (Abadi et al., 2016; Bassily et al., 2014; Wu et al., 2017). Among these approaches, the Private Gradient Descent (PGD) has attracted extensive attention in recent years because it can be flexibly integrated with variants of gradient-based iteration methods, e.g., stochastic gradient descent, momentum methods (Qian, 1999), and Adam (Kingma & Ba, 2014), for both convex and non-convex problems.
Dynamic Policies for Privacy Protection. Wang et al. (2017) studied the empirical risk minimization using dynamic variation reduction of perturbed gradients. They showed that the utility upper bound can be achieved by gradient methods under uniform noise parameters. Instead of enhancing the gradients, Yu et al. (2019); Lee & Kifer (2018) showed the benefits of using a dynamic schedule of privacy parameters or equivalently noise scales. In addition, adaptive sensitivity control (Pichapati et al., 2019; Thakkar et al., 2019) and dynamic batch sizes (Feldman et al., 2020) are also demonstrated to improves the convergence.
Utility Upper Bounds. A utility upper bound is a critical metric for privacy schedules that characterizes the maximum utility that a schedule can deliver in theory. Wang et al. (2017) is the first to prove the utility bound under the PL condition. In this paper, we improve the upper bound by a more accurate estimation of the dynamic influence of step noise. Remarkably, by introducing a dynamic schedule, we further boost the sample-efficiency of the upper bound. With a similar intuition, Feldman et al. (2020) proposed to gradually increase the batch size, which reduces the dependence
on sample size accordingly. Recently, Zhou et al. proved the utility bound by using the momentum of gradients (Polyak, 1964; Kingma & Ba, 2014). Table 1 summarizes the upper bounds of methods studied in this paper (in the last block of rows) and results from state-of-the-art algorithms based on private gradients. Our work shows that considering the dynamic influence can lead to a tighter bound.
3 PRIVATE GRADIENT DESCENT
Notations. We consider a learning task by empirical risk minimization (ERM) f(θ) = 1 N ∑N n=1 f(θ;xn) on a private dataset {xn}Nn=1 and θ ∈ RD. The gradient methods are defined as
θt+1 = θt − ηt∇t, where ∇t = ∇f(θt) = 1N ∑ n∇f(θt;xn) denotes the non-private gradient at iteration t, ηt is the step learning rate. ∇(n)t = ∇f(θt;xn) denotes the gradient on a sample xn. Ic denotes the indicator function that returns 1 if the condition c holds, otherwise 0.
Assumptions. (1) In this paper, we assume f(θ) is continuous and differentiable. Many commonly used loss functions satisfy this assumption, e.g., the logistic function. (2) For a learning task, only finite amount of privacy cost is allowed where the maximum cost is called privacy budget and denoted as R. (3) Generally, we assume that loss functions f(θ;x) (sample-wise loss) are G-Lipschitz continuous and f(θ) (the empirical loss) is M -smooth. Definition 3.1 (G-Lipschitz continuity). A function f(·) is G-Lipschitz continuous if, for G > 0 and all x, y in the domain of f(·), f(·) satisfies ‖f(y)− f(x)‖ ≤ G‖y − x‖2. . Definition 3.2 (m-strongly convexity). A function f(·) is m-strongly convex if f(y) ≥ f(x) + ∇f(x)T (y − x) + m2 ‖y − x‖
2, for some m > 0 and all x, y in the domain of f(·). Definition 3.3 (M -smoothness). A function is M -smooth w.r.t. l2 norm if f(y) ≤ f(x) + ∇f(x)T (y − x) + M2 ‖y − x‖ 2, for some constant M > 0 and all x, y in the domain of f(·).
For a private algorithmM(d) which maps a dataset d to some output, the privacy cost is measured by the bound of the output difference on the adjacent datasets. Adjacent datasets are defined to be datasets that only differ in one sample. In this paper, we use the zero-Concentrated Differential Privacy (zCDP, see Definition 3.4) as the privacy measurement, because it provides the simplicity and possibility of adaptively composing privacy costs at each iteration. Various privacy metrics are discussed or reviewed in (Desfontaines & Pejó, 2019). A notable example is Moment Accoutant (MA) (Abadi et al., 2016), which adopts similar principle for composing privacy costs while is less tight for a smaller privacy budget. We note that alternative metrics can be adapted to our study without major impacts to the analysis. Definition 3.4 (ρ-zCDP (Bun & Steinke, 2016)). Let ρ > 0. A randomized algorithmM : Dn → R satisfies ρ-zCDP if, for all adjacent datasets d, d′ ∈ Dn, Dα(M(d)‖M(d′)) ≤ ρα, ∀α ∈ (1,∞) where Dα(·‖·) denotes the Rényi divergence (Rényi, 1961) of order α.
zCDP provides a linear composition of privacy costs of sub-route algorithms. When the input vector is privatized by injecting Gaussian noise of N (0, σ2t I) for the t-th iteration, the composed privacy cost is proportional to ∑ t ρt where the step cost is ρt =
1 σ2t . For simplicity, we absorb the constant coefficient into the (residual) privacy budget R. The formal theorems for the privacy cost computation of composition and Gaussian noising is included in Lemmas B.1 and B.2.
Generally, we define the Private Gradient Descent (PGD) method as iterations for t = 1 . . . T : θt+1 = θt − ηtφt = θt − ηt(∇t + σtGνt/N), (1)
where φt = gt is the gradient privatized from ∇t as shown in Algorithm 1, G/N is the bound of sensitivity of the gathered gradient excluding one sample gradient, and νt ∼ N (0, I) is a vector element-wisely subject to Gaussian distribution. We use σt to denote the noise scale at step t and use σ to collectively represents the schedule (σ1, . . . , σT ) if not confusing. When the Lipschitz constant is unknown, we can control the upper bound by scaling the gradient if it is over some constant. The scaling operation is often called clipping in literatures since it clips the gradient norm at a threshold. After the gradient is noised, we apply a modification, φ(·), to enhance its utility. In this paper, we consider two types of φ(·):
φ(mt, gt) = gt (GD), φ(mt, gt) = [β(1− βt−1)mt + (1− β)gt]/(1− βt) (Momentum)
We now show that the PGD using Algorithm 1 guarantees a privacy cost less than R:
Algorithm 1 Privatizing Gradients
Input: Raw gradients [∇(1)t , . . . ,∇ (n) t ] (n = N by default), vt, residual privacy budget Rt assuming the full budget is R and R1 = R.
1: ρt ← 1/σ2t ,∇t ← 1n ∑n i=1∇ (i) t . Budget request 2: if ρt < Rt then 3: Rt+1 ← Rt − ρt 4: gt ← ∇t +Gσtνt/N , νt ∼ N (0, I) . Privacy noise 5: mt+1 ← φ(mt, gt) or g1 if t = 1 6: return ηtmt+1, Rt+1 . Utility projection 7: else 8: Terminate
Theorem 3.1. Suppose f(θ;x) is G-Lipschitz continuous and the PGD algorithm with privatized gradients defined by Algorithm 1, stops at step T . The PGD algorithm outputs θT and satisfies ρ-zCDP where ρ ≤ 12R.
Note that Theorem 3.1 allows σt to be different throughout iterations. Next we present a principled approach for deriving dynamic schedules optimized for the final loss f(θT ).
4 DYNAMIC POLICIES BY MINIMIZING UTILITY UPPER BOUNDS
To characterize the utility of the PGD, we adopt the Expected Excess Risk (EER), which notion is widely used for analyzing the convergence of random algorithms, e.g., (Bassily et al., 2014; Wang et al., 2017). Due to the presence of the noise and the limitation of learning iterations, optimization using private gradients is expected to reach a point with a higher loss (i.e., excess risk) as compared to the optimal solution without private protection. Define θ∗ = arg minθ f(θ), after Algorithm 1 is iterated for T times in total, the EER gives the expected utility degradation:
EER = Eν [f(θT+1)]− f(θ∗).
Due to the variety of loss function and complexity of recursive iterations, an exact EER with noise is intractable for most functions. Instead, we study the worst case scenario, i.e., the upper bound of the EER, and our goal is to minimize the upper bound. For consistency, we call the upper bound of EER divided by the initial error as ERUB. Since the analytical form of EER is either intractable or complicated due to the recursive iterations of noise, studying the ERUB is a convenient and tractable alternative. The upper bound often has convenient functional forms which are (1) sufficiently simple, such that we can directly minimize it, and (2) closely related to the landscape of the objective depending on both the training dataset and the loss function. As a consequence, it is also used in previous PGD literature (Pichapati et al., 2019; Wang et al., 2017) for choosing proper parameters. Moreover, we let ERUBmin be the achievable optimal upper bound by a specific choice of parameters, e.g., the σ and T .
In this paper, we consider the class of loss functions satisfying the Polyak-Lojasiewicz (PL) condition which bounds losses by corresponding gradient norms. It is more general than the m-strongly convexity. If f is differentiable and M -smooth, then m-strongly convexity implies the PL condition.
Definition 4.1 (Polyak-Lojasiewicz condition (Polyak, 1963)). For f(θ), there exists µ > 0 and for every θ, ‖∇f(θ)‖2 ≥ 2µ(f(θ)− f(θ∗)).
The PL condition helps us to reveal how the influence of step noise propagates to the final excess error, i.e., EER. Though the assumption was also used previously in Wang et al. (2017); Zhou et al. (2020), neither did they discuss the propagated influence of noise. In the following sections, we will show how the influence can tighten the upper bound in gradient descent and its momentum variant.
4.1 GRADIENT DESCENT METHODS
For the brevity of our discussion, we first define the following constants: 1
α ,
2RMN2
DG2 (f(θ1)− f(θ∗)), κ ,
M µ , and γ , 1− 1 κ , (2)
which satisfy κ ≥ 1 and γ ∈ [0, 1). Note that κ is the condition number of f(·) if f(·) is strongly convex. κ tends to be large if the function is sensitive to small differences in inputs, and 1/α tends to be large if more samples are provided and with a less strict privacy budget. The convergence of PGD under the PL condition has been studied for private (Wang et al., 2017) and non-private (Karimi et al., 2016; Nesterov & Polyak, 2006; Reddi et al., 2016) ERM. Below we extend the bound in (Wang et al., 2017) by considering dynamic influence of noise and relax σt to be dynamic: Theorem 4.1. Let α, κ and γ be defined in Eq. (2), and ηt = 1M . Suppose f(θ;xi) is G-Lipschitz and f(θ) is M -smooth satisfying the Polyak-Lojasiewicz condition. For PGD, the following holds:
ERUB = γT +R ∑T
t=1 qtσ
2 t , where qt , γ T−tα. (3)
In Eq. (3), the step noise magnitude σ2t has an exponential influence, qt, on the EER. The dynamic characteristic of the influence is the key to prove a tighter bound. Plus, on the presence of the dynamic influence, it is natural to choose a dynamic σ2t . When relaxing qt to a static 1, a static σ 2 t was studied by Wang et al. They proved a bound which is nearly optimal except a ln2N factor. To get the optimal bound, in the following sections, we look for the σ and T that minimize the upper bound.
4.1.1 UNIFORM SCHEDULE
The uniform setting of σt has been previously studied in Wang et al. (2017). Here, we show that the bound can be further tightened by considering the dynamic influence of iterations and a proper T . Theorem 4.2. Suppose conditions in Theorem 4.1 are satisfied. When σ2t = T/R, let α, γ and κ be defined in Eq. (2) and let T be:
T = ⌈ O ( κ ln ( 1 + 1
κα
))⌉ . (4)
Meanwhile, if κ ≥ 11−c > 1, 1/α > 1/α0 for some constant c ∈ (0, 1) and α0 > 0, the corresponding bound is:
ERUBuniformmin = Θ
( κ2
κ+ 1/α ln
( 1 + 1
κα
)) . (5)
Sketch of proof. The key of proof is to find a proper T to minimize ERUB = E = γT + ∑T
t=1 γT−tαRσ2 = γT + αT
1− γT
1− γ = γT + ακ(1− γT )T
where we use σt = √ T/R. Vanishing its gradient is to solve γT ln γ+ακ(1−γT )−ακTγT ln γ = 0, which however is intractable. In (Wang et al., 2017), T is chosen to be O(ln(1/α)) and ERUB is relaxed as γT + ακT 2. The approximation results in a less tight bound as O(α(1 + κ ln2(1/α))) which explodes as κ→∞. We observe that for a super sharp loss function, i.e., a large κ, any minor perturbation may result in tremendously fluctuating loss values. In this case, not-stepping-forward will be a good choice. Thus, we choose T = 1ln(1/γ) ln ( 1 + ln(1/γ)α ) ≤ O ( κ ln ( 1 + 1κα )) which converges to 0 as κ → +∞. The full proof is deferred to the appendix.
4.1.2 DYNAMIC SCHEDULE
A dynamic schedule can improve the upper bound delivered by the uniform schedule. First, we observe that the excess risk in Eq. (3) is upper bounded by two terms: the first term characterizes the error due to the finite iterations of gradient descents; the second term, a weighted sum, comes from error propagated from noise at each iteration. Now we show for any {qt|qt > 0, t = 1, . . . , T} (not limited to the qt defined in Eq. (3)), there is a unique σt minimizing the weighted sum:
Lemma 4.1 (Dynamic schedule). Suppose σt satisfy ∑T t=1 σ
−2 = R. Given a positive sequence {qt}, the following equation holds:
min σ R ∑T t=1 qtσ 2 t = (∑T t=1 √ qt )2 , when σ2t = 1 R ∑T i=1 √ qi qt . (6)
Remarkably, the difference between the minimum and T ∑T t=1 qt (uniform σt) monotonically in-
creases by the variance of √ qt w.r.t. t.
We see that the dynamics in σt come from the non-uniform nature of the weight qt. Since qt presents the impact of the σt on the final error, we denote it as influence. Given the dynamic schedule in Eq. (6), it is of our interest to which extent the ERUB can be improved. First, we present Theorem 4.3 to show the optimal T and ERUB. Theorem 4.3. Suppose conditions in Theorem 4.1 are satisfied. Let α, κ and γ be defined in Eq. (2). When ηt = 1M , σt (based on Eqs. (3) and (6)) and the T minimizing ERUB are, i.e.,
σ2t = 1
R √ (1/γ)T − 1 1−√γ √ γt, T = ⌈( 2κ ln ( 1 + 1 κα ))⌉ . (7)
Meanwhile, when κ ≥ 1 and 1/α ≥ 1/α0 for some positive constant α0, the minimal bound is:
ERUBdynamicmin = Θ
( κ2
κ2 + 1/α
) . (8)
4.1.3 DISCUSSION
In Theorems 4.2 and 4.3, we present the tightest bounds for functions satisfying the PL condition, to our best knowledge. We further analyze the advantages of our bounds from two aspects: sample efficiency and robustness to sharp losses.
Sample efficiency. Since dataset cannot be infinitely large, it is critical to know how accurate the model can be trained privately with a limited number of samples. Formally, it is of interest to study when κ is fixed and N is large enough such that α 1. Then we have the upper bound in Eq. (5) as
ERUBuniformmin ≤ O ( κ2α ln ( 1
κα
)) ≤ Õ ( DG2 ln(N)
MN2R
) , (9)
where we ignore κ and other logarithmic constants with Õ as done in Wang et al. (2017). As a result, we get a bound very similar to (Wang et al., 2017), except that R is replaced by RMA = 2/ ln(1/δ) using Moment Accountant. In comparison, based on Lemma B.3, R = 2ρ = 2 + 4 ln(1/δ) + 4 √ ln(1/δ)( + ln(1/δ) if θT satisfies ρ-zCDP. Because ln(1/δ) > 1, it is easy to see R = RzCDP > RMA when ≤ 2 ln(1/δ). As compared to the one reported in (Wang et al., 2017), our bound saved a factor of lnN and thus is require less sample to achieve the same accuracy. Remarkably, the saving is due to the maintaining of the influence terms as shown in the proof of Theorem 4.2.
Using the dynamic schedule, we have ERUBdynamicmin ≤ O(α) = O ( DG2 MN2R ) , which saved another lnN factor in comparison to the one using the uniform schedule Eq. (9). As shown in Table 1, such advantage maintains when comparing with other baselines.
Robustness. Besides sample efficiency, we are also interested in robustness of the convergence under the presence of privacy noise. Because of the privacy noise, the convergence of private gradient descent will be unable to reach an ideal spot. Specifically, when the samples are noisy or have noisy labels, the loss curvature may be sharp. The sharpness also implies lower smoothness, i.e., a small M or has a very small PL parameter. Thus, gradients may change tremendously at some steps especially in the presence of privacy noise. As illustrated in the left figure, the highly-curved loss function (the green curve) results in mean higher final loss (the red dashed line) than the flatten curve (purple and blue lines). Such changes have more critical impact when only a less number of iterations can be executed due to the privacy constraint. Assume α is some constant while κ 1/α, we immediately get:
ERUBuniformmin = Θ
( κ ln ( 1 + 1
κα
)) = Θ ( 1
α
) ≤ O ( MN2R
DG2
) , ERUBdynamicmin = Θ(1).
Both are robust, but the dynamic schedule has a smaller factor since 1/α could be a large number. In addition, the factor implies that when more samples are used, the dynamic schedule is robuster.
4.2 GRADIENT DESCENT METHODS WITH MOMENTUM
Section 4.1 shows that the step noise has an exponentially increasing influence on the final loss, and therefore a decreasing noise magnitude improves the utility upper bound by a lnN factor. However, the proper schedule can be hard to find when the curvature information, e.g., κ, is absent. A parameterized method that less depends on the curvature information is preferred. On the other hand, long-term iterations will result in forgetting of the initial iterations, since accumulated noise overwhelmed the propagated information from the beginning. This effect will reduce the efficiency of the recursive learning frameworks.
Alternative to GD, the momentum method can mitigate the two issues. It was originally proposed to stabilize the gradient estimation (Polyak, 1964). In this section, we show that momentum (agnostic about the curvature) can flatten the dynamic influence and improve the utility upper bound. Previously, Pichapati et al. used the momentum as an estimation of gradient mean, without discussions of
convergence improvements. Zhou et al. gave a bound for the Adam with DP. However, the derivation is based on gradient norm, which results in a looser bound (see Table 1).
The momentum method stabilizes gradients by moving average history coordinate values and thus greatly reduces the variance. The φ(mt, gt) can be rewritten as:
mt+1 = φ(mt, gt) = vt+1
1− βt , vt+1 = βvt + (1− β)gt = (1− β) ∑t i=1 βt−igt, v1 = 0, (10)
where β ∈ [0, 1]. Note vt+1 is a biased estimation of the gradient expectation while mt+1 is unbiased. Theorem 4.4 (Convergence under PL condition). Suppose f(θ;xi) is G-Lipschitz, and f(θ) is M - smooth and satisfies the Polyak-Lojasiewicz condition. Assume β 6= γ and β ∈ (0, 1). Let ηt = η02M and η0 ≤ 8 (√ 1 + 64βγ(γ − β)−2(1− β)−3 + 1 )−1 . Then the following holds:
EER ≤ ( γT + 2Rη0α U3(σ, T )︸ ︷︷ ︸
noise varinace
) (f(θ1)− f(θ∗))− ζ
η0 2M ∑T t=1
γT−tE ‖vt+1‖2︸ ︷︷ ︸ momentum effect
(11)
where γ = 1− η0 κ , ζ = 1− 1 β(1− β)3 η20 − 1 4 η0 ≥ 0, (12)
U3 = ∑T
t=1 γT−t
(1− β)2 (1− βt)2 ∑t i=1 β2(t−i)σ2i . (13)
The upper bound includes three parts that influence the bound differently: (1) Convergence. The convergence term is mainly determined by η0 and κ. η0 should be in (0, κ) such that the upper bound can converge. A large η0 will be preferred to speed up convergence if it does not make the rest two terms worse. (2) Noise Variance. The second term compressed in U3 is the effect of the averaged noise, ∑t i=1 β
2(t−i)σ2i . One difference introduced by the momentum is the factor (1− β)/(1− βt) which is less than γt at the beginning and converges to a non-zero constant 1− β. Therefore, in U3, γT−t(1− β)/(1− βt) will be constantly less than γT meanwhile. Furthermore, when t > T̂ , the moving average ∑t i=1 β
2(t−i)σ2i smooths the influence of each σt. In Appendix D, we will see that the influence dynamics is less steep than that of GD. (3) Momentum Effect. The momentum effect term can improve the upper bound when η0 is small. For example, when β = 0.9 and γ = 0.99, then η0 ≤ 0.98/M which is a rational value. Following the analysis, when M is large which means the gradient norms will significantly fluctuate, the momentum term may take the lead. Adjusting the noise scale in this case may be less useful for improving utility.
To give an insight on the effect of dynamic schedule, we provide the following utility bounds.
Theorem 4.5 (Uniform schedule). Suppose the assumptions in Theorem 4.4 are satisfied. Let σ2t = T/R, and let:
T̂ = max t s.t. γt−1 ≥ 1− β 1− βt , T =
⌈ O ( κ η0 ln ( 1 + η0 κα ))⌉ .
Given some positive constant c and α0 > 0 with 1/α > 1/α0, the following inequality holds: ERUBmin ≤ O ( κ2
κ+ η0/α
[ IT≤T̂ + γ T̂−1 ln ( 1 + η0 κα ) IT>T̂ ]) .
Theorem 4.6 (Dynamic schedule). Suppose the assumptions in Theorem 4.4 are satisfied. Let α′ = 2η0αγ(1−γβ2) , β < γ and T̂ = max t s.t. γ t−1 ≥ 1−β1−βt . Use the following schedule:
σ2t = 1
R ∑T i=1 √ qi qt , T dyn = ⌈ O ( 2κ η0 ln ( 1 + η0 κα ))⌉ ,
where qt = c1γT+tIT≤T̂ + γ T̂−1c2γ T−tIT>T̂ for some positive constants c1 and c2. The following inequality holds:
ERUB ≤ γT + 2η0α ∑T
t=1 Rqtσ
2 t , ERUBmin ≤ O
( κα
κα+ η0
( κα
κα+ η0 IT≤T̂ + IT>T̂
)) .
Discussion. Theoretically, the dynamic schedule is more influential in vanilla gradient descent methods than the momentum variant. The result is mainly attributed to the averaging operation. The moving averaging, (1 − β) ∑t i=1 β
t−igi/(1 − βt), increase the influence of the under-presented initial steps and decrease the one of the over-sensitive last steps. Counterintuitively, the preferred dynamic schedule should be increasing since qt decreases when t ≤ T̂ .
4.3 PRIVATE STOCHASTIC GRADIENT DESCENT (NEW SECTION ON REBUTTAL)
Though PGD provides a guarantee both for utility and privacy, computing gradients of the whole dataset is impractical for large-scale problems. For this sake, studying the convergence of Private Stochastic Gradient Descent (PSGD) is meaningful. The Algorithm 1 can be easily extended to PSGD by subsampling n gradients where the batch size n N . According to (Yu et al., 2019), when privacy is measured by zCDP, there are two ways to account for the privacy cost of PSGD depending on the batch-sampling method: sub-sampling with or without replacement. In this paper, we focus on the random subsampling with replacement since it is widely used in deep learning in literature, e.g., (Abadi et al., 2016; Feldman et al., 2020). Accordingly, we replace N in the definition of α by n because the term is from the sensitivity of batch data (see Eq. (1)). For clarity, we assume that T is the number of iterations rather than epochs and that ∇̃t is mean stochastic gradient. When a batch of data are randomly sampled, the privacy cost of one iteration is cp2/σt where c is some constant, p = n/N is the sample rate, and 1/σ2t is the full-batch privacy cost. Details of the sub-sampling theorems are referred to the Theorem 3 of (Yu et al., 2019) and their empirical setting. Threfore, we can replace the privacy constraint ∑ t p 2/σ2t = R by ∑ t 1/σ 2 t = R
′ where R′ = R/p2 = N 2
n2 R. Remarkably, we omit the constant c because it will not affect the results regarding uniform or dynamic schedules. Notice N2R in the α is replaced by n2R′ = N2R. Thus, the form of α is not changed which provides convenience for the following derivations.
Now we study the utility bound of PSGD. To quantify the randomness of batch sampling, we define a random vector ξt with E[ξt] = 0 and E ‖ξt‖2 ≤ D such that ∇̃t ≤ ∇t + σgξt/n for some positive constant σg. Because ξt has similar property to the privacy noise νt, we can easily extend the PGD bounds to PSGD bounds by following theories. Theorem 4.7 (Utility bounds of PSGD). Let α, κ and γ be defined in Eq. (2), and ηt = 1M . Suppose f(θ;xi) is G-Lipschitz and f(θ) is M -smooth satisfying the Polyak-Lojasiewicz condition. For PSGD, when batch size satisfies n = max{N √ R, 1}, the following holds:
ERUB = γT + αgσ 2 g +R
′ ∑T
t=1 qtσ
2 t , where qt , γ T−tα, ∑ t 1/σ2t = R ′. (14)
where αg = D2µN2R(f(θ1)−f(θ∗)) .
Theorem 4.8 (PSGD with momentum). Let αg = D2µN2R(f(θ1)−f(θ∗)) . Suppose assumptions in Theorem 4.4 holds. When batch size satisfies n = max{N √ R, 1}, the U3(σ, T ) has to be replaced by Ũ3 = U g 3 + U3, with αR
′Ug3 ≤ αgσ2g (15) when PSGD is used.
As shown above, the utility bound of PSGD differs from the PGD merely by αgσ2g . Note αg = O( DN2R ) which fits the order of dynamic-schedule bounds. In addition, α and other variables are not changed. Hence, the conclusions w.r.t. the dynamic/uniform schedules maintain the same.
5 EXPERIMENTS
We empirically validate the properties of privacy schedules and their connections to learning algorithms. In this section, we briefly review the schedule behavior on quadratic losses under varying data sensitivity. Details of experimental setups and empirical results are available in Appendix D.
We first show the estimated influence of step noise qt (by retraining the private learning algorithms, ref. Appendix D) in Fig. 2 Left. We see the trends of influence are approximately in an exponential form of t. This obvervation motivates the use of exponential decay schedule in practice.
We then show the trends on the variance of influence (dashed lines with the right axis) and relative final losses (solid lines with the left axis) in the Middle Pane, where uni denotes the uniform schedule baseline, exp is an exponential schedule, dyn denotes the dynamic schedule minimizing the ERUB. The influences increases steeply when the data scale is large and therefore have a large variance. Meanwhile, dynamic schedules show improvements of the final loss when the variance is large. It reveals the connection between the influence and the dynamic advantage (refer to Lemma 4.1).
We lastly evaluate the impacts from momentum in the Right Pane, using a Deep Neural Network (DNN) with 2 layers and 100 hidden units. Because of time costs of training deep networks, we do not estimate the influence by retraining and then compute schedules. Instead, we grid-search for the schedule hyper-parameters to find the best one. We see that influence modeled by an exponential function (expinfl) has comparable performance of the influence modeled by linear combination of two reverse exponential functions (momexpinfl). The latter only shows advantage in the setting that data scale is 25 and the number of iteration is only 100, which is expected by our analysis Theorems 4.5 and 4.6. The inherent reason is that the dynamic schedule is more effective when T is larger.
6 CONCLUSION
When a privacy budget is provided for a certain learning task, one has to carefully schedule the privacy usage through the learning process. Uniformly scheduling the budget has been widely used in literature whereas increasing evidence suggests that dynamically schedules could empirically outperform the uniform one. This paper provided a principled analysis on the problem of optimal budget allocation and connected the advantages of dynamic schedules to both the loss structure and the learning behavior. We further validated our results through empirical studies.
A COMPARISON OF ALGORITHMS
We present Table 2 as an sumpplementary to the Table 1. Asymptotic upper bounds are achieved when sample size N approaches infinity. Both R and R ,δ with R ,δ < R are the privacy budgets of corresponding algorithms. Specifically, R ,δ = 2/ ln(1/δ) < R when the private algorithm is ( , δ)-DP with ≤ 2 ln(1/δ). PGD+Adv. Adv denotes the Advanced Composition method (Bassily et al., 2014). The method assumes that loss function is 1-strongly convex which implies the PL condition and optimized variable is in a convex set of diameter 1 w.r.t. l2 norm.
PGD+MA. MA denotes the Moment Accoutant (Abadi et al., 2016) which improve the composed privacy bound versus the Advanced Composition. The improvement on privacy bound lead to a enhanced utility bound, as a result.
PGD+Adv+BBImp. The dynamic method assumes that the loss is 1-strongly convex and data comes in stream with n ≤ N samples at each round. Their utility upper bound is achieved at some probability p with any positive c.
Adam+MA. The authors prove a convergence bound for the gradient norms which is extended to loss bound by using PL condition. They also presents the results for AdaGrad and GD which are basically of the same upper bound. Out theorems improve their bound by using the recursive derivation based on the PL condition, while their bound is a simple application of the condition on the gradient norm bound.
GD, Non-Private. This method does not inject noise into gradients but limit the number of iterations. With the bound, we can see that our utility bound are optimal with dynamic schedule.
GD+zCDP. We discussed the static and dynamic schedule for the gradient descent method where the dynamic noise influence is the key to tighten the bound.
Momemtum+zCDP. Different from the GD+zCDP, momentum methods will have two phase of utility upper bound. When T is small than some positive constant T̂ , the bound is as tight as the non-private one. Afterwards, the momentum has a bound degraded as the GD bound.
A.1 COMAPRISON OF GENERALIZATION BOUNDS
In addition to the empirical risk bounds in Table 2, in this section we study the true risk bounds, or generalization error bounds. True risk bounds characterize how well the learnt model can generalize to unseen samples subject to the inherent data distribution. By leveraging the generic learning-theory tools, we extend our results to the True Excess Risk (TER) for strongly convex functions as follows. For a model θ, its TER is defined as follows:
TER , Ex∼X [E[f(θ;x)]]−minθ̂ Ex∼X [f(θ̂;x)], where the second expectation is over the randomness of generating θ (e.g., the noise and stochastic batches). Assume a dataset d consist of N samples drawn i.i.d. from the distribution X . Two approaches could be used to extend the empirical bounds to the true excess risk: One is proposed by Shalev-Shwartz et al. (2009) where the true excess risk of PGD can be bounded in high probability. For example, Bassily et al. (2014) achieved a ln
2N N bound with N 2 iterations. Alternatively, instead of relying on the probabilistic bound, Bassily et al. (2019) used the uniform stability to give a tighter bound. Later, Feldman et al. (2020) improve the efficiency of gradient computation to achieve a similar bound. Both approaches introduce an additive term to the empirical bounds. In this section, we adopt both approaches to investigate the two types of resulting true risk bounds.
(1) True Risk in High Probability. First, we consider the high-probability true risk bound. Based on Section 5.4 from (Shalev-Shwartz et al., 2009) (restated in Theorem A.1), we can relate the EER to the TER. Theorem A.1. Let f(θ;x) be G-Lipschitz, and f(θ) be µ-strong convex loss function given any x ∈ X . With probability at least 1− p over the randomness of sampling the data set d, the following inequality holds:
TER(θ) ≤
√ 2G2
µN
√ f(θ)− f(θ∗) + 4G 2
pµN , (16)
where θ∗ = arg minθ f(θ).
To apply the Eq. (16), we need to extend EER, the expectation bound, to a high-probability bound. Following (Bassily et al., 2014) (Section D), we repeate the PGD with privacy budgetR/k for k times. Note, the output of all repetitions is still of R budget. When k = 1, let the EER of the algorithm be denoted as F (R). Then the EER of one execution of the k repetitions is F (R/k) where privacy is accounted by zCDP. When k = log2(1/p) for p ∈ [0, 1], by Markov’s inequality, there exists one repetition whose EER is F (R/ log2(1/p)) with probability at least 1 − 1/2k = 1 − p. Combined with Eq. (16), we use the bounds of uniform schedule and dynamic schedules in Section 4.1.3 to obtain:
TERuniform ≤ Õ
( G2
µN
(√ D ln(N) ln(1/p)
NR +
4
p
)) , (17)
TERdynamic ≤ Õ
( G2
µN
(√ D ln(1/p)
NR +
4
p
)) , (18)
where we again ignore the κ and other constants. Similarly, we can extend the momentum methods.
(2) True Risk by Unfirom Stability. Following Bassily et al. (2019), we use the uniform stability (defined in Definition A.1) to extend the empirical bounds. We restate the related definition and theorems as follows. Definition A.1 (Uniform stability). Let s > 0. A randomized algorithm M : DN → Θ is suniformly stable w.r.t. the loss function f if for any neighor datasets d and d′, we have: supx∈X E[f(M(d);x)− f(M(d′);x)] ≤ s, where the expectation is over the internal randomness ofM. Theorem A.2 (See, e.g., (Shalev-Shwartz & Ben-David, 2014)). Suppose M : DN → Θ is a s-uniformly stable algorithm w.r.t. the loss function f . LetD be any distribution from over data space and let d ∼ DN . The following holds true.
Ed∼DN [E[f(M(d);D)− f(M(d); d)]] ≤ s, where the second expectation is over the internal randomness ofM. f(M(d);D) and f(M(d); d) represent the true loss and the empirical loss, respecitvely. Theorem A.3 (Uniform stability of PGD from (Bassily et al., 2019)). Suppose η < 2/M for M smooth, G-Lipschitz f(θ;x). Then PGD is s-uniformly stable with s = G2Tη/N .
Combining Theorems A.2 and A.3, we obtain the following:
TER ≤ EER +G2 ηT N .
Because EER in this paper compresses a γT or similar exponential terms, unlike (Bassily et al., 2019), we cannot directly minimize the TER upper bound w.r.t. T and η in the presence of a polynomial form of γT and T . Therefore, we still use T = O(ln N
2R D ) and η for minimizing EER. Note that
G2 ηT N ≤ O( G
2 MN ln N2R D ) ≤ O
( G2
M ) where we assume N D and use lnN ≤ N . Because the term O ( G2/M ) is constant and independent from dimension, we follow (Bassily et al., 2019) to drop the term when comparing the bounds. After dropping the additive term, it is obvious to see that the advantage of dynamic schedules still maintains since TER ≤ EER. A similar extension can be derived for (Wang et al., 2017). We summarize the results and compare them to prior works in Table 3 where we include an additional method: Snowball Stochastic Gradient Descent (SSGD). SSGD dynamically schedule the batch size to achieve an optimal convergence rate in linear time.
Discussion. By using uniform stability, we successfully transfer the advantage of our dynamic schedules from empirical bounds to true risk bounds. The inherent reason is that our bounds only need lnN iterations to reach the preferred final loss. With uniform stability, the logarithmic T reduce the gap caused by transferring. Compared to the (Feldman et al., 2020; Bassily et al., 2019), our method has remarkably improved efficiency in T from N or N2 to ln(N). That implies fewer iterations are required for converging to the same generalization error.
B PRELIMINARIES
B.1 PRIVACY
Lemma B.1 (Composition & Post-processing). Let two mechanisms be M : Dn → Y and M ′ : Dn × Y → Z . Suppose M satisfies (ρ1, a)-zCDP and M ′(·, y) satisfies (ρ2, a)-zCDP for ∀y ∈ Y . Then, mechanism M ′′ : Dn → Z (defined by M ′′(x) = M ′(x,M(x))) satisfies (ρ1 + ρ2)-zCDP. Definition B.1 (Sensitivity). The sensitivity of a gradient query∇t to the dataset {xi}Ni=1 is
∆2(∇t) = max n ∥∥∥∥ 1N ∑Nj=1,j 6=n∇(j)t − 1N ∑Nj=1∇(j)t ∥∥∥∥ 2 = 1 N max n ∥∥∥∇(n)t ∥∥∥ 2
(19)
where∇(n)t denotes the gradient of the n-th sample. Lemma B.2 (Gaussian mechanism (Bun & Steinke, 2016)). Let f : Dn → Z have sensitivity ∆. Define a randomized algorithm M : Dn → Z by M(x)← f(x) +N (0,∆2σ2I). Then M satisfies 1 2σ2 -zCDP. Lemma B.3 ((Bun & Steinke, 2016)). If M is a mechanism satisfying ρ-zCDP, then M is (ρ + 2 √ ρ ln(1/δ), δ)-DP for any δ > 0. By solving ρ+ 2 √ ρ ln(1/δ) = , we can get ρ = + 2 ln(1/δ) + 2 √ ln(1/δ)( + ln(1/δ).
B.2 AUXILIARY LEMMAS
Lemma B.4. If maxn ‖xn‖2 = 1 and 1 N ∑ n xn = 0, then the gradient sensitivity of the squared loss will be
∆2(∇) = max i
1
N
√ 2f(θ;xi) ‖xi‖2 ≤ 1
2 (DM ‖θ‖2 + 1),
where ΘM is the set of all possible parameters θt generated by the learning algorithmM.
Proof. According to the definition of sensitivity in Eq. (19), we have
∆2(∇) = max i ∥∥∥∇(i)∥∥∥ 2 = max n 1 n ∥∥∥A(i)θ − xi∥∥∥ 2
where we use i denotes the index of sample in the dataset. Here, we assume it is constant 1. We may get ∥∥∥A(i)θ − xi∥∥∥2 2 = ∥∥xi(x>i θ − 1)∥∥22 = (x>i θ − 1)2 ‖xi‖22 = 2f(θ;xi) ‖xi‖22
where f(θ;xi) = 12 (x > i θ − 1)2. Thus,
∆2(∇) = max i
1
N
√ 2f(θ;xi) ‖xi‖2
Since ‖xn‖2 ≤ 1 and 1 N ∑N n=1 xn = 0,
f(θ) = 1
2N N∑ n=1 [(x>n θ) 2 − 2x>n θ + 1] ≤ 1 2N N∑ n=1 [(‖xn‖ ‖θ‖)2 + 1] ≤ 1 2 (DM ‖θ‖2 + 1)
Lemma B.5. Assume assumptions in Theorem 4.4 are satisfied. Given variables defined in Theorem 4.4, the following inequality holds true:
T∑ t=1 γT−t 2(1− β)ηt bt ∑t i=1 βt−i ‖∇t −∇i‖2 ≤ η30βγ 2M(1− β)3(γ − β)2 T−1∑ i=1 γT−i ‖vi+1‖2 .
Proof. We first handle the inner summation. By smoothness, the inequality ‖∇f(x)−∇f(y)‖ ≤ M ‖x− y‖ holds true. Thus,
∑t i=1 βt−i ‖∇t −∇i‖2 ≤M2 ∑t i=1 βt−i ‖θt − θi‖2
= M2 ∑t−1
k=0 βk ‖θt − θt−k‖2
= M2 ∑t−1 k=0 βk ∥∥∥∥∑t−1i=t−k ηivi+1/bi ∥∥∥∥2 ≤M2 ∑t−1 k=0 βk (∑t−1 j=t−k η2j /b 2 j )(∑t−1 i=t−k ‖vi+1‖2 )
where the last inequality is by Cauchy-Schwartz inequality. Because 1bt = 1 1−βt ≤ 1 1−β and ηt = η0 2M ,
∑t i=1 βt−i ‖∇t −∇i‖2 ≤ η20 4(1− β)2 ∑t−1 k=0 βkk ∑t−1 i=t−k ‖vi+1‖2
= η20 4(1− β)2 ∑t−1 k=0 βkk ∑t−1 i=1 ‖vi+1‖2 I(i ≥ t− k)
= η20 4(1− β)2 ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=0 βkkI(k ≥ t− i)
= η20 4(1− β)2 ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i βkk (20)
where I(·) is the indicating function which output 1 if the condition holds true, otherwise 0. Denote the left-hand-side of the conclusion as LHS. We plug Eq. (20) into LHS to get
LHS ≤ T∑ t=1 γT−t 1 bt η30 4M(1− β) ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i βkk
≤ η 3 0 4M(1− β)2 T∑ t=1 γT−t ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i βkk
where we relax the upper bound by 1bt = 1 1−βt ≤ 1 1−β . Using Lemma B.6 can directly lead to the conclusion:
LHS ≤ η 3 0βγ 2M(1− β)3(γ − β)2 T−1∑ i=1 γT−i ‖vi+1‖2 .
Lemma B.6. Given variables defined in Theorem 4.4, the following inequality holds true:
T∑ t=1 γT−t ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i kβk ≤ 2βγ (γ − β)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2 .
Proof. We first derive the summation:
U1(t, i) , ∑t−1
k=t−i βkk = ∑t−1 k=t−i ∑k j=1 βk
= ∑t−1
k=t−i ∑t−1 j=1 βkI(j ≤ k)
= ∑t−1
j=1 ∑t−1 k=max(t−i,j) βk
= ∑t−1
j=1
βmax(t−i,j) − βt
1− β
= 1
1− β
( (t− i)βt−i + β t−i+1 − βt
1− β − β − β t 1− β ) = 1
1− β
( (t− i)βt−i + β
t−i+1 − β 1− β
)
Now, we substitute U1(t, i) into LHS and replace t− i by j, i.e., t = j + i, to get
LHS = T∑ t=1 γT−t t−1∑ i=1 ‖vi+1‖2 1 1− β ( (t− i)βt−i + β t−i+1 − β 1− β )
= T−1∑ i=1 ‖vi+1‖2 T∑ t=i+1 γT−t 1 1− β ( (t− i)βt−i + β t−i+1 − β 1− β )
= T−1∑ i=1 ‖vi+1‖2 T−i∑ j=1 γT−(j+i) 1 1− β ( jβj + βj+1 − β 1− β )
= T−1∑ i=1 γT−i ‖vi+1‖2 T−i∑ j=1 γ−j 1 1− β ( jβj + βj+1 − β 1− β )
≤ 1 1− β T−1∑ i=1 γT−i ‖vi+1‖2 T−i∑ j=1
( j ( β
γ
)j + β
1− β
( β
γ
)j)
Let a = β/γ, we show
T−i∑ j=1 jaj = T−i∑ j=1 j∑ o=1 aj
= T−i∑ o=1 T−i∑ j=o aj
= T−i∑ o=1 ( ao − aT−i+1 1− a ) = a− aT−i+1
(1− a)2 − (T − i)a
T−i+1
1− a
≤ a (1− a)2 .
Thus,
LHS ≤ 1 1− β T−1∑ i=1
γT−i ‖vi+1‖2 a
(1− a)2 +
β
1− β T−i∑ j=1 aj ≤ 1
1− β T−1∑ i=1 γT−i ‖vi+1‖2 (
a
(1− a)2 +
β 1− β a 1− a
)
≤ a (1− a)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2
Because γ < 1, β < a = β/γ and
a
(1− a)2 +
β 1− β a 1− a ≤ 2a (1− a)2 .
Therefore,
LHS ≤ 2a (1− a)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2 = 2βγ (γ − β)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2
Lemma B.7. Suppose γ ∈ (0, 1) and β ∈ (0, 1). Define
T̂ = max t s.t. γt−1 ≥ 1− β 1− βt .
If t ≤ T̂ , 1−β1−βt ≤ γ t−1 for t = 1, . . . , T . If t > T̂ , 1−β1−βt < γ T̂−1.
Proof. Define h(t) = γt−1(1− βt) whose derivatives are
h′(t) = γt−1(1− βt) ln γ + γt−1(−βt) lnβ = γt−1 [ ln γ − βt(ln γ + lnβ) ] = γt−1 [ 1− βt(1 + logγ β) ] ln γ.
Simple calculation shows 1− βt(1 + logγ β) ∣∣ t=0
= − logγ β < 0 and limt→+∞ 1 − βt(1 + logγ β) = 1. When t = − logβ(1 + logγ β) denoted as t0, 1 − βt(1 + logγ β) = 0. Because 1 − βt(1 + logγ β) is monotonically increasing by t and γt−1 ln γ is negative, h′(t) ≥ 0 if t ≤ t0. Otherwise, h′(t) < 0. Therefore, h(t) is a concave function. Because h(1) = 1− β and h(T̂ ) = γT̂−1(1 − βT̂ ) ≥ 1 − β > 0, h(t) ≥ 1 − β for t = 1, . . . , T̂ . Thus, for all t ∈ [1, T̂ ], we have 1−β1−βt ≤ γ t−1.
For t > T̂ , because 1−β1−βt monotonically increases by t, we have 1−β 1−βt < 1−β 1−βT̂ ≤ γT̂−1.
C PROOFS
Proof of Theorem 3.1. Because all sample gradients are G-Lipschitz continuous, the sensitivity of the averaged gradient is upper bounded by G/N. Based on Lemma B.2, the privacy cost of gₜ is 1/(2σₜ²).¹
Here, we make the output of each iteration a tuple (θ_{t+1}, v_{t+1}). For the 1st iteration, because θ₁ is randomly initialized and does not embrace private information, the mapping
[v₂, θ₂] = [g₁, θ₁ − η₁g₁]
is ρ̂₁-zCDP where ρ̂₁ = 1/(2σ₁²).
¹For brevity, when we say the privacy cost of some value, e.g., a gradient, we actually refer to the cost of the mechanism that outputs the value.
Suppose the output of the t-th iteration, (θₜ, vₜ), is ρ̂ₜ-zCDP. At each iteration, we have the mapping (θₜ, vₜ) → (θ_{t+1}, v_{t+1}) defined as
[v_{t+1}, θ_{t+1}] = [φ(vₜ, gₜ), θₜ − ηₜφ(vₜ, gₜ)].
Thus, the output tuple (θ_{t+1}, v_{t+1}) is (ρ̂ₜ + 1/(2σₜ²))-zCDP by Lemma B.1.
The recursion implies that (θ_{T+1}, v_{T+1}) has privacy cost
ρ̂_{T+1} = ρ̂_T + 1/(2σ_T²) = ⋯ = ∑_{t=1}^T 1/(2σₜ²) = (1/2) ∑_{t=1}^T ρₜ ≤ (1/2)(R − R_T) ≤ (1/2) R.
Let ρ = ρ̂_{T+1}. Then we get the conclusion.
C.1 GRADIENT DESCENTS
Proof of Theorem 4.1. With the definition of smoothness in Definition 3.3 and Eq. (1), we have
f(θ_{t+1}) − f(θₜ) ≤ −ηₜ∇ₜᵀ(∇ₜ + Gσₜνₜ/N) + (M/2)ηₜ²‖∇ₜ + Gσₜνₜ/N‖²
= −ηₜ(1 − (M/2)ηₜ)‖∇ₜ‖² − (1 − Mηₜ)ηₜ∇ₜᵀGσₜνₜ/N + (M/2)ηₜ²‖Gσₜνₜ/N‖²
≤ −2µηₜ(1 − (M/2)ηₜ)(f(θₜ) − f(θ*)) − (1 − Mηₜ)ηₜ∇ₜᵀGσₜνₜ/N + (M/2)ηₜ²‖Gσₜνₜ/N‖²,
where the last inequality is due to the Polyak-Lojasiewicz condition. Taking expectation on both sides, we obtain
E[f(θ_{t+1})] − E[f(θₜ)] ≤ −2µηₜ(1 − (M/2)ηₜ)(E[f(θₜ)] − f(θ*)) + (M/2)(ηₜGσₜ/N)² E‖νₜ‖²,
which can be reformulated by subtracting f(θ*) on both sides and rearranging as
E[f(θ_{t+1})] − f(θ*) ≤ (1 − 2µηₜ(1 − (M/2)ηₜ))(E[f(θₜ)] − f(θ*)) + (M/2)(ηₜGσₜ/N)² D.
Recursively applying the inequality, we get
E[f(θ_{T+1})] − f(θ*) ≤ ∏_{t=1}^T (1 − 2µηₜ(1 − (M/2)ηₜ)) (E[f(θ₁)] − f(θ*)) + (MD/2) ∑_{t=1}^T ∏_{i=t+1}^T (1 − 2µηᵢ(1 − (M/2)ηᵢ)) (ηₜGσₜ/N)².
Let ηₜ ≡ 1/M. Then the above inequality simplifies to
E[f(θ_{T+1})] − f(θ*) ≤ γ^T(E[f(θ₁)] − f(θ*)) + R ∑_{t=1}^T γ^{T−t} (MD/(2R))(ηₜG/N)² σₜ²
= γ^T(E[f(θ₁)] − f(θ*)) + R ∑_{t=1}^T γ^{T−t} α σₜ² (E[f(θ₁)] − f(θ*))
= (γ^T + R ∑_{t=1}^T qₜσₜ²)(f(θ₁) − f(θ*)).
Proof of Theorem 4.2. The minimizer of the upper bound in Eq. (3) can be written as
T* = arg min_T γ^T + ακ(1 − γ^T)T,   (21)
where we substitute σ² = T/R. Setting the gradient of this objective to zero involves an equation of the form Tγ^T = c for some real constant c, whose solution is given by the Lambert W function W_k(c) for some integer k and has no simple analytical form. Instead, because γ^T > 0, we can minimize a surrogate upper bound:
T* = arg min_T γ^T + ακT = (1/ln(1/γ)) ln(ln(1/γ)/(κα)),   if κα + ln γ < 0,   (22)
where we use the surrogate upper bound and utilize γ = 1 − 1/κ. However, the minimizer of the surrogate objective is not optimal for the original objective. When κ is large, the term −ακγ^T T cannot be neglected as we expect. On the other hand, T explodes if κ → ∞ while 1/γ → 1⁺. This tendency is counterintuitive since a small T should be taken for sharp losses. To fix the issue, we change the form of T* to
T* = (1/ln(1/γ)) ln(1 + ln(1/γ)/α),   (23)
which gradually converges to 0 as κ → ∞. Now we substitute Eq. (23) into the original objective function, Eq. (21), to get
ERUB_uniform = (1/(1 + ln(1/γ)/α)) [1 + κ ln(1 + ln(1/γ)/α)].   (24)
Notice that
ln(1/γ) = ln(κ/(κ−1)) = ln(1 + 1/(κ−1)) ≤ 1/(κ−1) ≤ 1/(cκ)
because κ ≥ 1/(1−c) > 1 for some constant c ∈ (0, 1). In addition,
ln(1/γ) = −ln(1 − 1/κ) ≥ 1/κ.
Now, we can upper bound Eq. (24) as
ERUB_uniform ≤ (κ/(κ + 1/α)) [1 + κ ln(1 + 1/(cκα))] ≤ c₁ (κ/(κ + 1/α)) κ [ln(1 + 1/(κα)) + ln(1/c)] ≤ c₁c₂ (κ²/(κ + 1/α)) ln(1 + 1/(κα))
for some constants c₁, c₂ and large enough 1/α. Also, we can get the lower bound
ERUB_uniform ≥ (cκ/(cκ + 1/α)) [1 + κ ln(1 + 1/(κα))] ≥ c (κ²/(κ + 1/α)) ln(1 + 1/(κα)),
where we use the condition c ∈ (0, 1). Thus, ERUB_uniform = Θ((κ²/(κ + 1/α)) ln(1 + 1/(κα))).
Proof of Lemma 4.1. By ∑_{t=1}^T σₜ^{−2} = R and the Cauchy-Schwarz inequality, we can derive the achievable lower bound
R ∑ₜ qₜσₜ² = ∑ₜ (1/σₜ²) ∑ₜ qₜσₜ² ≥ (∑_{t=1}^T √qₜ)²,
where the inequality becomes an equality if and only if s/σₜ² = qₜσₜ², i.e., σₜ = (s/qₜ)^{1/4}, for some positive constant s. The constraint ∑_{t=1}^T σₜ^{−2} = R immediately suggests √s = (1/R) ∑_{t=1}^T √qₜ. Thus, we get the stated σₜ.
Notice
T ∑_{t=1}^T qₜ − (∑_{t=1}^T √qₜ)² = T² · (1/T) ∑_{t=1}^T (√qₜ − (1/T) ∑_{i=1}^T √qᵢ)² = T² Var[√qₜ],   (25)
where the variance is w.r.t. t.
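To make the optimality claim above easy to check, here is a small numeric sketch in Python (our own illustration; the values of T, R and γ are arbitrary assumptions) verifying that the schedule of Eq. (6) attains (∑ₜ √qₜ)² and that the gap to the uniform schedule equals T² Var[√qₜ] as in Eq. (25).

import numpy as np

# Illustrative check of Lemma 4.1; the influences q_t and budget R are made-up values.
T, R, gamma = 10, 4.0, 0.9
q = gamma ** (T - np.arange(1, T + 1))          # q_t = gamma^{T-t} (alpha only rescales, so dropped)

# Optimal schedule: sigma_t^2 = (1/R) * sum_i sqrt(q_i) / sqrt(q_t)
sigma2_dyn = np.sqrt(q).sum() / (R * np.sqrt(q))
assert np.isclose((1.0 / sigma2_dyn).sum(), R)   # schedule spends exactly the budget

weighted_dyn = R * np.sum(q * sigma2_dyn)        # R * sum_t q_t sigma_t^2
assert np.isclose(weighted_dyn, np.sqrt(q).sum() ** 2)

# Uniform schedule sigma_t^2 = T/R gives T * sum_t q_t; the gap is T^2 * Var[sqrt(q_t)].
weighted_uni = T * q.sum()
assert np.isclose(weighted_uni - weighted_dyn, T ** 2 * np.sqrt(q).var())
print(weighted_dyn, weighted_uni)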
Proof of Theorem 4.3. The upper bound in Eq. (3) can be written as
ERUB_dyn = γ^T + ∑_{t=1}^T γ^{T−t} αRσₜ² = γ^T + α (∑_{t=1}^T √(γ^{T−t}))² = γ^T + α ((1 − γ^{T/2})/(1 − √γ))²,
where we make use of Lemma 4.1. Then, the minimizer of the ERUB is
T* = arg min_T γ^T + α ((1 − γ^{T/2})/(1 − √γ))² = 2 log_γ (α/(α + (1 − √γ)²)).   (26)
We can substitute Eq. (26) into ERUB_dyn to get
ERUB_dyn^min = (α/(α + (1−√γ)²))² + α (1/(1−√γ))² (1 − α/(α + (1−√γ)²))²
= (α(1−√γ)^{−2}/(α(1−√γ)^{−2} + 1))² + α(1−√γ)^{−2}/(α(1−√γ)^{−2} + 1)²
= α(1−√γ)^{−2}/(α(1−√γ)^{−2} + 1).
Notice that (1−√γ)^{−2} = κ² + κ² − κ + 2κ√(κ(κ−1)) = κ(2κ − 1 + 2√(κ(κ−1))), which is bounded by
κ(2κ − 1 + 2√(κ(κ−1))) ≤ 4κ²,
κ(2κ − 1 + 2√(κ(κ−1))) ≥ κ(2κ − (3κ−2) + 2√((κ−1)(κ−1))) = κ(−κ + 2 + 2κ − 2) = κ².
Therefore, κ ≤ (1−√γ)^{−1} ≤ 2κ, with which we can derive
ERUB_dyn^min ≤ 4κ²α/(κ²α + 1),
ERUB_dyn^min ≥ κ²α/(4κ²α + 1) ≥ (1/4) · κ²α/(κ²α + 1).
Thus, ERUB_dyn^min = Θ(κ²α/(κ²α + 1)).
C.2 GRADIENT DESCENTS WITH MOMENTUM
Proof of Theorem 4.4. Without loss of generality, we absorb Cσₜ/N into the variance of νₜ such that νₜ ∼ N(0, (C²σₜ²/N²) I) and gₜ ← ∇ₜ + νₜ. Define bₜ = 1 − β^t.
By smoothness and Eq. (1), we have
f(θ_{t+1}) − f(θₜ) ≤ ∇ₜᵀ(θ_{t+1} − θₜ) + (M/2)‖θ_{t+1} − θₜ‖²
= −(ηₜ/bₜ²) bₜ∇ₜᵀv_{t+1} + (M/2)(ηₜ²/bₜ²)‖v_{t+1}‖²
= (ηₜ/bₜ²)(‖bₜ∇ₜ − v_{t+1}‖² − ‖bₜ∇ₜ‖² − ‖v_{t+1}‖²) + (M/2)(ηₜ²/bₜ²)‖v_{t+1}‖²
= (ηₜ/bₜ²) U₁(t) − ηₜ‖∇ₜ‖² − (ηₜ/bₜ²)(1 − (M/2)ηₜ)‖v_{t+1}‖²,   (27)
where U₁(t) ≜ ‖bₜ∇ₜ − v_{t+1}‖² is the only non-negative term; it describes the difference between the current gradient and the moving average. We can expand v_{t+1} to get an upper bound:
U₁(t) = ‖bₜ∇ₜ − v_{t+1}‖² = ‖(1−β) ∑_{i=1}^t β^{t−i}∇ₜ − (1−β) ∑_{i=1}^t β^{t−i}gᵢ‖²
= (1−β)² ‖∑_{i=1}^t β^{t−i}(∇ₜ − gᵢ)‖²
= (1−β)² ‖∑_{i=1}^t β^{t−i}(∇ₜ − ∇ᵢ) + ∑_{i=1}^t β^{t−i}(∇ᵢ − gᵢ)‖²
≤ 2(1−β)² [‖∑_{i=1}^t β^{t−i}(∇ₜ − ∇ᵢ)‖² + ‖∑_{i=1}^t β^{t−i}(∇ᵢ − gᵢ)‖²]
≤ 2(1−β) [bₜ ∑_{i=1}^t β^{t−i}‖∇ₜ − ∇ᵢ‖²  (≜ bₜ U₂(t), gradient variance) + (1−β) ‖∑_{i=1}^t β^{t−i}νᵢ‖²  (noise variance)],
where we use ‖x + y‖² ≤ (‖x‖ + ‖y‖)² ≤ 2(‖x‖² + ‖y‖²); the last inequality follows from the Cauchy-Schwarz inequality applied coordinate-wise.
We plug U₁(t) into Eq. (27) and use the PL condition to get
f(θ_{t+1}) − f(θₜ) ≤ (ηₜ/bₜ²) U₁(t) − ηₜ‖∇ₜ‖² − (ηₜ/bₜ²)(1 − (M/2)ηₜ)‖v_{t+1}‖²
≤ −ηₜ‖∇ₜ‖² + (ηₜ/bₜ²) · 2(1−β)[bₜU₂(t) + (1−β)‖∑_{i=1}^t β^{t−i}νᵢ‖²] − (ηₜ/bₜ²)(1 − (M/2)ηₜ)‖v_{t+1}‖²
≤ −2µηₜ(f(θₜ) − f(θ*)) + (2(1−β)ηₜ/bₜ) U₂(t) + (2(1−β)²ηₜ/bₜ²)‖∑_{i=1}^t β^{t−i}νᵢ‖² − (ηₜ/bₜ²)(1 − (M/2)ηₜ)‖v_{t+1}‖².
Rearranging terms and taking expectation yields
E[f(θ_{t+1})] − f(θ*) ≤ γ(E[f(θₜ)] − f(θ*)) + (2(1−β)²ηₜ/bₜ²) ∑_{i=1}^t β^{2(t−i)} C²Dσᵢ²/N² + (2(1−β)ηₜ/bₜ) E[U₂(t)] − (ηₜ/bₜ²)(1 − (M/2)ηₜ) E‖v_{t+1}‖²,
where γ = 1 − η₀/κ = 1 − 2µηₜ. The recursive inequality implies
E[f(θ_{T+1})] − f(θ*) ≤ γ^T(f(θ₁) − f(θ*)) + ∑_{t=1}^T γ^{T−t} (2(1−β)²ηₜ/bₜ²) ∑_{i=1}^t β^{2(t−i)} C²Dσᵢ²/N²
+ ∑_{t=1}^T γ^{T−t} (2(1−β)ηₜ/bₜ) E[U₂(t)] − ∑_{t=1}^T γ^{T−t} (ηₜ/bₜ²)(1 − (M/2)ηₜ) E‖v_{t+1}‖²
= (γ^T + 2η₀αR ∑_{t=1}^T γ^{T−t} ((1−β)²/bₜ²) ∑_{i=1}^t β^{2(t−i)}σᵢ²  (≜ U₃)) (f(θ₁) − f(θ*))
+ ∑_{t=1}^T γ^{T−t} (2(1−β)ηₜ/bₜ) E[U₂(t)] − ∑_{t=1}^T γ^{T−t} (ηₜ/bₜ²)(1 − (M/2)ηₜ) E‖v_{t+1}‖²  (≜ U₄(t)),
where we utilize α = (DC²/(2MN²R)) · 1/(f(θ₁) − f(θ*)) and ηₜ = η₀/(2M).
By Lemma B.5, we have
∑_{t=1}^T γ^{T−t} (2(1−β)ηₜ/bₜ) U₂(t) ≤ (η₀³βγ/(2M(1−β)³(γ−β)²)) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖².
Thus, by 1/bₜ ≥ 1,
U₄(t) ≤ (η₀³βγ/(2M(1−β)³(γ−β)²)) ∑_{i=1}^{T−1} γ^{T−i} E‖v_{i+1}‖² − (η₀/(2M))(1 − η₀/4) ∑_{t=1}^T γ^{T−t} E‖v_{t+ | 1. What is the focus of the paper regarding private gradient descent?
2. What are the strengths of the paper, particularly in engaging with prior work and technical analysis?
3. What are the weaknesses of the paper, especially regarding the excess risk upper bound and log(N) improvement?
4. Do you have any concerns or questions about the paper's analysis and its relation to previous works? | Review | Review
Strong Points:
The problem studied is interesting and important for the privacy community - private gradient descent is an important mechanism for private convex optimization, and one of the only mechanisms for non-convex optimization.
The authors do a good job engaging with prior work, identifying gaps in related work (i.e., theoretical justification for dynamic noise schedules) and putting their contributions in that context.
The analysis seems technically correct and pretty strong, although I did not check this closely.
Weak Points:
Excess risk upper bound is probably quite loose, so improving that may not necessarily improve actual excess risk.
log(N) improvement may be an artifact of the analysis, and not an inherent advantage of dynamic schedules. Needs more clarification (see below).
Other Notes:
It’s not clear to me where the log(N) term in the excess risk upper bound comes from for the uniform noise schedule. In other works on this topic, it is not there (e.g., https://arxiv.org/pdf/2005.04763.pdf) . Since we are dealing with upper bounds, I wonder if it is just an artifact of the analysis of Wang et al., rather than an improvement offered by dynamic scheduling. Are there other ways to remove this dependence other than dynamic noise? |
ICLR | Title
On Dynamic Noise Influence in Differential Private Learning
Abstract
Protecting privacy in learning while maintaining the model performance has become increasingly critical in many applications that involve sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which noises gradients based on the Differential Privacy protocol. Recent studies show that dynamic privacy schedules of decreasing noise magnitudes can improve loss at the final iteration, and yet theoretical understandings of the effectiveness of such schedules and their connections to optimization algorithms remain limited. In this paper, we provide comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions. We first present a dynamic noise schedule minimizing the utility upper bound of PGD, and show how the noise influence from each optimization step collectively impacts utility of the final model. Our study also reveals how impacts from dynamic noise influence change when momentum is used. We empirically show the connection exists for general non-convex losses, and the influence is greatly impacted by the loss curvature.
1 INTRODUCTION
In the era of big data, privacy protection in machine learning systems is becoming a crucial topic as increasing personal data involved in training models (Dwork et al., 2020) and the presence of malicious attackers (Shokri et al., 2017; Fredrikson et al., 2015). In response to the growing demand, differential-private (DP) machine learning (Dwork et al., 2006) provides a computational framework for privacy protection and has been widely studied in various settings, including both convex and non-convex optimization (Wang et al., 2017; 2019; Jain et al., 2019).
One widely used procedure for privacy-preserving learning is the (Differentially) Private Gradient Descent (PGD) (Bassily et al., 2014; Abadi et al., 2016). A typical gradient descent procedure updates its model by the gradients of losses evaluated on the training data. When the data is sensitive, the gradients should be privatized to prevent excess privacy leakage. The PGD privatizes a gradient by adding controlled noise. As such, the models from PGD is expected to have a lower utility as compared to those from unprotected algorithms. In the cases where strict privacy control is exercised, or equivalently, a tight privacy budget, accumulating effects from highly-noised gradients may lead to unacceptable model performance. It is thus critical to design effective privatization procedures for PGD to maintain a great balance between utility and privacy.
Recent years witnessed a promising privatization direction that studies how to dynamically adjust the privacy-protecting noise during the learning process, i.e., dynamic privacy schedules, to boost utility under a specific privacy budget. One example is (Lee & Kifer, 2018), which reduced the noise magnitude when the loss does not decrease, due to the observation that the gradients become very small when approaching convergence, and a static noise scale will overwhelm these gradients. Another example is (Yu et al., 2019), which periodically decreased the magnitude following a predefined strategy, e.g., exponential decaying or step decaying. Both approaches confirmed the empirically advantages of decreasing noise magnitudes. Intuitively, the dynamic mechanism may coordinate with certain properties of the learning task, e.g., training data and loss surface. Yet there is no theoretical analysis available and two important questions remain unanswered: 1) What is the form of utility-preferred noise schedules? 2) When and to what extent such schedules improve utility?
To answer these questions, in this paper we develop a principled approach to construct dynamic schedules and quantify their utility bounds in different learning algorithms. Our contributions
are summarized as follows. 1) For the class of loss functions satisfying the Polyak-Lojasiewicz condition (Polyak, 1963), we show that a dynamic schedule improving the utility upper bound is shaped by the influence of per-iteration noise on the final loss. As the influence is tightly connected to the loss curvature, the advantage of using dynamic schedule depends on the loss function consequently. 2) Beyond gradient descent, our results show the gradient methods with momentum implicitly introduce a dynamic schedule and result in an improved utility bound. 3) We empirically validate our results on convex and non-convex (no need to satisfy the PL condition) loss functions. Our results suggest that the preferred dynamic schedule admits the exponentially decaying form, and works better when learning with high-curvature loss functions. Moreover, dynamic schedules give more utility under stricter privacy conditions (e.g., smaller sample size and less privacy budget).
2 RELATED WORK
Differentially Private Learning. Differential privacy (DP) characterizes the chance of an algorithm output (e.g., a learned model) to leak private information in its training data when the output distribution is known. Since outputs of many learning algorithms have undetermined distributions, the probability of their privacy leakages is hard to measure. A common approach to tackle this issue is to inject randomness with known probability distribution to privatize the learning procedures. Classical methods include output perturbation (Chaudhuri et al., 2011), objective perturbation (Chaudhuri et al., 2011) and gradient perturbation (Abadi et al., 2016; Bassily et al., 2014; Wu et al., 2017). Among these approaches, the Private Gradient Descent (PGD) has attracted extensive attention in recent years because it can be flexibly integrated with variants of gradient-based iteration methods, e.g., stochastic gradient descent, momentum methods (Qian, 1999), and Adam (Kingma & Ba, 2014), for both convex and non-convex problems.
Dynamic Policies for Privacy Protection. Wang et al. (2017) studied empirical risk minimization using dynamic variance reduction of perturbed gradients. They showed that the utility upper bound can be achieved by gradient methods under uniform noise parameters. Instead of enhancing the gradients, Yu et al. (2019); Lee & Kifer (2018) showed the benefits of using a dynamic schedule of privacy parameters or, equivalently, noise scales. In addition, adaptive sensitivity control (Pichapati et al., 2019; Thakkar et al., 2019) and dynamic batch sizes (Feldman et al., 2020) are also demonstrated to improve convergence.
Utility Upper Bounds. A utility upper bound is a critical metric for privacy schedules that characterizes the maximum utility that a schedule can deliver in theory. Wang et al. (2017) is the first to prove the utility bound under the PL condition. In this paper, we improve the upper bound by a more accurate estimation of the dynamic influence of step noise. Remarkably, by introducing a dynamic schedule, we further boost the sample-efficiency of the upper bound. With a similar intuition, Feldman et al. (2020) proposed to gradually increase the batch size, which reduces the dependence
on sample size accordingly. Recently, Zhou et al. proved the utility bound by using the momentum of gradients (Polyak, 1964; Kingma & Ba, 2014). Table 1 summarizes the upper bounds of methods studied in this paper (in the last block of rows) and results from state-of-the-art algorithms based on private gradients. Our work shows that considering the dynamic influence can lead to a tighter bound.
3 PRIVATE GRADIENT DESCENT
Notations. We consider a learning task by empirical risk minimization (ERM) f(θ) = (1/N) ∑_{n=1}^N f(θ; xₙ) on a private dataset {xₙ}_{n=1}^N with θ ∈ R^D. Gradient methods are defined as
θ_{t+1} = θₜ − ηₜ∇ₜ, where ∇ₜ = ∇f(θₜ) = (1/N) ∑ₙ ∇f(θₜ; xₙ) denotes the non-private gradient at iteration t and ηₜ is the step learning rate. ∇ₜ^{(n)} = ∇f(θₜ; xₙ) denotes the gradient on a sample xₙ. I_c denotes the indicator function that returns 1 if the condition c holds, and 0 otherwise.
Assumptions. (1) In this paper, we assume f(θ) is continuous and differentiable. Many commonly used loss functions satisfy this assumption, e.g., the logistic function. (2) For a learning task, only a finite amount of privacy cost is allowed; the maximum cost is called the privacy budget and denoted as R. (3) Generally, we assume that the sample-wise losses f(θ; x) are G-Lipschitz continuous and that the empirical loss f(θ) is M-smooth.
Definition 3.1 (G-Lipschitz continuity). A function f(·) is G-Lipschitz continuous if, for G > 0 and all x, y in the domain of f(·), ‖f(y) − f(x)‖ ≤ G‖y − x‖₂.
Definition 3.2 (m-strong convexity). A function f(·) is m-strongly convex if f(y) ≥ f(x) + ∇f(x)ᵀ(y − x) + (m/2)‖y − x‖², for some m > 0 and all x, y in the domain of f(·).
Definition 3.3 (M-smoothness). A function is M-smooth w.r.t. the l₂ norm if f(y) ≤ f(x) + ∇f(x)ᵀ(y − x) + (M/2)‖y − x‖², for some constant M > 0 and all x, y in the domain of f(·).
For a private algorithm M(d) which maps a dataset d to some output, the privacy cost is measured by the bound of the output difference on adjacent datasets, i.e., datasets that differ in only one sample. In this paper, we use zero-Concentrated Differential Privacy (zCDP, see Definition 3.4) as the privacy measurement, because it provides simplicity and the possibility of adaptively composing privacy costs at each iteration. Various privacy metrics are discussed or reviewed in (Desfontaines & Pejó, 2019). A notable example is the Moments Accountant (MA) (Abadi et al., 2016), which adopts a similar principle for composing privacy costs but is less tight for a smaller privacy budget. We note that alternative metrics can be adapted to our study without major impact on the analysis.
Definition 3.4 (ρ-zCDP (Bun & Steinke, 2016)). Let ρ > 0. A randomized algorithm M : Dⁿ → R satisfies ρ-zCDP if, for all adjacent datasets d, d′ ∈ Dⁿ, D_α(M(d)‖M(d′)) ≤ ρα, ∀α ∈ (1, ∞), where D_α(·‖·) denotes the Rényi divergence (Rényi, 1961) of order α.
zCDP provides a linear composition of the privacy costs of sub-routine algorithms. When the input vector is privatized by injecting Gaussian noise N(0, σₜ² I) at the t-th iteration, the composed privacy cost is proportional to ∑ₜ ρₜ, where the step cost is ρₜ = 1/σₜ². For simplicity, we absorb the constant coefficient into the (residual) privacy budget R. The formal theorems for the privacy cost of composition and Gaussian noising are included in Lemmas B.1 and B.2.
Generally, we define the Private Gradient Descent (PGD) method by the iterations, for t = 1 ... T:
θ_{t+1} = θₜ − ηₜφₜ = θₜ − ηₜ(∇ₜ + σₜGνₜ/N),   (1)
where φₜ = gₜ is the gradient privatized from ∇ₜ as shown in Algorithm 1, G/N is the bound on the sensitivity of the gathered gradient excluding one sample gradient, and νₜ ∼ N(0, I) is a vector whose elements follow a standard Gaussian distribution. We use σₜ to denote the noise scale at step t and use σ to collectively represent the schedule (σ₁, ..., σ_T) when not confusing. When the Lipschitz constant is unknown, we can control the upper bound by scaling the gradient whenever its norm exceeds some constant. The scaling operation is often called clipping in the literature since it clips the gradient norm at a threshold. After the gradient is noised, we apply a modification φ(·) to enhance its utility. In this paper, we consider two types of φ(·):
φ(mₜ, gₜ) = gₜ  (GD),   φ(mₜ, gₜ) = [β(1 − β^{t−1})mₜ + (1 − β)gₜ]/(1 − β^t)  (Momentum).
We now show that the PGD using Algorithm 1 guarantees a privacy cost less than R:
Algorithm 1 Privatizing Gradients
Input: Raw gradients [∇ₜ^{(1)}, ..., ∇ₜ^{(n)}] (n = N by default), vₜ, residual privacy budget Rₜ, assuming the full budget is R and R₁ = R.
1: ρₜ ← 1/σₜ², ∇ₜ ← (1/n) ∑_{i=1}^n ∇ₜ^{(i)}   ▷ Budget request
2: if ρₜ < Rₜ then
3:   R_{t+1} ← Rₜ − ρₜ
4:   gₜ ← ∇ₜ + Gσₜνₜ/N, νₜ ∼ N(0, I)   ▷ Privacy noise
5:   m_{t+1} ← φ(mₜ, gₜ), or g₁ if t = 1
6:   return ηₜ m_{t+1}, R_{t+1}   ▷ Utility projection
7: else
8:   Terminate
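For concreteness, the following Python sketch implements one iteration of Algorithm 1 under the assumptions that per-sample gradients are already computed and the Lipschitz bound G is known; the function name and NumPy-based interface are our own illustration, not part of the algorithm.

import numpy as np

def private_gradient_step(theta, per_sample_grads, sigma_t, eta_t, m_prev, t,
                          R_residual, G, beta=0.0):
    """One step of the privatized-gradient loop (a sketch of Algorithm 1).

    per_sample_grads: array of shape (n, D); G bounds each per-sample gradient norm.
    beta = 0 recovers plain GD; beta > 0 gives the bias-corrected momentum variant.
    """
    n, D = per_sample_grads.shape
    rho_t = 1.0 / sigma_t ** 2                            # step privacy cost (constants absorbed)
    if rho_t >= R_residual:                               # budget request fails -> terminate
        return None
    grad = per_sample_grads.mean(axis=0)
    g_t = grad + (G * sigma_t / n) * np.random.randn(D)   # Gaussian privacy noise
    if beta == 0.0 or t == 1:
        m_t = g_t
    else:                                                 # phi(m_t, g_t) for the momentum variant
        m_t = (beta * (1 - beta ** (t - 1)) * m_prev + (1 - beta) * g_t) / (1 - beta ** t)
    theta_new = theta - eta_t * m_t
    return theta_new, m_t, R_residual - rho_t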
Theorem 3.1. Suppose f(θ; x) is G-Lipschitz continuous and the PGD algorithm with privatized gradients defined by Algorithm 1 stops at step T. The PGD algorithm outputs θ_T and satisfies ρ-zCDP where ρ ≤ (1/2)R.
Note that Theorem 3.1 allows σt to be different throughout iterations. Next we present a principled approach for deriving dynamic schedules optimized for the final loss f(θT ).
4 DYNAMIC POLICIES BY MINIMIZING UTILITY UPPER BOUNDS
To characterize the utility of PGD, we adopt the Expected Excess Risk (EER), a notion widely used for analyzing the convergence of random algorithms, e.g., (Bassily et al., 2014; Wang et al., 2017). Due to the presence of noise and the limited number of learning iterations, optimization using private gradients is expected to reach a point with a higher loss (i.e., excess risk) than the optimal solution without privacy protection. Define θ* = arg min_θ f(θ). After Algorithm 1 is iterated T times in total, the EER gives the expected utility degradation:
EER = E_ν[f(θ_{T+1})] − f(θ*).
Due to the variety of loss function and complexity of recursive iterations, an exact EER with noise is intractable for most functions. Instead, we study the worst case scenario, i.e., the upper bound of the EER, and our goal is to minimize the upper bound. For consistency, we call the upper bound of EER divided by the initial error as ERUB. Since the analytical form of EER is either intractable or complicated due to the recursive iterations of noise, studying the ERUB is a convenient and tractable alternative. The upper bound often has convenient functional forms which are (1) sufficiently simple, such that we can directly minimize it, and (2) closely related to the landscape of the objective depending on both the training dataset and the loss function. As a consequence, it is also used in previous PGD literature (Pichapati et al., 2019; Wang et al., 2017) for choosing proper parameters. Moreover, we let ERUBmin be the achievable optimal upper bound by a specific choice of parameters, e.g., the σ and T .
In this paper, we consider the class of loss functions satisfying the Polyak-Lojasiewicz (PL) condition, which bounds losses by the corresponding gradient norms. It is more general than m-strong convexity: if f is differentiable and M-smooth, then m-strong convexity implies the PL condition.
Definition 4.1 (Polyak-Lojasiewicz condition (Polyak, 1963)). For f(θ), there exists µ > 0 such that for every θ, ‖∇f(θ)‖² ≥ 2µ(f(θ) − f(θ*)).
The PL condition helps us to reveal how the influence of step noise propagates to the final excess error, i.e., EER. Though the assumption was also used previously in Wang et al. (2017); Zhou et al. (2020), neither did they discuss the propagated influence of noise. In the following sections, we will show how the influence can tighten the upper bound in gradient descent and its momentum variant.
4.1 GRADIENT DESCENT METHODS
For the brevity of our discussion, we first define the following constants:
1/α ≜ (2RMN²/(DG²)) (f(θ₁) − f(θ*)),   κ ≜ M/µ,   and   γ ≜ 1 − 1/κ,   (2)
which satisfy κ ≥ 1 and γ ∈ [0, 1). Note that κ is the condition number of f(·) if f(·) is strongly convex. κ tends to be large if the function is sensitive to small differences in inputs, and 1/α tends to be large if more samples are provided and the privacy budget is less strict. The convergence of PGD under the PL condition has been studied for private (Wang et al., 2017) and non-private (Karimi et al., 2016; Nesterov & Polyak, 2006; Reddi et al., 2016) ERM. Below we extend the bound in (Wang et al., 2017) by considering the dynamic influence of noise and relaxing σₜ to be dynamic:
Theorem 4.1. Let α, κ and γ be defined in Eq. (2), and ηₜ = 1/M. Suppose f(θ; xᵢ) is G-Lipschitz and f(θ) is M-smooth satisfying the Polyak-Lojasiewicz condition. For PGD, the following holds:
ERUB = γ^T + R ∑_{t=1}^T qₜσₜ²,   where qₜ ≜ γ^{T−t}α.   (3)
In Eq. (3), the step noise magnitude σₜ² has an exponential influence, qₜ, on the EER. The dynamic nature of this influence is the key to proving a tighter bound. Moreover, in the presence of the dynamic influence, it is natural to choose a dynamic σₜ². When relaxing qₜ to a static 1, a static σₜ² was studied by Wang et al., who proved a bound that is nearly optimal except for a ln²N factor. To get the optimal bound, in the following sections we look for the σ and T that minimize the upper bound.
4.1.1 UNIFORM SCHEDULE
The uniform setting of σₜ has been previously studied in Wang et al. (2017). Here, we show that the bound can be further tightened by considering the dynamic influence of iterations and a proper T.
Theorem 4.2. Suppose the conditions in Theorem 4.1 are satisfied. When σₜ² = T/R, let α, γ and κ be defined in Eq. (2) and let T be:
T = ⌈O(κ ln(1 + 1/(κα)))⌉.   (4)
Meanwhile, if κ ≥ 1/(1−c) > 1 and 1/α > 1/α₀ for some constants c ∈ (0, 1) and α₀ > 0, the corresponding bound is:
ERUB_uniform^min = Θ((κ²/(κ + 1/α)) ln(1 + 1/(κα))).   (5)
Sketch of proof. The key of the proof is to find a proper T to minimize
ERUB = γ^T + ∑_{t=1}^T γ^{T−t}αRσ² = γ^T + αT(1 − γ^T)/(1 − γ) = γ^T + ακ(1 − γ^T)T,
where we use σₜ = √(T/R). Setting its gradient to zero amounts to solving γ^T ln γ + ακ(1 − γ^T) − ακTγ^T ln γ = 0, which is intractable. In (Wang et al., 2017), T is chosen to be O(ln(1/α)) and ERUB is relaxed to γ^T + ακT². The approximation results in a less tight bound, O(α(1 + κ ln²(1/α))), which explodes as κ → ∞. We observe that for a very sharp loss function, i.e., a large κ, any minor perturbation may result in tremendously fluctuating loss values. In this case, not stepping forward is a good choice. Thus, we choose T = (1/ln(1/γ)) ln(1 + ln(1/γ)/α) ≤ O(κ ln(1 + 1/(κα))), which converges to 0 as κ → +∞. The full proof is deferred to the appendix.
4.1.2 DYNAMIC SCHEDULE
A dynamic schedule can improve the upper bound delivered by the uniform schedule. First, we observe that the excess risk in Eq. (3) is upper bounded by two terms: the first term characterizes the error due to the finite number of gradient descent iterations; the second term, a weighted sum, comes from the error propagated from the noise at each iteration. Now we show that for any {qₜ | qₜ > 0, t = 1, ..., T} (not limited to the qₜ defined in Eq. (3)), there is a unique σₜ minimizing the weighted sum:
Lemma 4.1 (Dynamic schedule). Suppose the σₜ satisfy ∑_{t=1}^T σₜ^{−2} = R. Given a positive sequence {qₜ}, the following equation holds:
min_σ R ∑_{t=1}^T qₜσₜ² = (∑_{t=1}^T √qₜ)²,   when σₜ² = (1/R) ∑_{i=1}^T √qᵢ / √qₜ.   (6)
Remarkably, the difference between the minimum and T ∑_{t=1}^T qₜ (uniform σₜ) monotonically increases with the variance of √qₜ w.r.t. t.
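As a usage sketch of Eq. (6) (our own illustration, not part of the paper's experiments), the schedule can be computed directly from any positive influence sequence; note that for qₜ = γ^{T−t}α the factor α cancels and the resulting noise magnitude decays over iterations.

import numpy as np

def dynamic_sigma(q, R):
    """Noise schedule from Eq. (6): sigma_t^2 = (1/R) * sum_i sqrt(q_i) / sqrt(q_t).

    q: positive step influences q_t; R: total (rescaled) privacy budget.
    Returns per-step sigma_t; sum_t 1/sigma_t^2 == R by construction.
    """
    q = np.asarray(q, dtype=float)
    sigma2 = np.sqrt(q).sum() / (R * np.sqrt(q))
    return np.sqrt(sigma2)

# Example with the influence from Eq. (3): q_t = gamma^(T-t). The constant alpha only
# rescales q and cancels in Eq. (6), so it is omitted here; T, R, gamma are illustrative.
T, R, gamma = 20, 8.0, 0.95
sigma = dynamic_sigma(gamma ** (T - np.arange(1, T + 1)), R)
assert np.isclose(np.sum(1.0 / sigma ** 2), R)
print(sigma[0], sigma[-1])            # noise magnitude decreases with t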
We see that the dynamics in σₜ come from the non-uniform nature of the weights qₜ. Since qₜ represents the impact of σₜ on the final error, we call it the influence. Given the dynamic schedule in Eq. (6), it is of interest to which extent the ERUB can be improved. First, we present Theorem 4.3 to show the optimal T and ERUB.
Theorem 4.3. Suppose the conditions in Theorem 4.1 are satisfied. Let α, κ and γ be defined in Eq. (2). When ηₜ = 1/M, the σₜ (based on Eqs. (3) and (6)) and the T minimizing ERUB are
σₜ² = (1/R) · ((1/γ)^{T/2} − 1)/(1 − √γ) · √(γ^t),   T = ⌈2κ ln(1 + 1/(κα))⌉.   (7)
Meanwhile, when κ ≥ 1 and 1/α ≥ 1/α₀ for some positive constant α₀, the minimal bound is:
ERUB_dynamic^min = Θ(κ²/(κ² + 1/α)).   (8)
4.1.3 DISCUSSION
In Theorems 4.2 and 4.3, we present the tightest bounds for functions satisfying the PL condition, to our best knowledge. We further analyze the advantages of our bounds from two aspects: sample efficiency and robustness to sharp losses.
Sample efficiency. Since a dataset cannot be infinitely large, it is critical to know how accurate a model can be trained privately with a limited number of samples. Formally, it is of interest to study the case where κ is fixed and N is large enough such that α ≪ 1. Then the upper bound in Eq. (5) becomes
ERUB_uniform^min ≤ O(κ²α ln(1/(κα))) ≤ Õ(DG² ln(N)/(MN²R)),   (9)
where we ignore κ and other logarithmic constants in Õ, as done in Wang et al. (2017). As a result, we get a bound very similar to (Wang et al., 2017), except that R is replaced by R_MA = ε²/ln(1/δ) using the Moments Accountant. In comparison, based on Lemma B.3, R = 2ρ = 2ε + 4 ln(1/δ) + 4√(ln(1/δ)(ε + ln(1/δ))) if θ_T satisfies ρ-zCDP. Because ln(1/δ) > 1, it is easy to see R = R_zCDP > R_MA when ε ≤ 2 ln(1/δ). Compared to the bound reported in (Wang et al., 2017), our bound saves a factor of lnN and thus requires fewer samples to achieve the same accuracy. Remarkably, the saving is due to maintaining the influence terms, as shown in the proof of Theorem 4.2.
Using the dynamic schedule, we have ERUB_dynamic^min ≤ O(α) = O(DG²/(MN²R)), which saves another lnN factor in comparison to the uniform schedule in Eq. (9). As shown in Table 1, this advantage is maintained when comparing with other baselines.
Robustness. Besides sample efficiency, we are also interested in the robustness of convergence in the presence of privacy noise. Because of the privacy noise, private gradient descent cannot converge to an ideal spot. Specifically, when the samples are noisy or have noisy labels, the loss curvature may be sharp. The sharpness corresponds to a large condition number κ, e.g., a very small PL parameter µ. Thus, gradients may change tremendously at some steps, especially in the presence of privacy noise. As illustrated in the left figure, a highly-curved loss function (the green curve) results in a higher final loss (the red dashed line) than the flatter curves (purple and blue lines). Such changes have a more critical impact when only a small number of iterations can be executed due to the privacy constraint. Assuming α is some constant while κ ≫ 1/α, we immediately get:
ERUB_uniform^min = Θ(κ ln(1 + 1/(κα))) = Θ(1/α) ≤ O(MN²R/(DG²)),   ERUB_dynamic^min = Θ(1).
Both are robust, but the dynamic schedule has a smaller factor since 1/α could be a large number. In addition, the factor implies that when more samples are used, the dynamic schedule is more robust.
4.2 GRADIENT DESCENT METHODS WITH MOMENTUM
Section 4.1 shows that the step noise has an exponentially increasing influence on the final loss, and therefore a decreasing noise magnitude improves the utility upper bound by a lnN factor. However, the proper schedule can be hard to find when curvature information, e.g., κ, is absent. A parameterized method that depends less on the curvature information is preferred. On the other hand, long-term iterations result in forgetting the initial iterations, since accumulated noise overwhelms the information propagated from the beginning. This effect reduces the efficiency of recursive learning frameworks.
Alternative to GD, the momentum method can mitigate the two issues. It was originally proposed to stabilize the gradient estimation (Polyak, 1964). In this section, we show that momentum (agnostic about the curvature) can flatten the dynamic influence and improve the utility upper bound. Previously, Pichapati et al. used the momentum as an estimation of gradient mean, without discussions of
convergence improvements. Zhou et al. gave a bound for the Adam with DP. However, the derivation is based on gradient norm, which results in a looser bound (see Table 1).
The momentum method stabilizes gradients by taking a moving average of the historical coordinate values and thus greatly reduces the variance. The φ(mₜ, gₜ) can be rewritten as:
m_{t+1} = φ(mₜ, gₜ) = v_{t+1}/(1 − β^t),   v_{t+1} = βvₜ + (1 − β)gₜ = (1 − β) ∑_{i=1}^t β^{t−i}gᵢ,   v₁ = 0,   (10)
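The bias correction in Eq. (10) can be seen in a small toy simulation (our own sketch; the constant true gradient, noise scale and β are illustrative assumptions): v_{t+1} underestimates a constant gradient during the first steps, while m_{t+1} does not.

import numpy as np

# Sketch of Eq. (10) on a toy stream: constant true gradient plus Gaussian privacy noise.
rng = np.random.default_rng(0)
true_grad, beta, T = 1.0, 0.9, 50
v, biased, corrected = 0.0, [], []
for t in range(1, T + 1):
    g = true_grad + 0.5 * rng.standard_normal()   # privatized gradient g_t
    v = beta * v + (1 - beta) * g                 # v_{t+1}: biased toward 0 early on
    m = v / (1 - beta ** t)                       # m_{t+1}: bias-corrected estimate
    biased.append(v)
    corrected.append(m)
print(biased[0], corrected[0])   # v_2 = (1-beta) g_1 underestimates; m_2 = g_1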
where β ∈ [0, 1]. Note that v_{t+1} is a biased estimate of the gradient expectation while m_{t+1} is unbiased.
Theorem 4.4 (Convergence under PL condition). Suppose f(θ; xᵢ) is G-Lipschitz, and f(θ) is M-smooth and satisfies the Polyak-Lojasiewicz condition. Assume β ≠ γ and β ∈ (0, 1). Let ηₜ = η₀/(2M) and η₀ ≤ 8(√(1 + 64βγ(γ − β)^{−2}(1 − β)^{−3}) + 1)^{−1}. Then the following holds:
EER ≤ (γ^T + 2Rη₀α U₃(σ, T)) (f(θ₁) − f(θ*)) − ζ (η₀/(2M)) ∑_{t=1}^T γ^{T−t} E‖v_{t+1}‖²,   (11)
where the U₃ term is the noise variance and the last term is the momentum effect, and
γ = 1 − η₀/κ,   ζ = 1 − (1/(β(1 − β)³)) η₀² − (1/4) η₀ ≥ 0,   (12)
U₃ = ∑_{t=1}^T γ^{T−t} ((1 − β)²/(1 − β^t)²) ∑_{i=1}^t β^{2(t−i)}σᵢ².   (13)
The upper bound includes three parts that influence the bound differently. (1) Convergence. The convergence term is mainly determined by η₀ and κ. η₀ should be in (0, κ) so that the upper bound converges. A large η₀ is preferred to speed up convergence if it does not worsen the other two terms. (2) Noise Variance. The second term, compressed into U₃, is the effect of the averaged noise ∑_{i=1}^t β^{2(t−i)}σᵢ². One difference introduced by the momentum is the factor (1 − β)/(1 − β^t), which is less than γ^t at the beginning and converges to the non-zero constant 1 − β. Therefore, in U₃, γ^{T−t}(1 − β)/(1 − β^t) stays below γ^T in the meantime. Furthermore, when t > T̂, the moving average ∑_{i=1}^t β^{2(t−i)}σᵢ² smooths the influence of each σₜ. In Appendix D, we will see that the influence dynamics is less steep than that of GD. (3) Momentum Effect. The momentum-effect term can improve the upper bound when η₀ is small. For example, when β = 0.9 and γ = 0.99, then η₀ ≤ 0.98/M, which is a reasonable value. Following this analysis, when M is large, which means the gradient norms fluctuate significantly, the momentum term may take the lead. Adjusting the noise scale in this case may be less useful for improving utility.
To give an insight on the effect of dynamic schedule, we provide the following utility bounds.
Theorem 4.5 (Uniform schedule). Suppose the assumptions in Theorem 4.4 are satisfied. Let σₜ² = T/R, and let
T̂ = max t s.t. γ^{t−1} ≥ (1 − β)/(1 − β^t),   T = ⌈O((κ/η₀) ln(1 + η₀/(κα)))⌉.
Given some positive constants c and α₀ > 0 with 1/α > 1/α₀, the following inequality holds:
ERUB_min ≤ O((κ²/(κ + η₀/α)) [I_{T≤T̂} + γ^{T̂−1} ln(1 + η₀/(κα)) I_{T>T̂}]).
Theorem 4.6 (Dynamic schedule). Suppose the assumptions in Theorem 4.4 are satisfied. Let α′ = 2η₀α/(γ(1 − γβ²)), β < γ and T̂ = max t s.t. γ^{t−1} ≥ (1 − β)/(1 − β^t). Use the following schedule:
σₜ² = (1/R) ∑_{i=1}^T √qᵢ / √qₜ,   T_dyn = ⌈O((2κ/η₀) ln(1 + η₀/(κα)))⌉,
where qₜ = c₁γ^{T+t} I_{T≤T̂} + γ^{T̂−1}c₂γ^{T−t} I_{T>T̂} for some positive constants c₁ and c₂. The following inequality holds:
ERUB ≤ γ^T + 2η₀α ∑_{t=1}^T Rqₜσₜ²,   ERUB_min ≤ O((κα/(κα + η₀)) ((κα/(κα + η₀)) I_{T≤T̂} + I_{T>T̂})).
Discussion. Theoretically, the dynamic schedule is more influential in vanilla gradient descent methods than in the momentum variant. This result is mainly attributed to the averaging operation. The moving average, (1 − β) ∑_{i=1}^t β^{t−i}gᵢ/(1 − β^t), increases the influence of the under-represented initial steps and decreases that of the over-sensitive last steps. Counterintuitively, the preferred dynamic schedule should be increasing, since qₜ decreases when t ≤ T̂.
4.3 PRIVATE STOCHASTIC GRADIENT DESCENT (NEW SECTION ON REBUTTAL)
Though PGD provides guarantees for both utility and privacy, computing gradients of the whole dataset is impractical for large-scale problems. For this reason, studying the convergence of Private Stochastic Gradient Descent (PSGD) is meaningful. Algorithm 1 can be easily extended to PSGD by subsampling n gradients where the batch size n ≪ N. According to (Yu et al., 2019), when privacy is measured by zCDP, there are two ways to account for the privacy cost of PSGD depending on the batch-sampling method: sub-sampling with or without replacement. In this paper, we focus on random subsampling with replacement since it is widely used in deep learning, e.g., (Abadi et al., 2016; Feldman et al., 2020). Accordingly, we replace N in the definition of α by n because the term comes from the sensitivity of the batch data (see Eq. (1)). For clarity, we assume that T is the number of iterations rather than epochs and that ∇̃ₜ is the mean stochastic gradient. When a batch of data is randomly sampled, the privacy cost of one iteration is cp²/σₜ², where c is some constant, p = n/N is the sampling rate, and 1/σₜ² is the full-batch privacy cost. Details of the sub-sampling theorems are referred to Theorem 3 of (Yu et al., 2019) and their empirical setting. Therefore, we can replace the privacy constraint ∑ₜ p²/σₜ² = R by ∑ₜ 1/σₜ² = R′, where R′ = R/p² = (N²/n²)R. Remarkably, we omit the constant c because it will not affect the results regarding uniform or dynamic schedules. Notice that N²R in α is replaced by n²R′ = N²R. Thus, the form of α is not changed, which provides convenience for the following derivations.
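A small sketch of this accounting (our own illustration with made-up values; the constant c from the subsampling theorem is omitted, as in the text):

import numpy as np

def psgd_budget(R, N, n):
    """Rescaled per-batch budget for subsampled PGD.

    With sampling rate p = n/N, each step costs ~ p^2 / sigma_t^2, so the constraint
    sum_t p^2/sigma_t^2 = R becomes sum_t 1/sigma_t^2 = R' with R' = R / p^2.
    """
    p = n / N
    return R / p ** 2

N, R = 50_000, 0.01
n = max(int(N * np.sqrt(R)), 1)        # batch size suggested by Theorems 4.7-4.8
R_prime = psgd_budget(R, N, n)
print(n, R_prime, n ** 2 * R_prime)    # n^2 * R' == N^2 * R, so alpha is unchanged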
Now we study the utility bound of PSGD. To quantify the randomness of batch sampling, we define a random vector ξₜ with E[ξₜ] = 0 and E‖ξₜ‖² ≤ D such that ∇̃ₜ ≤ ∇ₜ + σ_g ξₜ/n for some positive constant σ_g. Because ξₜ has similar properties to the privacy noise νₜ, we can easily extend the PGD bounds to PSGD bounds by the following theorems.
Theorem 4.7 (Utility bounds of PSGD). Let α, κ and γ be defined in Eq. (2), and ηₜ = 1/M. Suppose f(θ; xᵢ) is G-Lipschitz and f(θ) is M-smooth satisfying the Polyak-Lojasiewicz condition. For PSGD, when the batch size satisfies n = max{N√R, 1}, the following holds:
ERUB = γ^T + α_g σ_g² + R′ ∑_{t=1}^T qₜσₜ²,   where qₜ ≜ γ^{T−t}α,   ∑ₜ 1/σₜ² = R′,   (14)
where α_g = D/(2µN²R(f(θ₁) − f(θ*))).
Theorem 4.8 (PSGD with momentum). Let α_g = D/(2µN²R(f(θ₁) − f(θ*))). Suppose the assumptions in Theorem 4.4 hold. When the batch size satisfies n = max{N√R, 1}, the U₃(σ, T) has to be replaced by Ũ₃ = U₃^g + U₃, with
αR′ U₃^g ≤ α_g σ_g²   (15)
when PSGD is used.
As shown above, the utility bound of PSGD differs from that of PGD merely by α_g σ_g². Note that α_g = O(D/(N²R)), which fits the order of the dynamic-schedule bounds. In addition, α and other variables are not changed. Hence, the conclusions w.r.t. the dynamic/uniform schedules remain the same.
5 EXPERIMENTS
We empirically validate the properties of privacy schedules and their connections to learning algorithms. In this section, we briefly review the schedule behavior on quadratic losses under varying data sensitivity. Details of experimental setups and empirical results are available in Appendix D.
We first show the estimated influence of the step noise, qₜ (obtained by retraining the private learning algorithms, cf. Appendix D), in Fig. 2 (left). We see that the trends of the influence are approximately exponential in t. This observation motivates the use of exponential decay schedules in practice.
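A minimal sketch of such an exponential-decay schedule, rescaled so that the whole budget R is spent (the decay rate is a hyper-parameter tuned by grid search in our experiments; the function itself is our own illustration):

import numpy as np

def exp_decay_schedule(T, R, decay):
    """Exponential-decay schedule sigma_t^2 proportional to decay^t, rescaled to budget R.

    decay in (0, 1); the rescaling keeps sum_t 1/sigma_t^2 = R regardless of decay.
    """
    sigma2 = decay ** np.arange(1, T + 1)
    scale = np.sum(1.0 / sigma2) / R       # so that sum_t 1/(scale*sigma2_t) = R
    return np.sqrt(scale * sigma2)

sigma = exp_decay_schedule(T=100, R=10.0, decay=0.98)
assert np.isclose(np.sum(1.0 / sigma ** 2), 10.0)
print(sigma[0], sigma[-1])                  # noise magnitude decreases over iterations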
We then show the trends of the variance of the influence (dashed lines, right axis) and the relative final losses (solid lines, left axis) in the middle pane, where uni denotes the uniform schedule baseline, exp is an exponential schedule, and dyn denotes the dynamic schedule minimizing the ERUB. The influences increase steeply when the data scale is large and therefore have a large variance. Meanwhile, dynamic schedules show improvements in the final loss when the variance is large. This reveals the connection between the influence and the dynamic advantage (refer to Lemma 4.1).
We lastly evaluate the impact of momentum in the right pane, using a Deep Neural Network (DNN) with 2 layers and 100 hidden units. Because of the time cost of training deep networks, we do not estimate the influence by retraining and then compute schedules. Instead, we grid-search the schedule hyper-parameters to find the best one. We see that the influence modeled by an exponential function (expinfl) has performance comparable to the influence modeled by a linear combination of two reversed exponential functions (momexpinfl). The latter only shows an advantage in the setting where the data scale is 25 and the number of iterations is only 100, which is expected by our analysis in Theorems 4.5 and 4.6. The inherent reason is that the dynamic schedule is more effective when T is larger.
6 CONCLUSION
When a privacy budget is provided for a certain learning task, one has to carefully schedule the privacy usage throughout the learning process. Uniformly scheduling the budget has been widely used in the literature, whereas increasing evidence suggests that dynamic schedules can empirically outperform the uniform one. This paper provided a principled analysis of the problem of optimal budget allocation and connected the advantages of dynamic schedules to both the loss structure and the learning behavior. We further validated our results through empirical studies.
A COMPARISON OF ALGORITHMS
We present Table 2 as a supplement to Table 1. Asymptotic upper bounds are achieved when the sample size N approaches infinity. Both R and R_{ε,δ}, with R_{ε,δ} < R, are the privacy budgets of the corresponding algorithms. Specifically, R_{ε,δ} = ε²/ln(1/δ) < R when the private algorithm is (ε, δ)-DP with ε ≤ 2 ln(1/δ).
PGD+Adv. Adv denotes the Advanced Composition method (Bassily et al., 2014). The method assumes that the loss function is 1-strongly convex, which implies the PL condition, and that the optimized variable is in a convex set of diameter 1 w.r.t. the l₂ norm.
PGD+MA. MA denotes the Moments Accountant (Abadi et al., 2016), which improves the composed privacy bound compared to the Advanced Composition. The improvement in the privacy bound leads to an enhanced utility bound, as a result.
PGD+Adv+BBImp. This dynamic method assumes that the loss is 1-strongly convex and that data come in a stream with n ≤ N samples at each round. Their utility upper bound is achieved with some probability p for any positive c.
Adam+MA. The authors prove a convergence bound for the gradient norms, which is extended to a loss bound by using the PL condition. They also present results for AdaGrad and GD, which have essentially the same upper bound. Our theorems improve their bound by using the recursive derivation based on the PL condition, while their bound is a simple application of the condition to the gradient-norm bound.
GD, Non-Private. This method does not inject noise into gradients but limits the number of iterations. With this bound, we can see that our utility bounds are optimal with the dynamic schedule.
GD+zCDP. We discussed the static and dynamic schedules for the gradient descent method, where the dynamic noise influence is the key to tightening the bound.
Momentum+zCDP. Different from GD+zCDP, momentum methods have two phases in their utility upper bound. When T is smaller than some positive constant T̂, the bound is as tight as the non-private one. Afterwards, the momentum bound degrades to the GD bound.
A.1 COMPARISON OF GENERALIZATION BOUNDS
In addition to the empirical risk bounds in Table 2, in this section we study the true risk bounds, or generalization error bounds. True risk bounds characterize how well the learnt model can generalize to unseen samples subject to the inherent data distribution. By leveraging the generic learning-theory tools, we extend our results to the True Excess Risk (TER) for strongly convex functions as follows. For a model θ, its TER is defined as follows:
TER ≜ E_{x∼X}[E[f(θ; x)]] − min_{θ̂} E_{x∼X}[f(θ̂; x)], where the inner expectation is over the randomness of generating θ (e.g., the noise and stochastic batches). Assume a dataset d consists of N samples drawn i.i.d. from the distribution X. Two approaches can be used to extend the empirical bounds to the true excess risk. One is proposed by Shalev-Shwartz et al. (2009), where the true excess risk of PGD can be bounded with high probability; for example, Bassily et al. (2014) achieved a (ln²N)/N bound with N² iterations. Alternatively, instead of relying on the probabilistic bound, Bassily et al. (2019) used uniform stability to give a tighter bound. Later, Feldman et al. (2020) improved the efficiency of gradient computation to achieve a similar bound. Both approaches introduce an additive term to the empirical bounds. In this section, we adopt both approaches to investigate the two types of resulting true risk bounds.
(1) True Risk in High Probability. First, we consider the high-probability true risk bound. Based on Section 5.4 of (Shalev-Shwartz et al., 2009) (restated in Theorem A.1), we can relate the EER to the TER.
Theorem A.1. Let f(θ; x) be G-Lipschitz and f(θ) be a µ-strongly convex loss function for any x ∈ X. With probability at least 1 − p over the randomness of sampling the dataset d, the following inequality holds:
TER(θ) ≤ √(2G²/(µN)) √(f(θ) − f(θ*)) + 4G²/(pµN),   (16)
where θ* = arg min_θ f(θ).
To apply Eq. (16), we need to extend the EER, an expectation bound, to a high-probability bound. Following (Bassily et al., 2014) (Section D), we repeat the PGD with privacy budget R/k for k times. Note that the output of all repetitions still uses an R budget in total. When k = 1, let the EER of the algorithm be denoted as F(R). Then the EER of one execution of the k repetitions is F(R/k), where privacy is accounted by zCDP. When k = log₂(1/p) for p ∈ [0, 1], by Markov's inequality, there exists one repetition whose EER is F(R/log₂(1/p)) with probability at least 1 − 1/2^k = 1 − p. Combined with Eq. (16), we use the bounds of the uniform and dynamic schedules in Section 4.1.3 to obtain:
TER_uniform ≤ Õ((G²/(µN)) (√(D ln(N) ln(1/p)/(NR)) + 4/p)),   (17)
TER_dynamic ≤ Õ((G²/(µN)) (√(D ln(1/p)/(NR)) + 4/p)),   (18)
where we again ignore κ and other constants. Similarly, we can extend the momentum methods.
(2) True Risk by Uniform Stability. Following Bassily et al. (2019), we use uniform stability (defined in Definition A.1) to extend the empirical bounds. We restate the related definition and theorems as follows.
Definition A.1 (Uniform stability). Let s > 0. A randomized algorithm M : D^N → Θ is s-uniformly stable w.r.t. the loss function f if, for any neighboring datasets d and d′, we have sup_{x∈X} E[f(M(d); x) − f(M(d′); x)] ≤ s, where the expectation is over the internal randomness of M.
Theorem A.2 (See, e.g., (Shalev-Shwartz & Ben-David, 2014)). Suppose M : D^N → Θ is an s-uniformly stable algorithm w.r.t. the loss function f. Let D be any distribution over the data space and let d ∼ D^N. Then
E_{d∼D^N}[E[f(M(d); D) − f(M(d); d)]] ≤ s,
where the inner expectation is over the internal randomness of M, and f(M(d); D) and f(M(d); d) represent the true loss and the empirical loss, respectively.
Theorem A.3 (Uniform stability of PGD from (Bassily et al., 2019)). Suppose η < 2/M for an M-smooth, G-Lipschitz f(θ; x). Then PGD is s-uniformly stable with s = G²Tη/N.
Combining Theorems A.2 and A.3, we obtain
TER ≤ EER + G²ηT/N.
Because the EER in this paper comprises γ^T or similar exponential terms, unlike (Bassily et al., 2019) we cannot directly minimize the TER upper bound w.r.t. T and η in the presence of a polynomial form of γ^T and T. Therefore, we still use T = O(ln(N²R/D)) and the η that minimizes the EER. Note that G²ηT/N ≤ O((G²/(MN)) ln(N²R/D)) ≤ O(G²/M), where we assume N ≫ D and use lnN ≤ N. Because the term O(G²/M) is constant and independent of the dimension, we follow (Bassily et al., 2019) and drop it when comparing the bounds. After dropping the additive term, it is obvious that the advantage of dynamic schedules still holds since TER ≤ EER. A similar extension can be derived for (Wang et al., 2017). We summarize the results and compare them to prior works in Table 3, where we include an additional method: Snowball Stochastic Gradient Descent (SSGD). SSGD dynamically schedules the batch size to achieve an optimal convergence rate in linear time.
Discussion. By using uniform stability, we successfully transfer the advantage of our dynamic schedules from empirical bounds to true risk bounds. The inherent reason is that our bounds only need lnN iterations to reach the preferred final loss; with uniform stability, the logarithmic T reduces the gap caused by the transfer. Compared to (Feldman et al., 2020; Bassily et al., 2019), our method remarkably improves the efficiency in T from N or N² to ln(N). This implies that fewer iterations are required to converge to the same generalization error.
B PRELIMINARIES
B.1 PRIVACY
Lemma B.1 (Composition & Post-processing). Let two mechanisms be M : Dⁿ → Y and M′ : Dⁿ × Y → Z. Suppose M satisfies ρ₁-zCDP and M′(·, y) satisfies ρ₂-zCDP for all y ∈ Y. Then the mechanism M″ : Dⁿ → Z defined by M″(x) = M′(x, M(x)) satisfies (ρ₁ + ρ₂)-zCDP.
Definition B.1 (Sensitivity). The sensitivity of a gradient query ∇ₜ to the dataset {xᵢ}_{i=1}^N is
∆₂(∇ₜ) = max_n ‖(1/N) ∑_{j=1, j≠n}^N ∇ₜ^{(j)} − (1/N) ∑_{j=1}^N ∇ₜ^{(j)}‖₂ = (1/N) max_n ‖∇ₜ^{(n)}‖₂,   (19)
where ∇ₜ^{(n)} denotes the gradient of the n-th sample.
Lemma B.2 (Gaussian mechanism (Bun & Steinke, 2016)). Let f : Dⁿ → Z have sensitivity ∆. Define a randomized algorithm M : Dⁿ → Z by M(x) ← f(x) + N(0, ∆²σ²I). Then M satisfies (1/(2σ²))-zCDP.
Lemma B.3 ((Bun & Steinke, 2016)). If M is a mechanism satisfying ρ-zCDP, then M is (ρ + 2√(ρ ln(1/δ)), δ)-DP for any δ > 0. By solving ρ + 2√(ρ ln(1/δ)) = ε, we can get ρ = ε + 2 ln(1/δ) + 2√(ln(1/δ)(ε + ln(1/δ))).
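As an illustration of how the G/N sensitivity in Definition B.1 is enforced in practice when the Lipschitz constant is unknown, the following sketch clips per-sample gradients before applying the Gaussian mechanism of Lemma B.2 (our own example, not taken from the paper):

import numpy as np

def clip_and_noise(per_sample_grads, clip_G, sigma):
    """Gaussian mechanism on an averaged gradient with per-sample clipping (sketch).

    Clipping each per-sample gradient to norm <= clip_G bounds the sensitivity of the
    average by clip_G / N (Definition B.1); adding N(0, (clip_G*sigma/N)^2 I) then
    costs 1/(2 sigma^2) in zCDP by Lemma B.2.
    """
    N, D = per_sample_grads.shape
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_G / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    return mean_grad + (clip_G * sigma / N) * np.random.randn(D)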
B.2 AUXILIARY LEMMAS
Lemma B.4. If max_n ‖xₙ‖₂ = 1 and (1/N) ∑ₙ xₙ = 0, then the gradient sensitivity of the squared loss is
∆₂(∇) = max_i (1/N) √(2f(θ; xᵢ)) ‖xᵢ‖₂ ≤ (1/2)(D_M ‖θ‖² + 1),
where Θ_M is the set of all possible parameters θₜ generated by the learning algorithm M.
Proof. According to the definition of sensitivity in Eq. (19), we have
∆₂(∇) = max_i ‖∇^{(i)}‖₂ = max_i (1/N) ‖A^{(i)}θ − xᵢ‖₂,
where i denotes the index of a sample in the dataset (here we take the normalizing constant to be 1). We get
‖A^{(i)}θ − xᵢ‖₂² = ‖xᵢ(xᵢᵀθ − 1)‖₂² = (xᵢᵀθ − 1)² ‖xᵢ‖₂² = 2f(θ; xᵢ) ‖xᵢ‖₂²,
where f(θ; xᵢ) = (1/2)(xᵢᵀθ − 1)². Thus,
∆₂(∇) = max_i (1/N) √(2f(θ; xᵢ)) ‖xᵢ‖₂.
Since ‖xₙ‖₂ ≤ 1 and (1/N) ∑_{n=1}^N xₙ = 0,
f(θ) = (1/(2N)) ∑_{n=1}^N [(xₙᵀθ)² − 2xₙᵀθ + 1] ≤ (1/(2N)) ∑_{n=1}^N [(‖xₙ‖‖θ‖)² + 1] ≤ (1/2)(D_M ‖θ‖² + 1).
Lemma B.5. Assume the assumptions in Theorem 4.4 are satisfied. Given variables defined in Theorem 4.4, the following inequality holds true:
∑_{t=1}^T γ^{T−t} (2(1−β)ηₜ/bₜ) ∑_{i=1}^t β^{t−i}‖∇ₜ − ∇ᵢ‖² ≤ (η₀³βγ/(2M(1−β)³(γ−β)²)) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖².
Proof. We first handle the inner summation. By smoothness, the inequality ‖∇f(x) − ∇f(y)‖ ≤ M‖x − y‖ holds true. Thus,
∑_{i=1}^t β^{t−i}‖∇ₜ − ∇ᵢ‖² ≤ M² ∑_{i=1}^t β^{t−i}‖θₜ − θᵢ‖²
= M² ∑_{k=0}^{t−1} β^k ‖θₜ − θ_{t−k}‖²
= M² ∑_{k=0}^{t−1} β^k ‖∑_{i=t−k}^{t−1} ηᵢ v_{i+1}/bᵢ‖²
≤ M² ∑_{k=0}^{t−1} β^k (∑_{j=t−k}^{t−1} η_j²/b_j²)(∑_{i=t−k}^{t−1} ‖v_{i+1}‖²),
where the last inequality is by the Cauchy-Schwarz inequality. Because 1/bₜ = 1/(1−β^t) ≤ 1/(1−β) and ηₜ = η₀/(2M),
∑_{i=1}^t β^{t−i}‖∇ₜ − ∇ᵢ‖² ≤ (η₀²/(4(1−β)²)) ∑_{k=0}^{t−1} β^k k ∑_{i=t−k}^{t−1} ‖v_{i+1}‖²
= (η₀²/(4(1−β)²)) ∑_{k=0}^{t−1} β^k k ∑_{i=1}^{t−1} ‖v_{i+1}‖² I(i ≥ t−k)
= (η₀²/(4(1−β)²)) ∑_{i=1}^{t−1} ‖v_{i+1}‖² ∑_{k=0}^{t−1} β^k k I(k ≥ t−i)
= (η₀²/(4(1−β)²)) ∑_{i=1}^{t−1} ‖v_{i+1}‖² ∑_{k=t−i}^{t−1} β^k k,   (20)
where I(·) is the indicator function, which outputs 1 if the condition holds true and 0 otherwise. Denote the left-hand side of the conclusion as LHS. We plug Eq. (20) into LHS to get
LHS ≤ ∑_{t=1}^T γ^{T−t} (1/bₜ) (η₀³/(4M(1−β))) ∑_{i=1}^{t−1} ‖v_{i+1}‖² ∑_{k=t−i}^{t−1} β^k k
≤ (η₀³/(4M(1−β)²)) ∑_{t=1}^T γ^{T−t} ∑_{i=1}^{t−1} ‖v_{i+1}‖² ∑_{k=t−i}^{t−1} β^k k,
where we relax the upper bound by 1/bₜ = 1/(1−β^t) ≤ 1/(1−β). Applying Lemma B.6 directly leads to the conclusion:
LHS ≤ (η₀³βγ/(2M(1−β)³(γ−β)²)) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖².
T∑ t=1 γT−t ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i kβk ≤ 2βγ (γ − β)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2 .
Proof. We first derive the summation:
U1(t, i) , ∑t−1
k=t−i βkk = ∑t−1 k=t−i ∑k j=1 βk
= ∑t−1
k=t−i ∑t−1 j=1 βkI(j ≤ k)
= ∑t−1
j=1 ∑t−1 k=max(t−i,j) βk
= ∑t−1
j=1
βmax(t−i,j) − βt
1− β
= 1
1− β
( (t− i)βt−i + β t−i+1 − βt
1− β − β − β t 1− β ) = 1
1− β
( (t− i)βt−i + β
t−i+1 − β 1− β
)
Now, we substitute U1(t, i) into LHS and replace t− i by j, i.e., t = j + i, to get
LHS = T∑ t=1 γT−t t−1∑ i=1 ‖vi+1‖2 1 1− β ( (t− i)βt−i + β t−i+1 − β 1− β )
= T−1∑ i=1 ‖vi+1‖2 T∑ t=i+1 γT−t 1 1− β ( (t− i)βt−i + β t−i+1 − β 1− β )
= T−1∑ i=1 ‖vi+1‖2 T−i∑ j=1 γT−(j+i) 1 1− β ( jβj + βj+1 − β 1− β )
= T−1∑ i=1 γT−i ‖vi+1‖2 T−i∑ j=1 γ−j 1 1− β ( jβj + βj+1 − β 1− β )
≤ 1 1− β T−1∑ i=1 γT−i ‖vi+1‖2 T−i∑ j=1
( j ( β
γ
)j + β
1− β
( β
γ
)j)
Let a = β/γ, we show
T−i∑ j=1 jaj = T−i∑ j=1 j∑ o=1 aj
= T−i∑ o=1 T−i∑ j=o aj
= T−i∑ o=1 ( ao − aT−i+1 1− a ) = a− aT−i+1
(1− a)2 − (T − i)a
T−i+1
1− a
≤ a (1− a)2 .
Thus,
LHS ≤ 1 1− β T−1∑ i=1
γT−i ‖vi+1‖2 a
(1− a)2 +
β
1− β T−i∑ j=1 aj ≤ 1
1− β T−1∑ i=1 γT−i ‖vi+1‖2 (
a
(1− a)2 +
β 1− β a 1− a
)
≤ a (1− a)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2
Because γ < 1, β < a = β/γ and
a
(1− a)2 +
β 1− β a 1− a ≤ 2a (1− a)2 .
Therefore,
LHS ≤ 2a (1− a)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2 = 2βγ (γ − β)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2
Lemma B.7. Suppose γ ∈ (0, 1) and β ∈ (0, 1). Define
T̂ = max t s.t. γ^{t−1} ≥ (1−β)/(1−β^t).
If t ≤ T̂, then (1−β)/(1−β^t) ≤ γ^{t−1}. If t > T̂, then (1−β)/(1−β^t) < γ^{T̂−1}.
Proof. Define h(t) = γ^{t−1}(1−β^t), whose derivative is
h′(t) = γ^{t−1}(1−β^t) ln γ + γ^{t−1}(−β^t) ln β = γ^{t−1}[ln γ − β^t(ln γ + ln β)] = γ^{t−1}[1 − β^t(1 + log_γ β)] ln γ.
A simple calculation shows 1 − β^t(1 + log_γ β)|_{t=0} = −log_γ β < 0 and lim_{t→+∞} 1 − β^t(1 + log_γ β) = 1. At t₀ ≜ −log_β(1 + log_γ β), we have 1 − β^{t₀}(1 + log_γ β) = 0. Because 1 − β^t(1 + log_γ β) is monotonically increasing in t and γ^{t−1} ln γ is negative, h′(t) ≥ 0 if t ≤ t₀ and h′(t) < 0 otherwise. Therefore h(t) is unimodal (increasing then decreasing), so its minimum over any interval is attained at an endpoint. Because h(1) = 1−β and h(T̂) = γ^{T̂−1}(1−β^{T̂}) ≥ 1−β > 0, we have h(t) ≥ 1−β for t = 1, ..., T̂. Thus, for all t ∈ [1, T̂], (1−β)/(1−β^t) ≤ γ^{t−1}.
For t > T̂, because (1−β)/(1−β^t) monotonically decreases in t, we have (1−β)/(1−β^t) < (1−β)/(1−β^{T̂}) ≤ γ^{T̂−1}.
C PROOFS
Proof of Theorem 3.1. Because all sample gradients are G-Lipschitz continuous, the sensitivity of the averaged gradient is upper bounded by G/N. Based on Lemma B.2, the privacy cost of gₜ is 1/(2σₜ²).¹
Here, we make the output of each iteration a tuple (θ_{t+1}, v_{t+1}). For the 1st iteration, because θ₁ is randomly initialized and does not embrace private information, the mapping
[v₂, θ₂] = [g₁, θ₁ − η₁g₁]
is ρ̂₁-zCDP where ρ̂₁ = 1/(2σ₁²).
¹For brevity, when we say the privacy cost of some value, e.g., a gradient, we actually refer to the cost of the mechanism that outputs the value.
Suppose the output of the t-th iteration, (θₜ, vₜ), is ρ̂ₜ-zCDP. At each iteration, we have the mapping (θₜ, vₜ) → (θ_{t+1}, v_{t+1}) defined as
[v_{t+1}, θ_{t+1}] = [φ(vₜ, gₜ), θₜ − ηₜφ(vₜ, gₜ)].
Thus, the output tuple (θ_{t+1}, v_{t+1}) is (ρ̂ₜ + 1/(2σₜ²))-zCDP by Lemma B.1.
The recursion implies that (θ_{T+1}, v_{T+1}) has privacy cost
ρ̂_{T+1} = ρ̂_T + 1/(2σ_T²) = ⋯ = ∑_{t=1}^T 1/(2σₜ²) = (1/2) ∑_{t=1}^T ρₜ ≤ (1/2)(R − R_T) ≤ (1/2) R.
Let ρ = ρ̂_{T+1}. Then we get the conclusion.
C.1 GRADIENT DESCENTS
Proof of Theorem 4.1. With the definition of smoothness in Definition 3.3 and Eq. (1), we have
f(θt+1)− f(θt) ≤ −ηt∇>t (∇t +Gσtνt/N) + 1
2 Mη2t ‖∇t +Gσtνt/N‖
2
= −ηt(1− 1
2 Mηt) ‖∇t‖2 − (1−Mηt)ηt∇>t Gσtνt/N +
1 2 Mη2t ‖Gσtνt/N‖ 2
≤ −2µηt(1− 1
2 Mηt)(f(θt)− f(θ∗))− (1−Mηt)ηt∇>t Gσtνt/N
+ 1
2 Mη2t ‖Gσtνt/N‖ 2 .
where the last inequality is due to the Polyak-Lojasiewicz condition. Taking expectation on both sides, we can obtain
E[f(θt+1)]− E[f(θt)] ≤ −2µηt(1− M
2 ηt)(E[f(θt)]− f(θ∗)) +
M
2 (ηtGσt/N)
2E ‖νt‖2
which can be reformulated by substacting f(θ∗) on both sides and re-arranged as E[f(θt+1)]− f(θ∗) ≤ ( 1− 2µηt(1− M
2 ηt)
) (E[f(θt)]− f(θ∗)) + M
2 (ηtGσt/N)
2D
Recursively using the inequality, we can get
E[f(θT+1)]− f(θ∗) ≤ T∏ t=1 ( 1− 2µηt(1− M 2 ηt) ) (E[f(θ1)]− f(θ∗))
+ MD
2 T∑ t=1 T∏ i=t+1 ( 1− 2µηi(1− M 2 ηi) ) (ηtGσt/N) 2.
Let ηt ≡ 1/M . Then the above inequality can be simplified as
E[f(θT+1)]− f(θ∗) ≤ γT (E[f(θ1)]− f(θ∗)) +R T∑ t=1 γT−t MD 2R ( ηtG N )2 σ2t
= γT (E[f(θ1)]− f(θ∗)) +R T∑ t=1 γT−tασ2t (E[f(θ1)]− f(θ∗))
= ( γT +R ∑T t=1 qtσ 2 t ) (f(θ1)− f(θ∗))
Proof of Theorem 4.2. The minimizer of the upper bound of Eq. (3) can be written as
T ∗ = arg min T γT + ακ(1− γT )T (21)
where we substitute σ2 = T/R in the second line. To find the convex minimization problem, we need to vanishing its gradient which involves an equation like TγT = c for some real constant c. However, the solution is Wk(c) for some integer k where W is Lambert W function which does not have a simple analytical form. Instead, because γT > 0, we can minimize a surrogate upper bound as following
T ∗ = arg min T
γT + ακT = 1
ln(1/γ) ln
( ln(1/γ)
κα
) , if κα+ ln γ < 0 (22)
where we use the surrogate upper bound in the second line and utilize γ = 1 − 1κ . However, the minimizer of the surrogate objective is not optimal for the original objective. When κ is large, the term, −ακγTT , cannot be neglected as we expect. On the other hands, T suffers from explosion if κ→∞ and meanwhile 1/γ →+ 1. The tendency is counterintuitive since a small T should be taken for sharp losses. To fix the issue, we change the form of T ∗ as
T* = (1/ln(1/γ)) ln(1 + ln(1/γ)/α),  (23)
which gradually converges to 0 as κ→∞. Now we substitute Eq. (23) into the original objective function, Eq. (21), to get
ERUB_uniform = (1/(1 + ln(1/γ)/α))·[1 + κ ln(1 + ln(1/γ)/α)].  (24)
Notice that
ln(1/γ) = ln(κ/(κ − 1)) = ln(1 + 1/(κ − 1)) ≤ 1/(κ − 1) ≤ 1/(cκ),
because κ ≥ 1/(1 − c) > 1 for some constant c ∈ (0, 1). In addition,
ln(1/γ) = −ln(1 − 1/κ) ≥ 1/κ.
Now, we can upper bound Eq. (24) as
ERUB_uniform ≤ (κ/(κ + 1/α))[1 + κ ln(1 + 1/(cκα))] ≤ c₁(κ/(κ + 1/α))·κ[ln(1 + 1/(κα)) + ln(1/c)] ≤ c₁c₂(κ²/(κ + 1/α)) ln(1 + 1/(κα))
for some constants c₁, c₂ and large enough 1/α. Also, we can get the lower bound
ERUB_uniform ≥ (cκ/(cκ + 1/α))[1 + κ ln(1 + 1/(κα))] ≥ c(κ²/(κ + 1/α)) ln(1 + 1/(κα)),
where we use the condition c ∈ (0, 1). Thus, ERUB_uniform = Θ((κ²/(κ + 1/α)) ln(1 + 1/(κα))).
Proof of Lemma 4.1. By ∑_{t=1}^T σ_t^{−2} = R and the Cauchy–Schwarz inequality, we can derive the achievable lower bound
R∑_t q_tσ_t² = ∑_t (1/σ_t²)·∑_t q_tσ_t² ≥ (∑_{t=1}^T √q_t)²,
where the inequality becomes an equality if and only if s/σ_t² = q_tσ_t², i.e., σ_t = (s/q_t)^{1/4}, for some positive constant s. The equality ∑_{t=1}^T σ_t^{−2} = R immediately gives √s = (1/R)∑_{t=1}^T √q_t. Thus, we get the σ_t.
Notice that
T∑_{t=1}^T q_t − (∑_{t=1}^T √q_t)² = T²·(1/T)∑_{t=1}^T (√q_t − (1/T)∑_{i=1}^T √q_i)² = T² Var[√q_t],  (25)
where the variance is w.r.t. t.
Proof of Theorem 4.3. The upper bound in Eq. (3) can be written as
ERUB_dyn = γ^T + ∑_{t=1}^T γ^{T−t}αRσ_t² = γ^T + α(∑_{t=1}^T √(γ^{T−t}))² = γ^T + α((1 − γ^{T/2})/(1 − √γ))²,
where we make use of Lemma 4.1. Then, the minimizer of the ERUB is
T* = argmin_T γ^T + α((1 − γ^{T/2})/(1 − √γ))² = 2 log_γ(α/(α + (1 − √γ)²)).  (26)
We can substitute Eq. (26) into ERUB_dyn to get
ERUB_dyn^min = (α/(α + (1 − √γ)²))² + α(1/(1 − √γ))²(1 − α/(α + (1 − √γ)²))²
= (α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1))² + α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1)²
= α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1).
Notice that (1 − √γ)^{−2} = κ² + κ² − κ + 2κ√(κ(κ − 1)) = κ(2κ − 1 + 2√(κ(κ − 1))), which is bounded by
κ(2κ − 1 + 2√(κ(κ − 1))) ≤ 4κ²,
κ(2κ − 1 + 2√(κ(κ − 1))) ≥ κ(2κ − (3κ − 2) + 2√((κ − 1)(κ − 1))) = κ(−κ + 2 + 2κ − 2) = κ².
Therefore, κ ≤ (1 − √γ)^{−1} ≤ 2κ, with which we can derive
ERUB_dyn^min ≤ 4κ²α/(κ²α + 1),
ERUB_dyn^min ≥ κ²α/(4κ²α + 1) ≥ (1/4)·κ²α/(κ²α + 1).
Thus, ERUB_dyn^min = Θ(κ²α/(κ²α + 1)).
C.2 GRADIENT DESCENTS WITH MOMENTUM
Proof of Theorem 4.4. Without loss of generality, we absorb the factor Cσ_t/N into the variance of ν_t such that ν_t ∼ N(0, (C²σ_t²/N²)I) and g_t ← ∇_t + ν_t. Define b_t = 1 − β^t.
By smoothness and Eq. (1), we have
f(θt+1)− f(θt) ≤ ∇>t (θt+1 − θt) + 1
2 M ‖θt+1 − θt‖2
= −ηt b2t bt∇>t vt+1 + 1 2 M η2t b2t ‖vt+1‖2
= ηt b2t
( ‖bt∇t − vt+1‖2 − ‖bt∇t‖2 − ‖vt+1‖2 ) + 1
2 M η2t b2t ‖vt+1‖2
= ηt b2t ‖bt∇t − vt+1‖2︸ ︷︷ ︸
U1(t)
−ηt ‖∇t‖2 − ηt b2t (1− 1 2 Mηt) ‖vt+1‖2 , (27)
where only the U1(t) is non-negative. Specifically, U1(t) describes the difference between current gradient and the average. We can expand vt+1 to get an upper bound:
U1(t) = ‖bt∇t − vt+1‖2 = ∥∥∥(1− β)∑t
i=1 βt−i∇t − (1− β) ∑t i=1 βt−igi ∥∥∥2 = (1− β)2 ∥∥∥∑t i=1 βt−i(∇t − gi) ∥∥∥2
= (1− β)2 ∥∥∥∑t
i=1 βt−i(∇t −∇i) + ∑t i=1 βt−i(∇i − gi) ∥∥∥2
≤ 2(1− β)2 [∥∥∥∑t
i=1 βt−i(∇t −∇i) ∥∥∥2 + ∥∥∥∑t i=1 βt−i(∇i − gi) ∥∥∥2]
≤ 2(1− β) bt∑ti=1 βt−i ‖∇t −∇i‖2︸ ︷︷ ︸ U2(t) (gradient variance) +(1− β) ∥∥∥∑t i=1 βt−iνi ∥∥∥2︸ ︷︷ ︸ noise variance
where we use ‖x+ y‖2 ≤ (‖x‖+ ‖y‖)2 ≤ 2(‖x‖2 + ‖y‖2). The last inequality can be proved by Cauchy-Schwartz inequality for each coordinate.
We plug the U1(t) into Eq. (27) and use the PL condition to get
f(θt+1)− f(θt) ≤ ηt b2t U1(t)− ηt ‖∇t‖2 − ηt b2t (1− 1 2 Mηt) ‖vt+1‖2
≤ −ηt ‖∇t‖2 + ηt b2t
2(1− β) [ btU2(t) + (1− β) ∥∥∥∑t i=1 βt−iνi ∥∥∥2] − ηt b2t (1− 1 2 Mηt) ‖vt+1‖2
≤ −2µηt(f(θt)− f(θ∗)) + 2(1− β)ηt
bt U2(t) + 2(1− β)2ηt b2t ∥∥∥∑t i=1 βt−iνi ∥∥∥2 − ηt b2t (1− 1 2 Mηt) ‖vt+1‖2 .
Rearranging terms and taking expectation to show
E[f(θt+1)]− f(θ∗) ≤ γ(E[f(θt)]− f(θ∗)) + 2(1− β)2ηt
b2t
∑t i=1 βt−iE ‖νi‖2
+ 2(1− β)ηt
bt E[U2(t)]− ηt b2t (1− 1 2 Mηt)E ‖vt+1‖2
= γ(E[f(θt)]− f(θ∗)) + 2(1− β)2ηt
b2t
∑t i=1 β2(t−i) C2Dσ2t N2
+ 2(1− β)ηt
bt E[U2(t)]− ηt b2t (1− 1 2 Mηt)E ‖vt+1‖2
where γ = 1− η0/κ = 1− 2µηt. The recursive inequality implies
E[f(θT+1)]− f(θ∗) ≤ γT (f(θ1)− f(θ∗)) + T∑ t=1 γT−t 2(1− β)2ηt b2t ∑t i=1 β2(t−i) C2Dσ2t N2
+ T∑ t=1 γT−t 2(1− β)ηt bt E[U2(t)]− T∑ t=1 γT−t ηt b2t (1− 1 2 Mηt)E ‖vt+1‖2
= ( γT + 2η0αR ∑T t=1 γT−t (1− β)2
b2t
∑t i=1
β2(t−i)σ2t︸ ︷︷ ︸ U3
) (f(θ1)− f(θ∗))
+ T∑ t=1 γT−t 2(1− β)ηt bt E[U2(t)]− T∑ t=1 γT−t ηt b2t (1− 1 2 Mηt)E ‖vt+1‖2︸ ︷︷ ︸
U4(t)
.
where we utilize α = DC 2
2MN2R 1 f(θ1)−f(θ∗) and ηt = η0 2M .
By Lemma B.5, we have T∑ t=1 γT−t 2(1− β)ηt bt U2(t) ≤
η30βγ 2M(1− β)3(γ − β)2 T−1∑ i=1 γT−i ‖vi+1‖2 .
Thus, by 1bt ≥ 1,
U4(t) ≤ η30βγ 2M(1− β)3(γ − β)2 T−1∑ i=1 γT−iE ‖vi+1‖2 − η0 2M (1− η0 4 ) T∑ t=1 γT−tE ‖vt+ | 1. What is the focus of the paper regarding private gradient descent?
2. What are the strengths of the proposed approach, particularly in its theoretical analysis?
3. What are the concerns regarding the paper's contributions?
4. Do you have any questions about relaxing the paper's assumptions to increase its value and interest?
5. How do the results change when using a stochastic gradient?
6. Are there any minor comments or suggestions for improving the paper? | Review | Review
Summary: The paper investigates the idea that a dynamic privacy schedule of decreasing noise can help private gradient descent. The main contribution is a theoretical analysis of a dynamic privacy schedule which helps reduce the utility upper bound of private gradient descent (with or without momentum).
Strengths: 1) Private gradient descent is an important optimization technique in differentially private ML, therefore, understanding its behavior is broadly interesting. 2) The paper investigates both the vanilla private gradient descent as well as a version with momentum.
Concerns: 1) While there is some improvement with the dynamic schedule vs. the fixed schedule, the improvement does not seem significant (ref. Table 1). 2) Results hold only under the PL condition (which is a more general condition than strong convexity) and smoothness.
The results look correct. My current scores are because I see limited value/interest in these results.
Questions:
Maybe relaxing the assumptions might be a way to strengthen the paper. Is that possible?
What is the advantage of using the momentum method in terms of the utility upper bound (ref. Table 1). It looks like (within constant factors) vanilla GD with dynamic schedule works as well as using the momentum.
Do these results hold with a stochastic gradient?
Minor comment: hat{T} not defined in Table 1 |
ICLR | Title
On Dynamic Noise Influence in Differential Private Learning
Abstract
Protecting privacy in learning while maintaining model performance has become increasingly critical in many applications that involve sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which perturbs gradients with noise according to the Differential Privacy protocol. Recent studies show that dynamic privacy schedules with decreasing noise magnitudes can improve the loss at the final iteration, yet theoretical understanding of the effectiveness of such schedules and of their connections to optimization algorithms remains limited. In this paper, we provide a comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions. We first present a dynamic noise schedule minimizing the utility upper bound of PGD, and show how the noise influence from each optimization step collectively impacts the utility of the final model. Our study also reveals how the impact of dynamic noise influence changes when momentum is used. We empirically show that the connection exists for general non-convex losses and that the influence is greatly affected by the loss curvature.
1 INTRODUCTION
In the era of big data, privacy protection in machine learning systems is becoming a crucial topic, as increasing amounts of personal data are involved in training models (Dwork et al., 2020) and malicious attackers are present (Shokri et al., 2017; Fredrikson et al., 2015). In response to the growing demand, differentially-private (DP) machine learning (Dwork et al., 2006) provides a computational framework for privacy protection and has been widely studied in various settings, including both convex and non-convex optimization (Wang et al., 2017; 2019; Jain et al., 2019).
One widely used procedure for privacy-preserving learning is (Differentially) Private Gradient Descent (PGD) (Bassily et al., 2014; Abadi et al., 2016). A typical gradient descent procedure updates its model by the gradients of losses evaluated on the training data. When the data is sensitive, the gradients should be privatized to prevent excess privacy leakage. PGD privatizes a gradient by adding controlled noise. As such, the models from PGD are expected to have lower utility compared to those from unprotected algorithms. In cases where strict privacy control is exercised, i.e., under a tight privacy budget, the accumulated effects of highly-noised gradients may lead to unacceptable model performance. It is thus critical to design effective privatization procedures for PGD that strike a good balance between utility and privacy.
Recent years have witnessed a promising privatization direction that studies how to dynamically adjust the privacy-protecting noise during the learning process, i.e., dynamic privacy schedules, to boost utility under a given privacy budget. One example is (Lee & Kifer, 2018), which reduces the noise magnitude when the loss stops decreasing, based on the observation that gradients become very small when approaching convergence and a static noise scale will overwhelm these gradients. Another example is (Yu et al., 2019), which periodically decreases the magnitude following a predefined strategy, e.g., exponential or step decay. Both approaches confirmed the empirical advantages of decreasing noise magnitudes. Intuitively, the dynamic mechanism may coordinate with certain properties of the learning task, e.g., the training data and the loss surface. Yet there is no theoretical analysis available, and two important questions remain unanswered: 1) What is the form of utility-preferred noise schedules? 2) When and to what extent do such schedules improve utility?
To answer these questions, in this paper we develop a principled approach to construct dynamic schedules and quantify their utility bounds in different learning algorithms. Our contributions
are summarized as follows. 1) For the class of loss functions satisfying the Polyak-Lojasiewicz condition (Polyak, 1963), we show that a dynamic schedule improving the utility upper bound is shaped by the influence of per-iteration noise on the final loss. As the influence is tightly connected to the loss curvature, the advantage of using a dynamic schedule consequently depends on the loss function. 2) Beyond gradient descent, our results show that gradient methods with momentum implicitly introduce a dynamic schedule and result in an improved utility bound. 3) We empirically validate our results on convex and non-convex loss functions (which need not satisfy the PL condition). Our results suggest that the preferred dynamic schedule admits an exponentially decaying form and works better when learning with high-curvature loss functions. Moreover, dynamic schedules yield more utility under stricter privacy conditions (e.g., smaller sample size and smaller privacy budget).
2 RELATED WORK
Differentially Private Learning. Differential privacy (DP) characterizes the chance of an algorithm output (e.g., a learned model) to leak private information in its training data when the output distribution is known. Since outputs of many learning algorithms have undetermined distributions, the probability of their privacy leakages is hard to measure. A common approach to tackle this issue is to inject randomness with known probability distribution to privatize the learning procedures. Classical methods include output perturbation (Chaudhuri et al., 2011), objective perturbation (Chaudhuri et al., 2011) and gradient perturbation (Abadi et al., 2016; Bassily et al., 2014; Wu et al., 2017). Among these approaches, the Private Gradient Descent (PGD) has attracted extensive attention in recent years because it can be flexibly integrated with variants of gradient-based iteration methods, e.g., stochastic gradient descent, momentum methods (Qian, 1999), and Adam (Kingma & Ba, 2014), for both convex and non-convex problems.
Dynamic Policies for Privacy Protection. Wang et al. (2017) studied the empirical risk minimization using dynamic variation reduction of perturbed gradients. They showed that the utility upper bound can be achieved by gradient methods under uniform noise parameters. Instead of enhancing the gradients, Yu et al. (2019); Lee & Kifer (2018) showed the benefits of using a dynamic schedule of privacy parameters or equivalently noise scales. In addition, adaptive sensitivity control (Pichapati et al., 2019; Thakkar et al., 2019) and dynamic batch sizes (Feldman et al., 2020) are also demonstrated to improves the convergence.
Utility Upper Bounds. A utility upper bound is a critical metric for privacy schedules that characterizes the maximum utility that a schedule can deliver in theory. Wang et al. (2017) is the first to prove the utility bound under the PL condition. In this paper, we improve the upper bound by a more accurate estimation of the dynamic influence of step noise. Remarkably, by introducing a dynamic schedule, we further boost the sample-efficiency of the upper bound. With a similar intuition, Feldman et al. (2020) proposed to gradually increase the batch size, which reduces the dependence
on sample size accordingly. Recently, Zhou et al. proved the utility bound by using the momentum of gradients (Polyak, 1964; Kingma & Ba, 2014). Table 1 summarizes the upper bounds of methods studied in this paper (in the last block of rows) and results from state-of-the-art algorithms based on private gradients. Our work shows that considering the dynamic influence can lead to a tighter bound.
3 PRIVATE GRADIENT DESCENT
Notations. We consider a learning task by empirical risk minimization (ERM), f(θ) = (1/N)∑_{n=1}^N f(θ; x_n), on a private dataset {x_n}_{n=1}^N with θ ∈ R^D. The gradient methods are defined as θ_{t+1} = θ_t − η_t∇_t, where ∇_t = ∇f(θ_t) = (1/N)∑_n ∇f(θ_t; x_n) denotes the non-private gradient at iteration t and η_t is the step learning rate. ∇_t^(n) = ∇f(θ_t; x_n) denotes the gradient on a sample x_n. I_c denotes the indicator function that returns 1 if the condition c holds, and 0 otherwise.
Assumptions. (1) In this paper, we assume f(θ) is continuous and differentiable. Many commonly used loss functions satisfy this assumption, e.g., the logistic function. (2) For a learning task, only a finite amount of privacy cost is allowed; the maximum cost is called the privacy budget and denoted R. (3) Generally, we assume that the sample-wise losses f(θ; x) are G-Lipschitz continuous and the empirical loss f(θ) is M-smooth.
Definition 3.1 (G-Lipschitz continuity). A function f(·) is G-Lipschitz continuous if, for G > 0 and all x, y in the domain of f(·), ‖f(y) − f(x)‖ ≤ G‖y − x‖₂.
Definition 3.2 (m-strong convexity). A function f(·) is m-strongly convex if f(y) ≥ f(x) + ∇f(x)^⊤(y − x) + (m/2)‖y − x‖², for some m > 0 and all x, y in the domain of f(·).
Definition 3.3 (M-smoothness). A function is M-smooth w.r.t. the l₂ norm if f(y) ≤ f(x) + ∇f(x)^⊤(y − x) + (M/2)‖y − x‖², for some constant M > 0 and all x, y in the domain of f(·).
For a private algorithm M(d) which maps a dataset d to some output, the privacy cost is measured by the bound of the output difference on adjacent datasets. Adjacent datasets are defined to be datasets that differ in only one sample. In this paper, we use zero-Concentrated Differential Privacy (zCDP, see Definition 3.4) as the privacy measure, because it provides a simple way of adaptively composing privacy costs at each iteration. Various privacy metrics are discussed or reviewed in (Desfontaines & Pejó, 2019). A notable example is the Moments Accountant (MA) (Abadi et al., 2016), which adopts a similar principle for composing privacy costs but is less tight for smaller privacy budgets. We note that alternative metrics can be adapted to our study without major impact on the analysis. Definition 3.4 (ρ-zCDP (Bun & Steinke, 2016)). Let ρ > 0. A randomized algorithm M : D^n → R satisfies ρ-zCDP if, for all adjacent datasets d, d′ ∈ D^n, D_α(M(d)‖M(d′)) ≤ ρα, ∀α ∈ (1, ∞), where D_α(·‖·) denotes the Rényi divergence (Rényi, 1961) of order α.
zCDP provides a linear composition of privacy costs of sub-routine algorithms. When the input vector is privatized by injecting Gaussian noise N(0, σ_t²I) at the t-th iteration, the composed privacy cost is proportional to ∑_t ρ_t, where the step cost is ρ_t = 1/σ_t². For simplicity, we absorb the constant coefficient into the (residual) privacy budget R. The formal theorems for the privacy cost of composition and Gaussian noising are included in Lemmas B.1 and B.2.
Generally, we define the Private Gradient Descent (PGD) method by the iterations, for t = 1, . . . , T:
θ_{t+1} = θ_t − η_tφ_t = θ_t − η_t(∇_t + σ_tGν_t/N),  (1)
where φ_t = g_t is the gradient privatized from ∇_t as shown in Algorithm 1, G/N bounds the sensitivity of the averaged gradient with respect to removing one sample gradient, and ν_t ∼ N(0, I) is a vector with i.i.d. standard Gaussian entries. We use σ_t to denote the noise scale at step t and use σ to collectively represent the schedule (σ_1, . . . , σ_T) when not confusing. When the Lipschitz constant is unknown, we can control the upper bound by scaling the gradient whenever its norm exceeds some constant. This scaling operation is often called clipping in the literature, since it clips the gradient norm at a threshold. After the gradient is noised, we apply a modification, φ(·), to enhance its utility. In this paper, we consider two types of φ(·):
φ(m_t, g_t) = g_t  (GD),   φ(m_t, g_t) = [β(1 − β^{t−1})m_t + (1 − β)g_t]/(1 − β^t)  (Momentum)
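To make the momentum choice of φ(·) concrete, the following minimal Python sketch (our own illustration with synthetic gradients; the variable names are not from the paper) checks numerically that the recursive form above coincides with the bias-corrected moving average v_{t+1}/(1 − β^t) used later in Section 4.2.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, T = 0.9, 50
g = rng.standard_normal((T, 3))          # synthetic privatized gradients g_1, ..., g_T

m, v = None, np.zeros(3)
for t in range(1, T + 1):
    if t == 1:
        m = g[0]                          # phi(m_1, g_1) = g_1 on the first step
    else:                                 # phi(m_t, g_t) = [beta(1-beta^{t-1}) m_t + (1-beta) g_t] / (1-beta^t)
        m = (beta * (1 - beta ** (t - 1)) * m + (1 - beta) * g[t - 1]) / (1 - beta ** t)
    v = beta * v + (1 - beta) * g[t - 1]  # v_{t+1} = beta v_t + (1-beta) g_t
    assert np.allclose(m, v / (1 - beta ** t))   # equals the bias-corrected moving average
```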
We now show that the PGD using Algorithm 1 guarantees a privacy cost less than R:
Algorithm 1 Privatizing Gradients
Input: raw gradients [∇_t^(1), . . . , ∇_t^(n)] (n = N by default), momentum m_t, residual privacy budget R_t (the full budget is R and R_1 = R).
1: ρ_t ← 1/σ_t², ∇_t ← (1/n)∑_{i=1}^n ∇_t^(i)  ▷ budget request
2: if ρ_t < R_t then
3:   R_{t+1} ← R_t − ρ_t
4:   g_t ← ∇_t + Gσ_tν_t/N, ν_t ∼ N(0, I)  ▷ privacy noise
5:   m_{t+1} ← φ(m_t, g_t), or g_1 if t = 1
6:   return η_t m_{t+1}, R_{t+1}  ▷ utility projection
7: else
8:   Terminate
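A minimal Python sketch of one step of Algorithm 1 is given below. It is our own illustration (the helper name, the clipping constant G, and the learning-rate handling are assumptions, not the authors' code) of how the per-step budget request, Gaussian noising, and update modification fit together.

```python
import numpy as np

def privatize_gradient_step(per_sample_grads, sigma_t, residual_budget, m_prev, t,
                            G=1.0, eta=0.1, beta=0.9, momentum=True, N=None):
    """One step of Algorithm 1 (sketch): request budget, noise the mean gradient, apply phi."""
    rho_t = 1.0 / sigma_t ** 2                        # step privacy cost (constants absorbed into R)
    if rho_t >= residual_budget:
        return None                                   # budget exhausted -> terminate
    grads = np.asarray(per_sample_grads)
    n = len(grads)
    N = N if N is not None else n
    mean_grad = grads.mean(axis=0)
    g_t = mean_grad + (G * sigma_t / N) * np.random.standard_normal(mean_grad.shape)
    if momentum and t > 1:                            # phi for the momentum variant
        m_t = (beta * (1 - beta ** (t - 1)) * m_prev + (1 - beta) * g_t) / (1 - beta ** t)
    else:                                             # phi(m_t, g_t) = g_t for plain GD (and the first step)
        m_t = g_t
    return eta * m_t, m_t, residual_budget - rho_t    # update, new momentum state, residual budget
```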
Theorem 3.1. Suppose f(θ; x) is G-Lipschitz continuous and the PGD algorithm with privatized gradients defined by Algorithm 1 stops at step T. The PGD algorithm outputs θ_T and satisfies ρ-zCDP, where ρ ≤ R/2.
Note that Theorem 3.1 allows σt to be different throughout iterations. Next we present a principled approach for deriving dynamic schedules optimized for the final loss f(θT ).
4 DYNAMIC POLICIES BY MINIMIZING UTILITY UPPER BOUNDS
To characterize the utility of the PGD, we adopt the Expected Excess Risk (EER), which notion is widely used for analyzing the convergence of random algorithms, e.g., (Bassily et al., 2014; Wang et al., 2017). Due to the presence of the noise and the limitation of learning iterations, optimization using private gradients is expected to reach a point with a higher loss (i.e., excess risk) as compared to the optimal solution without private protection. Define θ∗ = arg minθ f(θ), after Algorithm 1 is iterated for T times in total, the EER gives the expected utility degradation:
EER = Eν [f(θT+1)]− f(θ∗).
Due to the variety of loss function and complexity of recursive iterations, an exact EER with noise is intractable for most functions. Instead, we study the worst case scenario, i.e., the upper bound of the EER, and our goal is to minimize the upper bound. For consistency, we call the upper bound of EER divided by the initial error as ERUB. Since the analytical form of EER is either intractable or complicated due to the recursive iterations of noise, studying the ERUB is a convenient and tractable alternative. The upper bound often has convenient functional forms which are (1) sufficiently simple, such that we can directly minimize it, and (2) closely related to the landscape of the objective depending on both the training dataset and the loss function. As a consequence, it is also used in previous PGD literature (Pichapati et al., 2019; Wang et al., 2017) for choosing proper parameters. Moreover, we let ERUBmin be the achievable optimal upper bound by a specific choice of parameters, e.g., the σ and T .
In this paper, we consider the class of loss functions satisfying the Polyak-Lojasiewicz (PL) condition, which bounds losses by the corresponding gradient norms. It is more general than m-strong convexity: if f is differentiable and M-smooth, then m-strong convexity implies the PL condition.
Definition 4.1 (Polyak-Lojasiewicz condition (Polyak, 1963)). For f(θ), there exists µ > 0 and for every θ, ‖∇f(θ)‖2 ≥ 2µ(f(θ)− f(θ∗)).
The PL condition helps us reveal how the influence of step noise propagates to the final excess error, i.e., the EER. Though the assumption was also used previously in Wang et al. (2017); Zhou et al. (2020), neither of them discussed the propagated influence of noise. In the following sections, we will show how this influence can tighten the upper bound for gradient descent and its momentum variant.
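As a purely illustrative example of the PL condition (not part of the paper's experiments), the Python snippet below checks the inequality numerically for an ordinary least-squares loss, where a valid µ is the smallest eigenvalue of AᵀA/N.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 5
A, b = rng.standard_normal((N, D)), rng.standard_normal(N)

f = lambda th: 0.5 * np.mean((A @ th - b) ** 2)
grad = lambda th: A.T @ (A @ th - b) / N
theta_star = np.linalg.lstsq(A, b, rcond=None)[0]     # minimizer of f
mu = np.linalg.eigvalsh(A.T @ A / N).min()            # PL constant for this quadratic loss

for _ in range(1000):                                  # ||grad f||^2 >= 2 mu (f - f*) at random points
    th = theta_star + rng.standard_normal(D)
    assert grad(th) @ grad(th) >= 2 * mu * (f(th) - f(theta_star)) - 1e-9
```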
4.1 GRADIENT DESCENT METHODS
For the brevity of our discussion, we first define the following constants:
1/α ≜ (2RMN²/(DG²))·(f(θ_1) − f(θ*)),  κ ≜ M/µ,  and  γ ≜ 1 − 1/κ,  (2)
which satisfy κ ≥ 1 and γ ∈ [0, 1). Note that κ is the condition number of f(·) if f(·) is strongly convex. κ tends to be large if the function is sensitive to small differences in inputs, and 1/α tends to be large if more samples are provided and the privacy budget is less strict. The convergence of PGD under the PL condition has been studied for private (Wang et al., 2017) and non-private (Karimi et al., 2016; Nesterov & Polyak, 2006; Reddi et al., 2016) ERM. Below we extend the bound in (Wang et al., 2017) by considering the dynamic influence of noise and relaxing σ_t to be dynamic: Theorem 4.1. Let α, κ and γ be defined in Eq. (2), and η_t = 1/M. Suppose f(θ; x_i) is G-Lipschitz and f(θ) is M-smooth satisfying the Polyak-Lojasiewicz condition. For PGD, the following holds:
ERUB = γ^T + R∑_{t=1}^T q_tσ_t²,  where q_t ≜ γ^{T−t}α.  (3)
In Eq. (3), the step noise magnitude σ_t² has an exponential influence, q_t, on the EER. The dynamic nature of this influence is the key to proving a tighter bound. Moreover, in the presence of the dynamic influence, it is natural to choose a dynamic σ_t². When relaxing q_t to a static 1, a static σ_t² was studied by Wang et al. They proved a bound which is nearly optimal except for a ln²N factor. To get the optimal bound, in the following sections, we look for the σ and T that minimize the upper bound.
4.1.1 UNIFORM SCHEDULE
The uniform setting of σ_t has been previously studied in Wang et al. (2017). Here, we show that the bound can be further tightened by considering the dynamic influence of iterations and a proper T. Theorem 4.2. Suppose conditions in Theorem 4.1 are satisfied. When σ_t² = T/R, let α, γ and κ be defined in Eq. (2) and let T be:
T = ⌈O(κ ln(1 + 1/(κα)))⌉.  (4)
Meanwhile, if κ ≥ 1/(1 − c) > 1 and 1/α > 1/α_0 for some constants c ∈ (0, 1) and α_0 > 0, the corresponding bound is:
ERUB_min^uniform = Θ((κ²/(κ + 1/α)) ln(1 + 1/(κα))).  (5)
Sketch of proof. The key of the proof is to find a proper T minimizing
ERUB = γ^T + ∑_{t=1}^T γ^{T−t}αRσ² = γ^T + αT(1 − γ^T)/(1 − γ) = γ^T + ακ(1 − γ^T)T,
where we use σ_t = √(T/R). Setting its gradient to zero requires solving γ^T ln γ + ακ(1 − γ^T) − ακTγ^T ln γ = 0, which is intractable. In (Wang et al., 2017), T is chosen to be O(ln(1/α)) and the ERUB is relaxed to γ^T + ακT². The approximation results in a less tight bound, O(α(1 + κ ln²(1/α))), which explodes as κ → ∞. We observe that for a very sharp loss function, i.e., a large κ, any minor perturbation may result in tremendously fluctuating loss values. In this case, not stepping forward is a good choice. Thus, we choose T = (1/ln(1/γ)) ln(1 + ln(1/γ)/α) ≤ O(κ ln(1 + 1/(κα))), which converges to 0 as κ → +∞. The full proof is deferred to the appendix.
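The closed-form choice of T above is easy to evaluate numerically. The short snippet below (our own illustration with arbitrary constants) evaluates the relaxed objective at this T and compares it against a brute-force search over T; the two attain values of the same order.

```python
import numpy as np

kappa, alpha = 50.0, 1e-6                       # illustrative constants only
gamma = 1 - 1 / kappa

def erub_uniform(T):                            # gamma^T + alpha*kappa*(1 - gamma^T)*T from the sketch
    return gamma ** T + alpha * kappa * (1 - gamma ** T) * T

T_closed = int(np.ceil(np.log(1 + np.log(1 / gamma) / alpha) / np.log(1 / gamma)))
T_grid = np.arange(1, 5000)
T_best = T_grid[np.argmin(erub_uniform(T_grid))]
print(T_closed, erub_uniform(T_closed))         # closed-form choice of T and its objective value
print(T_best, erub_uniform(T_best))             # brute-force minimizer for comparison
```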
4.1.2 DYNAMIC SCHEDULE
A dynamic schedule can improve the upper bound delivered by the uniform schedule. First, we observe that the excess risk in Eq. (3) is upper bounded by two terms: the first term characterizes the error due to the finite iterations of gradient descents; the second term, a weighted sum, comes from error propagated from noise at each iteration. Now we show for any {qt|qt > 0, t = 1, . . . , T} (not limited to the qt defined in Eq. (3)), there is a unique σt minimizing the weighted sum:
Lemma 4.1 (Dynamic schedule). Suppose the σ_t satisfy ∑_{t=1}^T σ_t^{−2} = R. Given a positive sequence {q_t}, the following holds:
min_σ R∑_{t=1}^T q_tσ_t² = (∑_{t=1}^T √q_t)²,  attained when σ_t² = (1/R)·∑_{i=1}^T √q_i / √q_t.  (6)
Remarkably, the gap between this minimum and T∑_{t=1}^T q_t (attained by the uniform σ_t) increases monotonically with the variance of √q_t over t.
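Lemma 4.1 translates directly into a recipe for computing noise scales from a given influence sequence. The sketch below (ours, with the influence of Eq. (3) as an example) also checks that the full budget is spent.

```python
import numpy as np

def dynamic_sigma(q, R):
    """sigma_t^2 = (1/R) * sum_i sqrt(q_i) / sqrt(q_t), the minimizer in Eq. (6)."""
    q = np.asarray(q, dtype=float)
    sigma_sq = np.sqrt(q).sum() / (R * np.sqrt(q))
    return np.sqrt(sigma_sq)

gamma, alpha, R, T = 0.99, 1e-4, 10.0, 300
q = alpha * gamma ** (T - np.arange(1, T + 1))         # influence q_t = gamma^{T-t} alpha from Eq. (3)
sigma = dynamic_sigma(q, R)
assert np.isclose((1.0 / sigma ** 2).sum(), R)          # the constraint sum_t sigma_t^{-2} = R is met
print(sigma[0], sigma[-1])                              # noise magnitude decreases over iterations
```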
We see that the dynamics in σ_t come from the non-uniform nature of the weights q_t. Since q_t represents the impact of σ_t on the final error, we refer to it as the influence. Given the dynamic schedule in Eq. (6), it is of interest to what extent the ERUB can be improved. First, we present Theorem 4.3 to show the optimal T and ERUB. Theorem 4.3. Suppose conditions in Theorem 4.1 are satisfied. Let α, κ and γ be defined in Eq. (2). When η_t = 1/M, the σ_t (based on Eqs. (3) and (6)) and the T minimizing the ERUB are:
σ_t² = (1/R)·((γ^{−T/2} − 1)/(1 − √γ))·γ^{t/2},  T = ⌈2κ ln(1 + 1/(κα))⌉.  (7)
Meanwhile, when κ ≥ 1 and 1/α ≥ 1/α_0 for some positive constant α_0, the minimal bound is:
ERUB_min^dynamic = Θ(κ²/(κ² + 1/α)).  (8)
4.1.3 DISCUSSION
In Theorems 4.2 and 4.3, we present the tightest bounds for functions satisfying the PL condition, to our best knowledge. We further analyze the advantages of our bounds from two aspects: sample efficiency and robustness to sharp losses.
Sample efficiency. Since a dataset cannot be infinitely large, it is critical to know how accurately the model can be trained privately with a limited number of samples. Formally, it is of interest to study the case when κ is fixed and N is large enough such that α ≪ 1. Then the upper bound in Eq. (5) becomes
ERUB_min^uniform ≤ O(κ²α ln(1/(κα))) ≤ Õ(DG² ln(N)/(MN²R)),  (9)
where we ignore κ and other logarithmic constants within Õ, as done in Wang et al. (2017). As a result, we get a bound very similar to (Wang et al., 2017), except that R is replaced by R_MA = 2ε²/ln(1/δ) using the Moments Accountant. In comparison, based on Lemma B.3, R = 2ρ = 2ε + 4 ln(1/δ) + 4√(ln(1/δ)(ε + ln(1/δ))) if θ_T satisfies ρ-zCDP. Because ln(1/δ) > 1, it is easy to see that R = R_zCDP > R_MA when ε ≤ 2 ln(1/δ). Compared to the bound reported in (Wang et al., 2017), our bound saves a factor of lnN and thus requires fewer samples to achieve the same accuracy. Remarkably, the saving is due to maintaining the influence terms, as shown in the proof of Theorem 4.2.
Using the dynamic schedule, we have ERUB_min^dynamic ≤ O(α) = O(DG²/(MN²R)), which saves another lnN factor in comparison to the uniform schedule in Eq. (9). As shown in Table 1, this advantage is maintained when comparing with other baselines.
Robustness. Besides sample efficiency, we are also interested in the robustness of the convergence in the presence of privacy noise. Because of the privacy noise, the convergence of private gradient descent will be unable to reach an ideal spot. Specifically, when the samples are noisy or have noisy labels, the loss curvature may be sharp. The sharpness implies a low degree of smoothness (a large M) or a very small PL parameter. Thus, gradients may change tremendously at some steps, especially in the presence of privacy noise. As illustrated in the left figure, a highly-curved loss function (the green curve) results in a higher final loss (the red dashed line) than the flatter curves (purple and blue lines). Such changes have a more critical impact when only a small number of iterations can be executed due to the privacy constraint. Assume α is some constant while κ ≫ 1/α; we immediately get:
ERUB_min^uniform = Θ(κ ln(1 + 1/(κα))) = Θ(1/α) ≤ O(MN²R/(DG²)),  ERUB_min^dynamic = Θ(1).
Both are robust, but the dynamic schedule has a smaller factor, since 1/α could be a large number. In addition, the factor implies that when more samples are used, the dynamic schedule is more robust.
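The contrast can be made concrete by plugging numbers into Eqs. (5) and (8). The snippet below (our own illustration, ignoring constant factors) evaluates both bounds as κ grows with α fixed: the uniform bound grows toward 1/α while the dynamic bound saturates near 1.

```python
import numpy as np

alpha = 1e-2
for kappa in [10, 1e2, 1e3, 1e4]:
    uniform = kappa ** 2 / (kappa + 1 / alpha) * np.log(1 + 1 / (kappa * alpha))   # Eq. (5), up to constants
    dynamic = kappa ** 2 * alpha / (kappa ** 2 * alpha + 1)                        # Eq. (8), rewritten with alpha
    print(f"kappa={kappa:>8}: uniform~{uniform:.3f}, dynamic~{dynamic:.3f}")
```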
4.2 GRADIENT DESCENT METHODS WITH MOMENTUM
Section 4.1 shows that the step noise has an exponentially increasing influence on the final loss, and therefore a decreasing noise magnitude improves the utility upper bound by a lnN factor. However, the proper schedule can be hard to find when the curvature information, e.g., κ, is absent. A parameterized method that less depends on the curvature information is preferred. On the other hand, long-term iterations will result in forgetting of the initial iterations, since accumulated noise overwhelmed the propagated information from the beginning. This effect will reduce the efficiency of the recursive learning frameworks.
As an alternative to GD, the momentum method can mitigate the two issues. It was originally proposed to stabilize gradient estimation (Polyak, 1964). In this section, we show that momentum (agnostic about the curvature) can flatten the dynamic influence and improve the utility upper bound. Previously, Pichapati et al. used the momentum as an estimate of the gradient mean, without discussing convergence improvements. Zhou et al. gave a bound for Adam with DP. However, their derivation is based on the gradient norm, which results in a looser bound (see Table 1).
The momentum method stabilizes gradients by taking a moving average of the historical values and thus greatly reduces the variance. The update φ(m_t, g_t) can be rewritten as:
m_{t+1} = φ(m_t, g_t) = v_{t+1}/(1 − β^t),  v_{t+1} = βv_t + (1 − β)g_t = (1 − β)∑_{i=1}^t β^{t−i}g_i,  v_1 = 0,  (10)
where β ∈ [0, 1]. Note that v_{t+1} is a biased estimate of the gradient expectation while m_{t+1} is unbiased. Theorem 4.4 (Convergence under PL condition). Suppose f(θ; x_i) is G-Lipschitz, and f(θ) is M-smooth and satisfies the Polyak-Lojasiewicz condition. Assume β ≠ γ and β ∈ (0, 1). Let η_t = η_0/(2M) and η_0 ≤ 8(√(1 + 64βγ(γ − β)^{−2}(1 − β)^{−3}) + 1)^{−1}. Then the following holds:
EER ≤ (γ^T + 2Rη_0α·U_3(σ, T))·(f(θ_1) − f(θ*)) − ζ·(η_0/(2M))∑_{t=1}^T γ^{T−t} E‖v_{t+1}‖²,  (11)
where 2Rη_0α·U_3(σ, T) is the noise-variance term, the last term is the momentum effect, and
γ = 1 − η_0/κ,  ζ = 1 − η_0²/(β(1 − β)³) − η_0/4 ≥ 0,  (12)
U_3 = ∑_{t=1}^T γ^{T−t}·((1 − β)²/(1 − β^t)²)·∑_{i=1}^t β^{2(t−i)}σ_i².  (13)
The upper bound includes three parts that influence the bound differently. (1) Convergence. The convergence term is mainly determined by η_0 and κ. η_0 should be in (0, κ) so that the upper bound converges. A large η_0 is preferred to speed up convergence if it does not worsen the other two terms. (2) Noise Variance. The second term, compressed into U_3, is the effect of the averaged noise ∑_{i=1}^t β^{2(t−i)}σ_i². One difference introduced by the momentum is the factor (1 − β)/(1 − β^t), which is less than γ^t at the beginning and converges to the non-zero constant 1 − β. Therefore, in U_3, γ^{T−t}(1 − β)/(1 − β^t) stays below γ^T in the meantime. Furthermore, when t > T̂, the moving average ∑_{i=1}^t β^{2(t−i)}σ_i² smooths the influence of each σ_t. In Appendix D, we will see that the influence dynamics are less steep than those of GD. (3) Momentum Effect. The momentum-effect term can improve the upper bound when η_0 is small. For example, when β = 0.9 and γ = 0.99, then η_0 ≤ 0.98/M, which is a reasonable value. Following the analysis, when M is large, which means the gradient norms fluctuate significantly, the momentum term may take the lead. Adjusting the noise scale in this case may be less useful for improving utility.
To give an insight on the effect of dynamic schedule, we provide the following utility bounds.
Theorem 4.5 (Uniform schedule). Suppose the assumptions in Theorem 4.4 are satisfied. Let σ_t² = T/R, and let
T̂ = max t s.t. γ^{t−1} ≥ (1 − β)/(1 − β^t),  T = ⌈O((κ/η_0) ln(1 + η_0/(κα)))⌉.
Given some positive constants c and α_0 > 0 with 1/α > 1/α_0, the following inequality holds:
ERUB_min ≤ O((κ²/(κ + η_0/α))·[I_{T≤T̂} + γ^{T̂−1} ln(1 + η_0/(κα))·I_{T>T̂}]).
Theorem 4.6 (Dynamic schedule). Suppose the assumptions in Theorem 4.4 are satisfied. Let α′ = 2η_0α/(γ(1 − γβ²)), β < γ, and T̂ = max t s.t. γ^{t−1} ≥ (1 − β)/(1 − β^t). Use the following schedule:
σ_t² = (1/R)·∑_{i=1}^T √q_i / √q_t,  T_dyn = ⌈O((2κ/η_0) ln(1 + η_0/(κα)))⌉,
where q_t = c_1γ^{T+t} I_{T≤T̂} + γ^{T̂−1}c_2γ^{T−t} I_{T>T̂} for some positive constants c_1 and c_2. The following inequality holds:
ERUB ≤ γ^T + 2η_0α∑_{t=1}^T Rq_tσ_t²,  ERUB_min ≤ O((κα/(κα + η_0))·((κα/(κα + η_0))·I_{T≤T̂} + I_{T>T̂})).
Discussion. Theoretically, the dynamic schedule is more influential in vanilla gradient descent methods than in the momentum variant. This result is mainly attributed to the averaging operation. The moving average, (1 − β)∑_{i=1}^t β^{t−i}g_i/(1 − β^t), increases the influence of the under-represented initial steps and decreases that of the over-sensitive last steps. Counterintuitively, the preferred dynamic schedule should be increasing, since q_t decreases when t ≤ T̂.
4.3 PRIVATE STOCHASTIC GRADIENT DESCENT (NEW SECTION ON REBUTTAL)
Though PGD provides a guarantee both for utility and privacy, computing gradients of the whole dataset is impractical for large-scale problems. For this sake, studying the convergence of Private Stochastic Gradient Descent (PSGD) is meaningful. The Algorithm 1 can be easily extended to PSGD by subsampling n gradients where the batch size n N . According to (Yu et al., 2019), when privacy is measured by zCDP, there are two ways to account for the privacy cost of PSGD depending on the batch-sampling method: sub-sampling with or without replacement. In this paper, we focus on the random subsampling with replacement since it is widely used in deep learning in literature, e.g., (Abadi et al., 2016; Feldman et al., 2020). Accordingly, we replace N in the definition of α by n because the term is from the sensitivity of batch data (see Eq. (1)). For clarity, we assume that T is the number of iterations rather than epochs and that ∇̃t is mean stochastic gradient. When a batch of data are randomly sampled, the privacy cost of one iteration is cp2/σt where c is some constant, p = n/N is the sample rate, and 1/σ2t is the full-batch privacy cost. Details of the sub-sampling theorems are referred to the Theorem 3 of (Yu et al., 2019) and their empirical setting. Threfore, we can replace the privacy constraint ∑ t p 2/σ2t = R by ∑ t 1/σ 2 t = R
′, where R′ = R/p² = (N²/n²)R. Remarkably, we omit the constant c because it will not affect the results regarding uniform or dynamic schedules. Notice that N²R in the definition of α is replaced by n²R′ = N²R. Thus, the form of α is unchanged, which is convenient for the following derivations.
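Under this accounting, switching from full-batch PGD to PSGD only rescales the budget constraint. A small helper (our own illustration) makes the substitution explicit.

```python
def effective_budget(R, N, n):
    """Rescaled constraint for PSGD: sum_t 1/sigma_t^2 = R' with R' = R / p^2 and p = n / N."""
    p = n / N
    return R / p ** 2

# e.g., a batch of 500 out of 50,000 samples turns a budget R = 10 into R' = 10 * 100^2
print(effective_budget(R=10.0, N=50_000, n=500))
```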
Now we study the utility bound of PSGD. To quantify the randomness of batch sampling, we define a random vector ξt with E[ξt] = 0 and E ‖ξt‖2 ≤ D such that ∇̃t ≤ ∇t + σgξt/n for some positive constant σg. Because ξt has similar property to the privacy noise νt, we can easily extend the PGD bounds to PSGD bounds by following theories. Theorem 4.7 (Utility bounds of PSGD). Let α, κ and γ be defined in Eq. (2), and ηt = 1M . Suppose f(θ;xi) is G-Lipschitz and f(θ) is M -smooth satisfying the Polyak-Lojasiewicz condition. For PSGD, when batch size satisfies n = max{N √ R, 1}, the following holds:
ERUB = γ^T + α_gσ_g² + R′∑_{t=1}^T q_tσ_t²,  where q_t ≜ γ^{T−t}α and ∑_t 1/σ_t² = R′,  (14)
and α_g = D/(2µN²R(f(θ_1) − f(θ*))).
Theorem 4.8 (PSGD with momentum). Let α_g = D/(2µN²R(f(θ_1) − f(θ*))). Suppose the assumptions in Theorem 4.4 hold. When the batch size satisfies n = max{N√R, 1}, U_3(σ, T) has to be replaced by Ũ_3 = U_3^g + U_3, with αR′U_3^g ≤ α_gσ_g²  (15), when PSGD is used.
As shown above, the utility bound of PSGD differs from that of PGD merely by α_gσ_g². Note that α_g = O(D/(N²R)), which fits the order of the dynamic-schedule bounds. In addition, α and the other variables are not changed. Hence, the conclusions w.r.t. the dynamic/uniform schedules remain the same.
5 EXPERIMENTS
We empirically validate the properties of privacy schedules and their connections to learning algorithms. In this section, we briefly review the schedule behavior on quadratic losses under varying data sensitivity. Details of experimental setups and empirical results are available in Appendix D.
We first show the estimated influence of step noise q_t (obtained by retraining the private learning algorithms, cf. Appendix D) in Fig. 2, Left. We see that the trends of the influence are approximately exponential in t. This observation motivates the use of an exponential decay schedule in practice.
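In practice, this observation suggests parameterizing the schedule as an exponential decay and normalizing it to the available budget. A sketch (ours, with an arbitrary decay rate) is shown below.

```python
import numpy as np

def exp_decay_schedule(T, R, r=0.97):
    """sigma_t^2 = c * r^t (noise shrinks over iterations), with c set so that sum_t sigma_t^{-2} = R."""
    t = np.arange(1, T + 1)
    c = np.sum(r ** (-t)) / R
    return np.sqrt(c * r ** t)

sigma = exp_decay_schedule(T=200, R=10.0)
assert np.isclose(np.sum(1.0 / sigma ** 2), 10.0)   # budget is exactly consumed
print(sigma[0], sigma[-1])
```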
We then show the trends on the variance of influence (dashed lines with the right axis) and relative final losses (solid lines with the left axis) in the Middle Pane, where uni denotes the uniform schedule baseline, exp is an exponential schedule, dyn denotes the dynamic schedule minimizing the ERUB. The influences increases steeply when the data scale is large and therefore have a large variance. Meanwhile, dynamic schedules show improvements of the final loss when the variance is large. It reveals the connection between the influence and the dynamic advantage (refer to Lemma 4.1).
We lastly evaluate the impacts from momentum in the Right Pane, using a Deep Neural Network (DNN) with 2 layers and 100 hidden units. Because of time costs of training deep networks, we do not estimate the influence by retraining and then compute schedules. Instead, we grid-search for the schedule hyper-parameters to find the best one. We see that influence modeled by an exponential function (expinfl) has comparable performance of the influence modeled by linear combination of two reverse exponential functions (momexpinfl). The latter only shows advantage in the setting that data scale is 25 and the number of iteration is only 100, which is expected by our analysis Theorems 4.5 and 4.6. The inherent reason is that the dynamic schedule is more effective when T is larger.
6 CONCLUSION
When a privacy budget is provided for a certain learning task, one has to carefully schedule the privacy usage through the learning process. Uniformly scheduling the budget has been widely used in literature whereas increasing evidence suggests that dynamically schedules could empirically outperform the uniform one. This paper provided a principled analysis on the problem of optimal budget allocation and connected the advantages of dynamic schedules to both the loss structure and the learning behavior. We further validated our results through empirical studies.
A COMPARISON OF ALGORITHMS
We present Table 2 as a supplement to Table 1. Asymptotic upper bounds are achieved when the sample size N approaches infinity. Both R and R_{ε,δ}, with R_{ε,δ} < R, are the privacy budgets of the corresponding algorithms. Specifically, R_{ε,δ} = 2ε²/ln(1/δ) < R when the private algorithm is (ε, δ)-DP with ε ≤ 2 ln(1/δ). PGD+Adv. Adv denotes the Advanced Composition method (Bassily et al., 2014). The method assumes that the loss function is 1-strongly convex, which implies the PL condition, and that the optimized variable lies in a convex set of diameter 1 w.r.t. the l2 norm.
PGD+MA. MA denotes the Moments Accountant (Abadi et al., 2016), which improves the composed privacy bound compared with Advanced Composition. The improved privacy bound leads to an enhanced utility bound as a result.
PGD+Adv+BBImp. The dynamic method assumes that the loss is 1-strongly convex and data comes in stream with n ≤ N samples at each round. Their utility upper bound is achieved at some probability p with any positive c.
Adam+MA. The authors prove a convergence bound for the gradient norms which is extended to loss bound by using PL condition. They also presents the results for AdaGrad and GD which are basically of the same upper bound. Out theorems improve their bound by using the recursive derivation based on the PL condition, while their bound is a simple application of the condition on the gradient norm bound.
GD, Non-Private. This method does not inject noise into gradients but limit the number of iterations. With the bound, we can see that our utility bound are optimal with dynamic schedule.
GD+zCDP. We discussed the static and dynamic schedule for the gradient descent method where the dynamic noise influence is the key to tighten the bound.
Momentum+zCDP. Different from GD+zCDP, momentum methods have two phases in their utility upper bound. When T is smaller than some positive constant T̂, the bound is as tight as the non-private one. Afterwards, the momentum bound degrades to the GD bound.
A.1 COMAPRISON OF GENERALIZATION BOUNDS
In addition to the empirical risk bounds in Table 2, in this section we study the true risk bounds, or generalization error bounds. True risk bounds characterize how well the learnt model can generalize to unseen samples subject to the inherent data distribution. By leveraging the generic learning-theory tools, we extend our results to the True Excess Risk (TER) for strongly convex functions as follows. For a model θ, its TER is defined as follows:
TER ≜ E_{x∼X}[E[f(θ; x)]] − min_{θ̂} E_{x∼X}[f(θ̂; x)], where the inner expectation is over the randomness of generating θ (e.g., the noise and stochastic batches). Assume a dataset d consists of N samples drawn i.i.d. from the distribution X. Two approaches could be used to extend the empirical bounds to the true excess risk. One is proposed by Shalev-Shwartz et al. (2009), where the true excess risk of PGD can be bounded with high probability. For example, Bassily et al. (2014) achieved a ln²N/N bound with N² iterations. Alternatively, instead of relying on the probabilistic bound, Bassily et al. (2019) used uniform stability to give a tighter bound. Later, Feldman et al. (2020) improved the efficiency of gradient computation to achieve a similar bound. Both approaches introduce an additive term to the empirical bounds. In this section, we adopt both approaches to investigate the two types of resulting true risk bounds.
(1) True Risk in High Probability. First, we consider the high-probability true risk bound. Based on Section 5.4 from (Shalev-Shwartz et al., 2009) (restated in Theorem A.1), we can relate the EER to the TER. Theorem A.1. Let f(θ;x) be G-Lipschitz, and f(θ) be µ-strong convex loss function given any x ∈ X . With probability at least 1− p over the randomness of sampling the data set d, the following inequality holds:
TER(θ) ≤ √(2G²/(µN))·√(f(θ) − f(θ*)) + 4G²/(pµN),  (16)
where θ∗ = arg minθ f(θ).
To apply the Eq. (16), we need to extend EER, the expectation bound, to a high-probability bound. Following (Bassily et al., 2014) (Section D), we repeate the PGD with privacy budgetR/k for k times. Note, the output of all repetitions is still of R budget. When k = 1, let the EER of the algorithm be denoted as F (R). Then the EER of one execution of the k repetitions is F (R/k) where privacy is accounted by zCDP. When k = log2(1/p) for p ∈ [0, 1], by Markov’s inequality, there exists one repetition whose EER is F (R/ log2(1/p)) with probability at least 1 − 1/2k = 1 − p. Combined with Eq. (16), we use the bounds of uniform schedule and dynamic schedules in Section 4.1.3 to obtain:
TER_uniform ≤ Õ((G²/(µN))·(√(D ln(N) ln(1/p)/(NR)) + 4/p)),  (17)
TER_dynamic ≤ Õ((G²/(µN))·(√(D ln(1/p)/(NR)) + 4/p)),  (18)
where we again ignore κ and other constants. Similarly, we can extend the momentum methods.
(2) True Risk by Uniform Stability. Following Bassily et al. (2019), we use uniform stability (defined in Definition A.1) to extend the empirical bounds. We restate the related definition and theorems as follows. Definition A.1 (Uniform stability). Let s > 0. A randomized algorithm M : D^N → Θ is s-uniformly stable w.r.t. the loss function f if for any neighboring datasets d and d′, we have sup_{x∈X} E[f(M(d); x) − f(M(d′); x)] ≤ s, where the expectation is over the internal randomness of M. Theorem A.2 (See, e.g., (Shalev-Shwartz & Ben-David, 2014)). Suppose M : D^N → Θ is an s-uniformly stable algorithm w.r.t. the loss function f. Let D be any distribution over the data space and let d ∼ D^N. The following holds true.
E_{d∼D^N}[E[f(M(d); D) − f(M(d); d)]] ≤ s, where the second expectation is over the internal randomness of M. f(M(d); D) and f(M(d); d) represent the true loss and the empirical loss, respectively. Theorem A.3 (Uniform stability of PGD from (Bassily et al., 2019)). Suppose η < 2/M for an M-smooth, G-Lipschitz f(θ; x). Then PGD is s-uniformly stable with s = G²Tη/N.
Combining Theorems A.2 and A.3, we obtain the following:
TER ≤ EER + G²ηT/N.
Because the EER in this paper contains a γ^T term or similar exponential terms, unlike (Bassily et al., 2019) we cannot directly minimize the TER upper bound w.r.t. T and η in the presence of a polynomial form of γ^T and T. Therefore, we still use T = O(ln(N²R/D)) and the η minimizing the EER. Note that G²ηT/N ≤ O((G²/(MN)) ln(N²R/D)) ≤ O(G²/M), where we assume N ≫ D and use lnN ≤ N. Because the term O(G²/M) is constant and independent of the dimension, we follow (Bassily et al., 2019) and drop it when comparing the bounds. After dropping the additive term, it is obvious that the advantage of dynamic schedules is maintained, since TER ≤ EER. A similar extension can be derived for (Wang et al., 2017). We summarize the results and compare them to prior works in Table 3, where we include an additional method: Snowball Stochastic Gradient Descent (SSGD). SSGD dynamically schedules the batch size to achieve an optimal convergence rate in linear time.
Discussion. By using uniform stability, we successfully transfer the advantage of our dynamic schedules from empirical bounds to true risk bounds. The inherent reason is that our bounds only need lnN iterations to reach the preferred final loss. With uniform stability, the logarithmic T reduce the gap caused by transferring. Compared to the (Feldman et al., 2020; Bassily et al., 2019), our method has remarkably improved efficiency in T from N or N2 to ln(N). That implies fewer iterations are required for converging to the same generalization error.
B PRELIMINARIES
B.1 PRIVACY
Lemma B.1 (Composition & Post-processing). Let two mechanisms be M : Dn → Y and M ′ : Dn × Y → Z . Suppose M satisfies (ρ1, a)-zCDP and M ′(·, y) satisfies (ρ2, a)-zCDP for ∀y ∈ Y . Then, mechanism M ′′ : Dn → Z (defined by M ′′(x) = M ′(x,M(x))) satisfies (ρ1 + ρ2)-zCDP. Definition B.1 (Sensitivity). The sensitivity of a gradient query∇t to the dataset {xi}Ni=1 is
∆₂(∇_t) = max_n ‖(1/N)∑_{j=1, j≠n}^N ∇_t^(j) − (1/N)∑_{j=1}^N ∇_t^(j)‖₂ = (1/N)·max_n ‖∇_t^(n)‖₂,  (19)
where ∇_t^(n) denotes the gradient of the n-th sample. Lemma B.2 (Gaussian mechanism (Bun & Steinke, 2016)). Let f : D^n → Z have sensitivity ∆. Define a randomized algorithm M : D^n → Z by M(x) ← f(x) + N(0, ∆²σ²I). Then M satisfies (1/(2σ²))-zCDP. Lemma B.3 ((Bun & Steinke, 2016)). If M is a mechanism satisfying ρ-zCDP, then M is (ρ + 2√(ρ ln(1/δ)), δ)-DP for any δ > 0. By solving ρ + 2√(ρ ln(1/δ)) = ε, we can get ρ = ε + 2 ln(1/δ) + 2√(ln(1/δ)(ε + ln(1/δ))).
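The forward direction of Lemma B.3 is straightforward to evaluate. The snippet below (our own illustration) converts a zCDP guarantee into the implied (ε, δ)-DP guarantee.

```python
import math

def zcdp_to_dp(rho, delta):
    """(epsilon, delta)-DP implied by rho-zCDP (Lemma B.3): epsilon = rho + 2*sqrt(rho*ln(1/delta))."""
    return rho + 2 * math.sqrt(rho * math.log(1 / delta))

# e.g., the output of Theorem 3.1 satisfies rho <= R/2
print(zcdp_to_dp(rho=0.5, delta=1e-5))
```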
B.2 AUXILIARY LEMMAS
Lemma B.4. If max_n ‖x_n‖₂ = 1 and (1/N)∑_n x_n = 0, then the gradient sensitivity of the squared loss is
∆₂(∇) = max_i (1/N)·√(2f(θ; x_i))·‖x_i‖₂ ≤ (1/2)(D_M‖θ‖² + 1),
where Θ_M is the set of all possible parameters θ_t generated by the learning algorithm M.
Proof. According to the definition of sensitivity in Eq. (19), we have
∆2(∇) = max i ∥∥∥∇(i)∥∥∥ 2 = max n 1 n ∥∥∥A(i)θ − xi∥∥∥ 2
where we use i denotes the index of sample in the dataset. Here, we assume it is constant 1. We may get ∥∥∥A(i)θ − xi∥∥∥2 2 = ∥∥xi(x>i θ − 1)∥∥22 = (x>i θ − 1)2 ‖xi‖22 = 2f(θ;xi) ‖xi‖22
where f(θ;xi) = 12 (x > i θ − 1)2. Thus,
∆2(∇) = max i
1
N
√ 2f(θ;xi) ‖xi‖2
Since ‖xn‖2 ≤ 1 and 1 N ∑N n=1 xn = 0,
f(θ) = 1
2N N∑ n=1 [(x>n θ) 2 − 2x>n θ + 1] ≤ 1 2N N∑ n=1 [(‖xn‖ ‖θ‖)2 + 1] ≤ 1 2 (DM ‖θ‖2 + 1)
Lemma B.5. Assume assumptions in Theorem 4.4 are satisfied. Given variables defined in Theorem 4.4, the following inequality holds true:
T∑ t=1 γT−t 2(1− β)ηt bt ∑t i=1 βt−i ‖∇t −∇i‖2 ≤ η30βγ 2M(1− β)3(γ − β)2 T−1∑ i=1 γT−i ‖vi+1‖2 .
Proof. We first handle the inner summation. By smoothness, the inequality ‖∇f(x)−∇f(y)‖ ≤ M ‖x− y‖ holds true. Thus,
∑t i=1 βt−i ‖∇t −∇i‖2 ≤M2 ∑t i=1 βt−i ‖θt − θi‖2
= M2 ∑t−1
k=0 βk ‖θt − θt−k‖2
= M2 ∑t−1 k=0 βk ∥∥∥∥∑t−1i=t−k ηivi+1/bi ∥∥∥∥2 ≤M2 ∑t−1 k=0 βk (∑t−1 j=t−k η2j /b 2 j )(∑t−1 i=t−k ‖vi+1‖2 )
where the last inequality is by Cauchy-Schwartz inequality. Because 1bt = 1 1−βt ≤ 1 1−β and ηt = η0 2M ,
∑t i=1 βt−i ‖∇t −∇i‖2 ≤ η20 4(1− β)2 ∑t−1 k=0 βkk ∑t−1 i=t−k ‖vi+1‖2
= η20 4(1− β)2 ∑t−1 k=0 βkk ∑t−1 i=1 ‖vi+1‖2 I(i ≥ t− k)
= η20 4(1− β)2 ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=0 βkkI(k ≥ t− i)
= η20 4(1− β)2 ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i βkk (20)
where I(·) is the indicating function which output 1 if the condition holds true, otherwise 0. Denote the left-hand-side of the conclusion as LHS. We plug Eq. (20) into LHS to get
LHS ≤ T∑ t=1 γT−t 1 bt η30 4M(1− β) ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i βkk
≤ η 3 0 4M(1− β)2 T∑ t=1 γT−t ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i βkk
where we relax the upper bound by 1bt = 1 1−βt ≤ 1 1−β . Using Lemma B.6 can directly lead to the conclusion:
LHS ≤ η 3 0βγ 2M(1− β)3(γ − β)2 T−1∑ i=1 γT−i ‖vi+1‖2 .
Lemma B.6. Given variables defined in Theorem 4.4, the following inequality holds true:
T∑ t=1 γT−t ∑t−1 i=1 ‖vi+1‖2 ∑t−1 k=t−i kβk ≤ 2βγ (γ − β)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2 .
Proof. We first derive the summation:
U1(t, i) , ∑t−1
k=t−i βkk = ∑t−1 k=t−i ∑k j=1 βk
= ∑t−1
k=t−i ∑t−1 j=1 βkI(j ≤ k)
= ∑t−1
j=1 ∑t−1 k=max(t−i,j) βk
= ∑t−1
j=1
βmax(t−i,j) − βt
1− β
= 1
1− β
( (t− i)βt−i + β t−i+1 − βt
1− β − β − β t 1− β ) = 1
1− β
( (t− i)βt−i + β
t−i+1 − β 1− β
)
Now, we substitute U1(t, i) into LHS and replace t− i by j, i.e., t = j + i, to get
LHS = T∑ t=1 γT−t t−1∑ i=1 ‖vi+1‖2 1 1− β ( (t− i)βt−i + β t−i+1 − β 1− β )
= T−1∑ i=1 ‖vi+1‖2 T∑ t=i+1 γT−t 1 1− β ( (t− i)βt−i + β t−i+1 − β 1− β )
= T−1∑ i=1 ‖vi+1‖2 T−i∑ j=1 γT−(j+i) 1 1− β ( jβj + βj+1 − β 1− β )
= T−1∑ i=1 γT−i ‖vi+1‖2 T−i∑ j=1 γ−j 1 1− β ( jβj + βj+1 − β 1− β )
≤ 1 1− β T−1∑ i=1 γT−i ‖vi+1‖2 T−i∑ j=1
( j ( β
γ
)j + β
1− β
( β
γ
)j)
Let a = β/γ, we show
T−i∑ j=1 jaj = T−i∑ j=1 j∑ o=1 aj
= T−i∑ o=1 T−i∑ j=o aj
= T−i∑ o=1 ( ao − aT−i+1 1− a ) = a− aT−i+1
(1− a)2 − (T − i)a
T−i+1
1− a
≤ a (1− a)2 .
Thus,
LHS ≤ 1 1− β T−1∑ i=1
γT−i ‖vi+1‖2 a
(1− a)2 +
β
1− β T−i∑ j=1 aj ≤ 1
1− β T−1∑ i=1 γT−i ‖vi+1‖2 (
a
(1− a)2 +
β 1− β a 1− a
)
≤ a (1− a)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2
Because γ < 1, β < a = β/γ and
a
(1− a)2 +
β 1− β a 1− a ≤ 2a (1− a)2 .
Therefore,
LHS ≤ 2a (1− a)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2 = 2βγ (γ − β)2(1− β) T−1∑ i=1 γT−i ‖vi+1‖2
Lemma B.7. Suppose γ ∈ (0, 1) and β ∈ (0, 1). Define
T̂ = max t s.t. γ^{t−1} ≥ (1 − β)/(1 − β^t).
If t ≤ T̂, then (1 − β)/(1 − β^t) ≤ γ^{t−1} for t = 1, . . . , T. If t > T̂, then (1 − β)/(1 − β^t) < γ^{T̂−1}.
Proof. Define h(t) = γ^{t−1}(1 − β^t), whose derivative is
h′(t) = γ^{t−1}(1 − β^t) ln γ + γ^{t−1}(−β^t) ln β = γ^{t−1}[ln γ − β^t(ln γ + ln β)] = γ^{t−1}[1 − β^t(1 + log_γ β)] ln γ.
A simple calculation shows that 1 − β^t(1 + log_γ β)|_{t=0} = −log_γ β < 0 and lim_{t→+∞} 1 − β^t(1 + log_γ β) = 1. When t = −log_β(1 + log_γ β), denoted t_0, we have 1 − β^t(1 + log_γ β) = 0. Because 1 − β^t(1 + log_γ β) is monotonically increasing in t and γ^{t−1} ln γ is negative, h′(t) ≥ 0 if t ≤ t_0 and h′(t) < 0 otherwise. Therefore, h(t) is a concave function. Because h(1) = 1 − β and h(T̂) = γ^{T̂−1}(1 − β^{T̂}) ≥ 1 − β > 0, we have h(t) ≥ 1 − β for t = 1, . . . , T̂. Thus, for all t ∈ [1, T̂], (1 − β)/(1 − β^t) ≤ γ^{t−1}.
For t > T̂, because (1 − β)/(1 − β^t) monotonically decreases in t, we have (1 − β)/(1 − β^t) < (1 − β)/(1 − β^{T̂}) ≤ γ^{T̂−1}.
C PROOFS
Proof of Theorem 3.1. Because all sample-wise losses are G-Lipschitz continuous, every sample gradient has norm at most G, so the sensitivity of the averaged gradient is upper bounded by G/N. Based on Lemma B.2, the privacy cost of g_t is 1/(2σ_t²).¹
Here, we make the output of each iteration a tuple of (θt+1, vt=1). For the 1st iteration, because θ1 does not embrace private information by random initialization, the mapping,[
v2 θ2
] = [ g1
θ1 − η1g1
] ,
1For brevity, when we say the privacy cost of some value, e.g., gradient, we actually refer to the cost of mechanism that output the value.
is ρ̂1-zCDP where ρ̂1 = 12σ2t .
Suppose the output of the t-th iteration, (θt, vt), is ρ̂t-zCDP. At each iteration, we have the following mapping (θt, vt)→ (θt+1, vt+1) defined as[
vt+1 θt+1
] = [ φ(vt, gt)
θt − ηtφ(vt, gt)
] .
Thus, the output tuple (θt+1, vt+1) is (ρ̂t + 12σ2t )-zCDP by Lemma B.1.
Thus, the recursion implies that (θT+1, vT+1) has privacy cost as
ρ̂T+1 = ρ̂T + 1
2σ2T = · · · = T∑ t=1 1 2σ2t = 1 2 T∑ t=1 ρt ≤ 1 2 (R−RT ) ≤ 1 2 R.
Let ρ = ρ̂T+1. Then we can get the conclusion.
C.1 GRADIENT DESCENTS
Proof of Theorem 4.1. With the definition of smoothness in Definition 3.3 and Eq. (1), we have
f(θt+1)− f(θt) ≤ −ηt∇>t (∇t +Gσtνt/N) + 1
2 Mη2t ‖∇t +Gσtνt/N‖
2
= −ηt(1− 1
2 Mηt) ‖∇t‖2 − (1−Mηt)ηt∇>t Gσtνt/N +
1 2 Mη2t ‖Gσtνt/N‖ 2
≤ −2µηt(1− 1
2 Mηt)(f(θt)− f(θ∗))− (1−Mηt)ηt∇>t Gσtνt/N
+ 1
2 Mη2t ‖Gσtνt/N‖ 2 .
where the last inequality is due to the Polyak-Lojasiewicz condition. Taking expectation on both sides, we can obtain
E[f(θt+1)]− E[f(θt)] ≤ −2µηt(1− M
2 ηt)(E[f(θt)]− f(θ∗)) +
M
2 (ηtGσt/N)
2E ‖νt‖2
which can be reformulated by substacting f(θ∗) on both sides and re-arranged as E[f(θt+1)]− f(θ∗) ≤ ( 1− 2µηt(1− M
2 ηt)
) (E[f(θt)]− f(θ∗)) + M
2 (ηtGσt/N)
2D
Recursively using the inequality, we can get
E[f(θT+1)]− f(θ∗) ≤ T∏ t=1 ( 1− 2µηt(1− M 2 ηt) ) (E[f(θ1)]− f(θ∗))
+ MD
2 T∑ t=1 T∏ i=t+1 ( 1− 2µηi(1− M 2 ηi) ) (ηtGσt/N) 2.
Let ηt ≡ 1/M . Then the above inequality can be simplified as
E[f(θT+1)]− f(θ∗) ≤ γT (E[f(θ1)]− f(θ∗)) +R T∑ t=1 γT−t MD 2R ( ηtG N )2 σ2t
= γT (E[f(θ1)]− f(θ∗)) +R T∑ t=1 γT−tασ2t (E[f(θ1)]− f(θ∗))
= ( γT +R ∑T t=1 qtσ 2 t ) (f(θ1)− f(θ∗))
Proof of Theorem 4.2. The minimizer of the upper bound of Eq. (3) can be written as
T ∗ = arg min T γT + ακ(1− γT )T (21)
where we substitute σ2 = T/R in the second line. To find the convex minimization problem, we need to vanishing its gradient which involves an equation like TγT = c for some real constant c. However, the solution is Wk(c) for some integer k where W is Lambert W function which does not have a simple analytical form. Instead, because γT > 0, we can minimize a surrogate upper bound as following
T ∗ = arg min T
γT + ακT = 1
ln(1/γ) ln
( ln(1/γ)
κα
) , if κα+ ln γ < 0 (22)
where we use the surrogate upper bound in the second line and utilize γ = 1 − 1κ . However, the minimizer of the surrogate objective is not optimal for the original objective. When κ is large, the term, −ακγTT , cannot be neglected as we expect. On the other hands, T suffers from explosion if κ→∞ and meanwhile 1/γ →+ 1. The tendency is counterintuitive since a small T should be taken for sharp losses. To fix the issue, we change the form of T ∗ as
T ∗ = 1
ln(1/γ) ln
( 1 + ln(1/γ)
α
) , (23)
which gradually converges to 0 as κ→∞. Now we substitute Eq. (23) into the original objective function, Eq. (21), to get
ERUBuniform = 1
1 + ln(1/γ)/α
[ 1 + κ ln ( 1 + ln(1/γ)
α
)] . (24)
Notice that
ln(1/γ) = ln(κ/(κ− 1)) = ln(1 + 1/(κ− 1)) ≤ 1 κ− 1 ≤ 1 cκ
because κ ≥ 11−c > 1 for some constant c ∈ (0, 1). In addition,
ln(1/γ) = − ln(1− 1/κ) ≥ 1/κ.
Now, we can get the upper bound of Eq. (24) as
ERUBuniform ≤ κ κ+ 1/α
[ 1 + κ ln ( 1 + 1
cκα )] ≤ c1 κ
κ+ 1/α κ
[ ln ( 1 + 1
κα
) + ln( 1
c )) ] ≤ c1c2 κ2
κ+ 1/α ln
( 1 + 1
κα ) for some constants c1, c2 and large enough 1α . Also, we can get the lower bound
ERUBuniform ≥ cκ cκ+ 1/α
[ 1 + κ ln ( 1 + 1
κα
)] ≥ c κ 2
κ+ 1/α ln
( 1 + 1
κα
) .
where we use the condition c ∈ (0, 1). Thus, ERUBuniform = Θ ( κ2 κ+1/α ln ( 1 + 1κα )) .
Proof of Lemma 4.1. By ∑T t=1 σ
−2 = R and Cauchy-Schwarz inequality, we can derive the achievable lower bound as
R ∑ t qtσ 2 t = ∑ t 1 σ2t ∑ t qtσ 2 t ≥ ( T∑ t=1 √ qt )2
where the inequality becomes equality if and only if s/σ2t = qtσ 2 t , i.e., σt = (s/qt) 1/4, for some positive constant s. The equality ∑T t=1 σ −2 t = R immediately suggests √ s = 1R ∑T t=1 √ qt. Thus, we get the σt.
Notice
T T∑ t=1 qt − (∑T t=1 √ qt )2 = T 2 1 T T∑ t=1 ( √ qt − 1 T T∑ i=1 √ qi )2 = T 2 Var[qt] (25)
where the variance is w.r.t. t.
Proof of Theorem 4.3. The upper bound of Eq. (3) can be written as
ERUB_dyn = γ^T + ∑_{t=1}^T γ^{T−t}αRσ_t² = γ^T + α(∑_{t=1}^T √(γ^{T−t}))² = γ^T + α((1 − γ^{T/2})/(1 − √γ))²,
where we make use of Lemma 4.1. Then, the minimizer of the ERUB is
T* = argmin_T γ^T + α((1 − γ^{T/2})/(1 − √γ))² = 2 log_γ(α/(α + (1 − √γ)²)).    (26)
We can substitute Eq. (26) into ERUB_dyn to get
ERUB_dyn^min = (α/(α + (1 − √γ)²))² + α(1/(1 − √γ))²(1 − α/(α + (1 − √γ)²))²
= (α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1))² + α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1)²
= α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1).
Notice that (1 − √γ)^{−2} = κ² + κ² − κ + 2κ√(κ(κ−1)) = κ(2κ − 1 + 2√(κ(κ−1))), and it is bounded by
κ(2κ − 1 + 2√(κ(κ−1))) ≤ 4κ²,
κ(2κ − 1 + 2√(κ(κ−1))) ≥ κ(2κ − (3κ−2) + 2√((κ−1)(κ−1))) = κ(−κ + 2 + 2κ − 2) = κ².
Therefore, κ ≤ (1 − √γ)^{−1} ≤ 2κ, with which we can derive
ERUB_dyn^min ≤ 4κ²α/(κ²α + 1),
ERUB_dyn^min ≥ κ²α/(4κ²α + 1) ≥ (1/4) κ²α/(κ²α + 1).
Thus, ERUB_dyn^min = Θ(κ²α/(κ²α + 1)).
C.2 GRADIENT DESCENTS WITH MOMENTUM
Proof of Theorem 4.4. Without loss of generality, we absorb the Cσ_t/N into the variance of ν_t such that ν_t ∼ N(0, (Cσ_t/N)²I) and g_t ← ∇_t + ν_t. Define b_t = 1 − β^t.
By smoothness and Eq. (1), we have
f(θ_{t+1}) − f(θ_t) ≤ ∇_t^⊤(θ_{t+1} − θ_t) + (M/2)‖θ_{t+1} − θ_t‖²
= −(η_t/b_t²) b_t∇_t^⊤v_{t+1} + (M/2)(η_t²/b_t²)‖v_{t+1}‖²
= (η_t/b_t²)(‖b_t∇_t − v_{t+1}‖² − ‖b_t∇_t‖² − ‖v_{t+1}‖²) + (M/2)(η_t²/b_t²)‖v_{t+1}‖²
= (η_t/b_t²) U_1(t) − η_t‖∇_t‖² − (η_t/b_t²)(1 − (M/2)η_t)‖v_{t+1}‖²,    (27)
where U_1(t) ≜ ‖b_t∇_t − v_{t+1}‖² is the only non-negative term. Specifically, U_1(t) describes the difference between the current gradient and the average. We can expand v_{t+1} to get an upper bound:
U_1(t) = ‖b_t∇_t − v_{t+1}‖² = ‖(1−β)∑_{i=1}^t β^{t−i}∇_t − (1−β)∑_{i=1}^t β^{t−i}g_i‖² = (1−β)²‖∑_{i=1}^t β^{t−i}(∇_t − g_i)‖²
= (1−β)²‖∑_{i=1}^t β^{t−i}(∇_t − ∇_i) + ∑_{i=1}^t β^{t−i}(∇_i − g_i)‖²
≤ 2(1−β)²[‖∑_{i=1}^t β^{t−i}(∇_t − ∇_i)‖² + ‖∑_{i=1}^t β^{t−i}(∇_i − g_i)‖²]
≤ 2(1−β)[b_t U_2(t) + (1−β)‖∑_{i=1}^t β^{t−i}ν_i‖²],
where U_2(t) ≜ ∑_{i=1}^t β^{t−i}‖∇_t − ∇_i‖² is the gradient-variance term, the last term is the noise variance, and we use ‖x + y‖² ≤ (‖x‖ + ‖y‖)² ≤ 2(‖x‖² + ‖y‖²). The last inequality can be proved by the Cauchy-Schwarz inequality for each coordinate.
We plug U_1(t) into Eq. (27) and use the PL condition to get
f(θ_{t+1}) − f(θ_t) ≤ (η_t/b_t²)U_1(t) − η_t‖∇_t‖² − (η_t/b_t²)(1 − (M/2)η_t)‖v_{t+1}‖²
≤ −η_t‖∇_t‖² + (η_t/b_t²)·2(1−β)[b_t U_2(t) + (1−β)‖∑_{i=1}^t β^{t−i}ν_i‖²] − (η_t/b_t²)(1 − (M/2)η_t)‖v_{t+1}‖²
≤ −2µη_t(f(θ_t) − f(θ*)) + (2(1−β)η_t/b_t)U_2(t) + (2(1−β)²η_t/b_t²)‖∑_{i=1}^t β^{t−i}ν_i‖² − (η_t/b_t²)(1 − (M/2)η_t)‖v_{t+1}‖².
Rearranging terms and taking expectations, we can show
E[f(θ_{t+1})] − f(θ*) ≤ γ(E[f(θ_t)] − f(θ*)) + (2(1−β)²η_t/b_t²)∑_{i=1}^t β^{t−i}E‖ν_i‖² + (2(1−β)η_t/b_t)E[U_2(t)] − (η_t/b_t²)(1 − (M/2)η_t)E‖v_{t+1}‖²
= γ(E[f(θ_t)] − f(θ*)) + (2(1−β)²η_t/b_t²)∑_{i=1}^t β^{2(t−i)} C²Dσ_t²/N² + (2(1−β)η_t/b_t)E[U_2(t)] − (η_t/b_t²)(1 − (M/2)η_t)E‖v_{t+1}‖²,
where γ = 1 − η_0/κ = 1 − 2µη_t. The recursive inequality implies
E[f(θ_{T+1})] − f(θ*) ≤ γ^T(f(θ_1) − f(θ*)) + ∑_{t=1}^T γ^{T−t}(2(1−β)²η_t/b_t²)∑_{i=1}^t β^{2(t−i)} C²Dσ_t²/N²
+ ∑_{t=1}^T γ^{T−t}(2(1−β)η_t/b_t)E[U_2(t)] − ∑_{t=1}^T γ^{T−t}(η_t/b_t²)(1 − (M/2)η_t)E‖v_{t+1}‖²
= (γ^T + 2η_0αR ∑_{t=1}^T γ^{T−t}((1−β)²/b_t²)∑_{i=1}^t β^{2(t−i)}σ_t²)(f(θ_1) − f(θ*)) + U_4(t),
where the double sum in the first bracket is U_3, U_4(t) ≜ ∑_{t=1}^T γ^{T−t}(2(1−β)η_t/b_t)E[U_2(t)] − ∑_{t=1}^T γ^{T−t}(η_t/b_t²)(1 − (M/2)η_t)E‖v_{t+1}‖², and we utilize α = (DC²/(2MN²R))·1/(f(θ_1) − f(θ*)) and η_t = η_0/(2M).
By Lemma B.5, we have
∑_{t=1}^T γ^{T−t}(2(1−β)η_t/b_t)U_2(t) ≤ (η_0³βγ/(2M(1−β)³(γ−β)²)) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖².
Thus, by 1/b_t ≥ 1,
U_4(t) ≤ (η_0³βγ/(2M(1−β)³(γ−β)²)) ∑_{i=1}^{T−1} γ^{T−i}E‖v_{i+1}‖² − (η_0/(2M))(1 − η_0/4) ∑_{t=1}^T γ^{T−t}E‖v_{t+1}‖²
| 1. What is the focus of the paper regarding private gradient descent and its difference from prior works?
2. What are the concerns regarding the practicality of the proposed approach?
3. What are the differences in the comparison between the paper and prior works, particularly regarding the excess loss?
4. What are the typos and unclear aspects in the proof?
5. How does the reviewer assess the strength of the result in relation to prior works? | Review | Review
The paper studies private gradient descent when the noise added at each iteration is dynamically scheduled. Prior to this work, the work of Zhou et al. tries to achieve the same for DP-SGD, and they analyze their algorithm for many variants of adaptive gradient descent based methods. The difference with Zhou et al. is that they do not use the gradient norm and the generalization property of DP. As a result, the authors claim that Zhou et al. achieve a suboptimal utility guarantee.
I will keep my reviews limited to the current submission. My first objection to the paper is that gradient descent requires computation of gradients over the entire dataset. As such, every iteration is very costly and not practical at all, given the training data set can be very large.
The paper considers the Polyak-Lojasiewicz condition, a more general form than strongly convex loss functions. However, I find some of the comparison misleading. For example, Feldman et al. (STOC 2020) and Zhou et al. show an excess "population loss" of order O(1/n). On the other hand, if I understand the paper correctly, the current submission only gives excess empirical risk. Further, Feldman et al.'s two algorithms achieve optimal excess loss while running in nearly "linear" time, which is not the case in the current submission.
There might be some interesting idea in this paper, especially, looking at the dynamic schedule for PPML. However, I fail to see the strength of the result in the view of prior work as the comparison done in the paper is between apples and oranges.
I did not verify the proof very carefully. There are some typos that I found. For example, in the line following equation (1), v_t should be ν_t. α depends on f(θ_t) − f(θ*) and T, and σ_t depends on α. How are we going to set these parameters when we do not know the minimum value of the objective function? |
ICLR | Title
On Dynamic Noise Influence in Differential Private Learning
Abstract
Protecting privacy in learning while maintaining the model performance has become increasingly critical in many applications that involve sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which noises gradients based on the Differential Privacy protocol. Recent studies show that dynamic privacy schedules of decreasing noise magnitudes can improve loss at the final iteration, and yet theoretical understandings of the effectiveness of such schedules and their connections to optimization algorithms remain limited. In this paper, we provide comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions. We first present a dynamic noise schedule minimizing the utility upper bound of PGD, and show how the noise influence from each optimization step collectively impacts utility of the final model. Our study also reveals how impacts from dynamic noise influence change when momentum is used. We empirically show the connection exists for general non-convex losses, and the influence is greatly impacted by the loss curvature.
1 INTRODUCTION
In the era of big data, privacy protection in machine learning systems is becoming a crucial topic as increasing personal data involved in training models (Dwork et al., 2020) and the presence of malicious attackers (Shokri et al., 2017; Fredrikson et al., 2015). In response to the growing demand, differential-private (DP) machine learning (Dwork et al., 2006) provides a computational framework for privacy protection and has been widely studied in various settings, including both convex and non-convex optimization (Wang et al., 2017; 2019; Jain et al., 2019).
One widely used procedure for privacy-preserving learning is the (Differentially) Private Gradient Descent (PGD) (Bassily et al., 2014; Abadi et al., 2016). A typical gradient descent procedure updates its model by the gradients of losses evaluated on the training data. When the data is sensitive, the gradients should be privatized to prevent excess privacy leakage. The PGD privatizes a gradient by adding controlled noise. As such, the models from PGD is expected to have a lower utility as compared to those from unprotected algorithms. In the cases where strict privacy control is exercised, or equivalently, a tight privacy budget, accumulating effects from highly-noised gradients may lead to unacceptable model performance. It is thus critical to design effective privatization procedures for PGD to maintain a great balance between utility and privacy.
Recent years witnessed a promising privatization direction that studies how to dynamically adjust the privacy-protecting noise during the learning process, i.e., dynamic privacy schedules, to boost utility under a specific privacy budget. One example is (Lee & Kifer, 2018), which reduced the noise magnitude when the loss does not decrease, due to the observation that the gradients become very small when approaching convergence, and a static noise scale will overwhelm these gradients. Another example is (Yu et al., 2019), which periodically decreased the magnitude following a predefined strategy, e.g., exponential decaying or step decaying. Both approaches confirmed the empirically advantages of decreasing noise magnitudes. Intuitively, the dynamic mechanism may coordinate with certain properties of the learning task, e.g., training data and loss surface. Yet there is no theoretical analysis available and two important questions remain unanswered: 1) What is the form of utility-preferred noise schedules? 2) When and to what extent such schedules improve utility?
To answer these questions, in this paper we develop a principled approach to construct dynamic schedules and quantify their utility bounds in different learning algorithms. Our contributions
are summarized as follows. 1) For the class of loss functions satisfying the Polyak-Lojasiewicz condition (Polyak, 1963), we show that a dynamic schedule improving the utility upper bound is shaped by the influence of per-iteration noise on the final loss. As the influence is tightly connected to the loss curvature, the advantage of using dynamic schedule depends on the loss function consequently. 2) Beyond gradient descent, our results show the gradient methods with momentum implicitly introduce a dynamic schedule and result in an improved utility bound. 3) We empirically validate our results on convex and non-convex (no need to satisfy the PL condition) loss functions. Our results suggest that the preferred dynamic schedule admits the exponentially decaying form, and works better when learning with high-curvature loss functions. Moreover, dynamic schedules give more utility under stricter privacy conditions (e.g., smaller sample size and less privacy budget).
2 RELATED WORK
Differentially Private Learning. Differential privacy (DP) characterizes the chance of an algorithm output (e.g., a learned model) to leak private information in its training data when the output distribution is known. Since outputs of many learning algorithms have undetermined distributions, the probability of their privacy leakages is hard to measure. A common approach to tackle this issue is to inject randomness with known probability distribution to privatize the learning procedures. Classical methods include output perturbation (Chaudhuri et al., 2011), objective perturbation (Chaudhuri et al., 2011) and gradient perturbation (Abadi et al., 2016; Bassily et al., 2014; Wu et al., 2017). Among these approaches, the Private Gradient Descent (PGD) has attracted extensive attention in recent years because it can be flexibly integrated with variants of gradient-based iteration methods, e.g., stochastic gradient descent, momentum methods (Qian, 1999), and Adam (Kingma & Ba, 2014), for both convex and non-convex problems.
Dynamic Policies for Privacy Protection. Wang et al. (2017) studied the empirical risk minimization using dynamic variation reduction of perturbed gradients. They showed that the utility upper bound can be achieved by gradient methods under uniform noise parameters. Instead of enhancing the gradients, Yu et al. (2019); Lee & Kifer (2018) showed the benefits of using a dynamic schedule of privacy parameters or equivalently noise scales. In addition, adaptive sensitivity control (Pichapati et al., 2019; Thakkar et al., 2019) and dynamic batch sizes (Feldman et al., 2020) are also demonstrated to improves the convergence.
Utility Upper Bounds. A utility upper bound is a critical metric for privacy schedules that characterizes the maximum utility that a schedule can deliver in theory. Wang et al. (2017) is the first to prove the utility bound under the PL condition. In this paper, we improve the upper bound by a more accurate estimation of the dynamic influence of step noise. Remarkably, by introducing a dynamic schedule, we further boost the sample-efficiency of the upper bound. With a similar intuition, Feldman et al. (2020) proposed to gradually increase the batch size, which reduces the dependence
on sample size accordingly. Recently, Zhou et al. proved the utility bound by using the momentum of gradients (Polyak, 1964; Kingma & Ba, 2014). Table 1 summarizes the upper bounds of methods studied in this paper (in the last block of rows) and results from state-of-the-art algorithms based on private gradients. Our work shows that considering the dynamic influence can lead to a tighter bound.
3 PRIVATE GRADIENT DESCENT
Notations. We consider a learning task by empirical risk minimization (ERM), f(θ) = (1/N)∑_{n=1}^N f(θ; x_n), on a private dataset {x_n}_{n=1}^N with θ ∈ R^D. The gradient methods are defined as θ_{t+1} = θ_t − η_t∇_t, where ∇_t = ∇f(θ_t) = (1/N)∑_n ∇f(θ_t; x_n) denotes the non-private gradient at iteration t and η_t is the step learning rate. ∇_t^{(n)} = ∇f(θ_t; x_n) denotes the gradient on a sample x_n. I_c denotes the indicator function that returns 1 if the condition c holds, and 0 otherwise.
Assumptions. (1) In this paper, we assume f(θ) is continuous and differentiable. Many commonly used loss functions satisfy this assumption, e.g., the logistic function. (2) For a learning task, only a finite amount of privacy cost is allowed; the maximum cost is called the privacy budget and denoted as R. (3) Generally, we assume that the loss functions f(θ; x) (sample-wise losses) are G-Lipschitz continuous and f(θ) (the empirical loss) is M-smooth.
Definition 3.1 (G-Lipschitz continuity). A function f(·) is G-Lipschitz continuous if, for G > 0 and all x, y in the domain of f(·), f(·) satisfies ‖f(y) − f(x)‖ ≤ G‖y − x‖_2.
Definition 3.2 (m-strong convexity). A function f(·) is m-strongly convex if f(y) ≥ f(x) + ∇f(x)^⊤(y − x) + (m/2)‖y − x‖², for some m > 0 and all x, y in the domain of f(·).
Definition 3.3 (M-smoothness). A function is M-smooth w.r.t. the l2 norm if f(y) ≤ f(x) + ∇f(x)^⊤(y − x) + (M/2)‖y − x‖², for some constant M > 0 and all x, y in the domain of f(·).
For a private algorithmM(d) which maps a dataset d to some output, the privacy cost is measured by the bound of the output difference on the adjacent datasets. Adjacent datasets are defined to be datasets that only differ in one sample. In this paper, we use the zero-Concentrated Differential Privacy (zCDP, see Definition 3.4) as the privacy measurement, because it provides the simplicity and possibility of adaptively composing privacy costs at each iteration. Various privacy metrics are discussed or reviewed in (Desfontaines & Pejó, 2019). A notable example is Moment Accoutant (MA) (Abadi et al., 2016), which adopts similar principle for composing privacy costs while is less tight for a smaller privacy budget. We note that alternative metrics can be adapted to our study without major impacts to the analysis. Definition 3.4 (ρ-zCDP (Bun & Steinke, 2016)). Let ρ > 0. A randomized algorithmM : Dn → R satisfies ρ-zCDP if, for all adjacent datasets d, d′ ∈ Dn, Dα(M(d)‖M(d′)) ≤ ρα, ∀α ∈ (1,∞) where Dα(·‖·) denotes the Rényi divergence (Rényi, 1961) of order α.
zCDP provides a linear composition of privacy costs of sub-route algorithms. When the input vector is privatized by injecting Gaussian noise of N(0, σ_t²I) at the t-th iteration, the composed privacy cost is proportional to ∑_t ρ_t, where the step cost is ρ_t = 1/σ_t². For simplicity, we absorb the constant coefficient into the (residual) privacy budget R. The formal theorems for the privacy cost computation of composition and Gaussian noising are included in Lemmas B.1 and B.2.
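To make the budget bookkeeping concrete, the following minimal Python sketch (our illustration, not the authors' code; all names are ours) accumulates the per-step zCDP costs ρ_t = 1/σ_t² and checks a schedule against the budget R, mirroring the linear composition described above.

```python
def zcdp_cost(sigmas):
    """Composed zCDP cost (up to the constant factor absorbed into R) of
    releasing one Gaussian-noised gradient per step with scales `sigmas`."""
    return sum(1.0 / (s ** 2) for s in sigmas)

def within_budget(sigmas, R):
    """Check that the schedule (sigma_1, ..., sigma_T) satisfies sum_t 1/sigma_t^2 <= R."""
    return zcdp_cost(sigmas) <= R

# Example: a uniform schedule with sigma_t^2 = T / R exhausts the budget.
R, T = 1.0, 100
uniform = [(T / R) ** 0.5] * T
print(zcdp_cost(uniform), within_budget(uniform, R))  # approximately 1.0, True
```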
Generally, we define the Private Gradient Descent (PGD) method as iterations for t = 1 . . . T : θt+1 = θt − ηtφt = θt − ηt(∇t + σtGνt/N), (1)
where φ_t = g_t is the gradient privatized from ∇_t as shown in Algorithm 1, G/N is the bound on the sensitivity of the aggregated gradient when one sample gradient is excluded, and ν_t ∼ N(0, I) is a vector whose elements follow the Gaussian distribution. We use σ_t to denote the noise scale at step t and use σ to collectively represent the schedule (σ_1, . . . , σ_T) when not confusing. When the Lipschitz constant is unknown, we can control the upper bound by scaling the gradient if it exceeds some constant. The scaling operation is often called clipping in the literature since it clips the gradient norm at a threshold. After the gradient is noised, we apply a modification, φ(·), to enhance its utility. In this paper, we consider two types of φ(·):
φ(mt, gt) = gt (GD), φ(mt, gt) = [β(1− βt−1)mt + (1− β)gt]/(1− βt) (Momentum)
We now show that the PGD using Algorithm 1 guarantees a privacy cost less than R:
Algorithm 1 Privatizing Gradients
Input: Raw gradients [∇(1)t , . . . ,∇ (n) t ] (n = N by default), vt, residual privacy budget Rt assuming the full budget is R and R1 = R.
1: ρ_t ← 1/σ_t², ∇_t ← (1/n)∑_{i=1}^n ∇_t^{(i)}   ▷ Budget request
2: if ρ_t < R_t then
3:    R_{t+1} ← R_t − ρ_t
4:    g_t ← ∇_t + Gσ_tν_t/N, ν_t ∼ N(0, I)   ▷ Privacy noise
5:    m_{t+1} ← φ(m_t, g_t), or g_1 if t = 1
6:    return η_t m_{t+1}, R_{t+1}   ▷ Utility projection
7: else
8:    Terminate
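The following NumPy sketch illustrates one privatized update in the spirit of Algorithm 1; it is our own illustration rather than the paper's implementation, and the clipping bound G, the noise scale σ_t, and the residual budget are assumed to be supplied by the caller.

```python
import numpy as np

def privatize_gradient(per_sample_grads, G, sigma_t, residual_budget, rng):
    """One step of Algorithm 1: average, enforce the G bound by clipping,
    add Gaussian noise, and deduct the step cost rho_t = 1/sigma_t^2."""
    rho_t = 1.0 / sigma_t ** 2
    if rho_t >= residual_budget:
        return None, residual_budget                      # terminate: budget exhausted
    grads = np.stack([g * min(1.0, G / (np.linalg.norm(g) + 1e-12))
                      for g in per_sample_grads])          # clip each sample gradient to norm G
    N = len(per_sample_grads)
    mean_grad = grads.mean(axis=0)
    noise = rng.normal(0.0, 1.0, size=mean_grad.shape)
    g_t = mean_grad + G * sigma_t * noise / N              # privatized gradient
    return g_t, residual_budget - rho_t

# usage sketch
rng = np.random.default_rng(0)
per_sample = [rng.normal(size=5) for _ in range(32)]
g_t, budget = privatize_gradient(per_sample, G=1.0, sigma_t=5.0, residual_budget=1.0, rng=rng)
```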
Theorem 3.1. Suppose f(θ; x) is G-Lipschitz continuous and the PGD algorithm with gradients privatized by Algorithm 1 stops at step T. The PGD algorithm outputs θ_T and satisfies ρ-zCDP, where ρ ≤ (1/2)R.
Note that Theorem 3.1 allows σt to be different throughout iterations. Next we present a principled approach for deriving dynamic schedules optimized for the final loss f(θT ).
4 DYNAMIC POLICIES BY MINIMIZING UTILITY UPPER BOUNDS
To characterize the utility of the PGD, we adopt the Expected Excess Risk (EER), which notion is widely used for analyzing the convergence of random algorithms, e.g., (Bassily et al., 2014; Wang et al., 2017). Due to the presence of the noise and the limitation of learning iterations, optimization using private gradients is expected to reach a point with a higher loss (i.e., excess risk) as compared to the optimal solution without private protection. Define θ∗ = arg minθ f(θ), after Algorithm 1 is iterated for T times in total, the EER gives the expected utility degradation:
EER = Eν [f(θT+1)]− f(θ∗).
Due to the variety of loss function and complexity of recursive iterations, an exact EER with noise is intractable for most functions. Instead, we study the worst case scenario, i.e., the upper bound of the EER, and our goal is to minimize the upper bound. For consistency, we call the upper bound of EER divided by the initial error as ERUB. Since the analytical form of EER is either intractable or complicated due to the recursive iterations of noise, studying the ERUB is a convenient and tractable alternative. The upper bound often has convenient functional forms which are (1) sufficiently simple, such that we can directly minimize it, and (2) closely related to the landscape of the objective depending on both the training dataset and the loss function. As a consequence, it is also used in previous PGD literature (Pichapati et al., 2019; Wang et al., 2017) for choosing proper parameters. Moreover, we let ERUBmin be the achievable optimal upper bound by a specific choice of parameters, e.g., the σ and T .
In this paper, we consider the class of loss functions satisfying the Polyak-Lojasiewicz (PL) condition which bounds losses by corresponding gradient norms. It is more general than the m-strongly convexity. If f is differentiable and M -smooth, then m-strongly convexity implies the PL condition.
Definition 4.1 (Polyak-Lojasiewicz condition (Polyak, 1963)). For f(θ), there exists µ > 0 and for every θ, ‖∇f(θ)‖2 ≥ 2µ(f(θ)− f(θ∗)).
The PL condition helps us to reveal how the influence of step noise propagates to the final excess error, i.e., EER. Though the assumption was also used previously in Wang et al. (2017); Zhou et al. (2020), neither did they discuss the propagated influence of noise. In the following sections, we will show how the influence can tighten the upper bound in gradient descent and its momentum variant.
4.1 GRADIENT DESCENT METHODS
For the brevity of our discussion, we first define the following constants:
1/α ≜ (2RMN²/(DG²))(f(θ_1) − f(θ*)),  κ ≜ M/µ,  and  γ ≜ 1 − 1/κ,    (2)
which satisfy κ ≥ 1 and γ ∈ [0, 1). Note that κ is the condition number of f(·) if f(·) is strongly convex. κ tends to be large if the function is sensitive to small differences in inputs, and 1/α tends to be large if more samples are provided and the privacy budget is less strict. The convergence of PGD under the PL condition has been studied for private (Wang et al., 2017) and non-private (Karimi et al., 2016; Nesterov & Polyak, 2006; Reddi et al., 2016) ERM. Below we extend the bound in (Wang et al., 2017) by considering the dynamic influence of noise and relaxing σ_t to be dynamic:
Theorem 4.1. Let α, κ and γ be defined in Eq. (2), and η_t = 1/M. Suppose f(θ; x_i) is G-Lipschitz and f(θ) is M-smooth satisfying the Polyak-Lojasiewicz condition. For PGD, the following holds:
ERUB = γ^T + R ∑_{t=1}^T q_tσ_t²,  where q_t ≜ γ^{T−t}α.    (3)
In Eq. (3), the step noise magnitude σ_t² has an exponential influence, q_t, on the EER. The dynamic characteristic of the influence is the key to proving a tighter bound. Plus, in the presence of the dynamic influence, it is natural to choose a dynamic σ_t². When relaxing q_t to a static 1, a static σ_t² was studied by Wang et al. They proved a bound which is nearly optimal except for a ln²N factor. To get the optimal bound, in the following sections, we look for the σ and T that minimize the upper bound.
4.1.1 UNIFORM SCHEDULE
The uniform setting of σt has been previously studied in Wang et al. (2017). Here, we show that the bound can be further tightened by considering the dynamic influence of iterations and a proper T . Theorem 4.2. Suppose conditions in Theorem 4.1 are satisfied. When σ2t = T/R, let α, γ and κ be defined in Eq. (2) and let T be:
T = ⌈O(κ ln(1 + 1/(κα)))⌉.    (4)
Meanwhile, if κ ≥ 1/(1−c) > 1 and 1/α > 1/α_0 for some constants c ∈ (0, 1) and α_0 > 0, the corresponding bound is:
ERUB_min^uniform = Θ((κ²/(κ + 1/α)) ln(1 + 1/(κα))).    (5)
Sketch of proof. The key of the proof is to find a proper T to minimize
ERUB = γ^T + ∑_{t=1}^T γ^{T−t}αRσ² = γ^T + αT(1 − γ^T)/(1 − γ) = γ^T + ακ(1 − γ^T)T,
where we use σ_t = √(T/R). Setting its gradient to zero amounts to solving γ^T ln γ + ακ(1 − γ^T) − ακTγ^T ln γ = 0, which however is intractable. In (Wang et al., 2017), T is chosen to be O(ln(1/α)) and the ERUB is relaxed as γ^T + ακT². The approximation results in a less tight bound of O(α(1 + κ ln²(1/α))), which explodes as κ → ∞. We observe that for a super sharp loss function, i.e., a large κ, any minor perturbation may result in tremendously fluctuating loss values. In this case, not stepping forward will be a good choice. Thus, we choose T = (1/ln(1/γ)) ln(1 + ln(1/γ)/α) ≤ O(κ ln(1 + 1/(κα))), which converges to 0 as κ → +∞. The full proof is deferred to the appendix.
4.1.2 DYNAMIC SCHEDULE
A dynamic schedule can improve the upper bound delivered by the uniform schedule. First, we observe that the excess risk in Eq. (3) is upper bounded by two terms: the first term characterizes the error due to the finite iterations of gradient descents; the second term, a weighted sum, comes from error propagated from noise at each iteration. Now we show for any {qt|qt > 0, t = 1, . . . , T} (not limited to the qt defined in Eq. (3)), there is a unique σt minimizing the weighted sum:
Lemma 4.1 (Dynamic schedule). Suppose the σ_t satisfy ∑_{t=1}^T σ_t^{−2} = R. Given a positive sequence {q_t}, the following equation holds:
min_σ R ∑_{t=1}^T q_tσ_t² = (∑_{t=1}^T √q_t)²,  when σ_t² = (1/R)(∑_{i=1}^T √q_i)/√q_t.    (6)
Remarkably, the difference between the minimum and T∑_{t=1}^T q_t (uniform σ_t) monotonically increases with the variance of √q_t w.r.t. t.
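As a sketch of Eq. (6) (our own code; the influence values q_t are assumed to be given as an array), the schedule minimizing the weighted sum can be computed as follows.

```python
import numpy as np

def dynamic_schedule(q, R):
    """sigma_t^2 = (sum_i sqrt(q_i)) / (R * sqrt(q_t)), which satisfies sum_t 1/sigma_t^2 = R."""
    sq = np.sqrt(np.asarray(q, dtype=float))
    sigma2 = sq.sum() / (R * sq)
    return np.sqrt(sigma2)

# Influence from Theorem 4.1: q_t = alpha * gamma^(T-t), i.e., growing in t.
T, R, alpha, gamma = 50, 1.0, 1e-3, 0.9
q = alpha * gamma ** (T - np.arange(1, T + 1))
sigma = dynamic_schedule(q, R)
print(np.isclose((1.0 / sigma ** 2).sum(), R))   # the budget is met exactly
print(sigma[0] > sigma[-1])                      # early steps receive larger noise
```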
We see that the dynamics in σ_t come from the non-uniform nature of the weight q_t. Since q_t represents the impact of σ_t on the final error, we call it the influence. Given the dynamic schedule in Eq. (6), it is of interest to what extent the ERUB can be improved. First, we present Theorem 4.3 to show the optimal T and ERUB.
Theorem 4.3. Suppose the conditions in Theorem 4.1 are satisfied. Let α, κ and γ be defined in Eq. (2). When η_t = 1/M, the σ_t (based on Eqs. (3) and (6)) and the T minimizing the ERUB are:
σ_t² = (1/R)(√((1/γ)^T − 1)/(1 − √γ))√(γ^t),  T = ⌈2κ ln(1 + 1/(κα))⌉.    (7)
Meanwhile, when κ ≥ 1 and 1/α ≥ 1/α_0 for some positive constant α_0, the minimal bound is:
ERUB_min^dynamic = Θ(κ²/(κ² + 1/α)).    (8)
4.1.3 DISCUSSION
In Theorems 4.2 and 4.3, we present the tightest bounds for functions satisfying the PL condition, to our best knowledge. We further analyze the advantages of our bounds from two aspects: sample efficiency and robustness to sharp losses.
Sample efficiency. Since datasets cannot be infinitely large, it is critical to know how accurately the model can be trained privately with a limited number of samples. Formally, it is of interest to study the case when κ is fixed and N is large enough such that α ≪ 1. Then we have the upper bound in Eq. (5) as
ERUB_min^uniform ≤ O(κ²α ln(1/(κα))) ≤ Õ(DG² ln(N)/(MN²R)),    (9)
where we ignore κ and other logarithmic constants with Õ as done in Wang et al. (2017). As a result, we get a bound very similar to (Wang et al., 2017), except that R is replaced by R_MA = ε²/ln(1/δ) using the Moments Accountant. In comparison, based on Lemma B.3, R = 2ρ = 2ε + 4 ln(1/δ) + 4√(ln(1/δ)(ε + ln(1/δ))) if θ_T satisfies ρ-zCDP. Because ln(1/δ) > 1, it is easy to see R = R_zCDP > R_MA when ε ≤ 2 ln(1/δ). As compared to the one reported in (Wang et al., 2017), our bound saves a factor of lnN and thus requires fewer samples to achieve the same accuracy. Remarkably, the saving is due to maintaining the influence terms, as shown in the proof of Theorem 4.2.
Using the dynamic schedule, we have ERUB_min^dynamic ≤ O(α) = O(DG²/(MN²R)), which saves another lnN factor in comparison to the one using the uniform schedule, Eq. (9). As shown in Table 1, this advantage is maintained when comparing with other baselines.
Robustness. Besides sample efficiency, we are also interested in the robustness of the convergence in the presence of privacy noise. Because of the privacy noise, the convergence of private gradient descent will be unable to reach an ideal spot. Specifically, when the samples are noisy or have noisy labels, the loss curvature may be sharp. The sharpness also implies lower smoothness, i.e., a small M, or a very small PL parameter. Thus, gradients may change tremendously at some steps, especially in the presence of privacy noise. As illustrated in the left figure, the highly-curved loss function (the green curve) results in a higher mean final loss (the red dashed line) than the flatter curves (purple and blue lines). Such changes have a more critical impact when only a smaller number of iterations can be executed due to the privacy constraint. Assuming α is some constant while κ ≫ 1/α, we immediately get:
ERUB_min^uniform = Θ(κ ln(1 + 1/(κα))) = Θ(1/α) ≤ O(MN²R/(DG²)),  ERUB_min^dynamic = Θ(1).
Both are robust, but the dynamic schedule has a smaller factor since 1/α could be a large number. In addition, the factor implies that when more samples are used, the dynamic schedule is more robust.
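The contrast can be made concrete with a small numerical check of the two Θ-orders; this is a rough illustration under our own choice of constants, ignoring the hidden factors in Θ(·).

```python
import math

def erub_uniform_order(kappa, alpha):
    return kappa ** 2 / (kappa + 1.0 / alpha) * math.log(1.0 + 1.0 / (kappa * alpha))

def erub_dynamic_order(kappa, alpha):
    return kappa ** 2 / (kappa ** 2 + 1.0 / alpha)

alpha = 1e-4                       # large 1/alpha: many samples or a loose budget
for kappa in (10, 100, 1000):
    print(kappa, erub_uniform_order(kappa, alpha), erub_dynamic_order(kappa, alpha))
# The uniform order grows with kappa, while the dynamic order stays bounded by 1.
```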
4.2 GRADIENT DESCENT METHODS WITH MOMENTUM
Section 4.1 shows that the step noise has an exponentially increasing influence on the final loss, and therefore a decreasing noise magnitude improves the utility upper bound by a lnN factor. However, the proper schedule can be hard to find when the curvature information, e.g., κ, is absent. A parameterized method that depends less on the curvature information is preferred. On the other hand, long-term iterations will result in forgetting of the initial iterations, since accumulated noise overwhelms the information propagated from the beginning. This effect will reduce the efficiency of the recursive learning frameworks.
As an alternative to GD, the momentum method can mitigate the two issues. It was originally proposed to stabilize the gradient estimation (Polyak, 1964). In this section, we show that momentum (agnostic about the curvature) can flatten the dynamic influence and improve the utility upper bound. Previously, Pichapati et al. used the momentum as an estimation of the gradient mean, without discussing convergence improvements. Zhou et al. gave a bound for Adam with DP. However, the derivation is based on the gradient norm, which results in a looser bound (see Table 1).
The momentum method stabilizes gradients by moving-averaging historical coordinate values and thus greatly reduces the variance. The φ(m_t, g_t) can be rewritten as:
m_{t+1} = φ(m_t, g_t) = v_{t+1}/(1 − β^t),  v_{t+1} = βv_t + (1−β)g_t = (1−β)∑_{i=1}^t β^{t−i}g_i,  v_1 = 0,    (10)
where β ∈ [0, 1]. Note that v_{t+1} is a biased estimation of the gradient expectation while m_{t+1} is unbiased.
Theorem 4.4 (Convergence under PL condition). Suppose f(θ; x_i) is G-Lipschitz, and f(θ) is M-smooth and satisfies the Polyak-Lojasiewicz condition. Assume β ≠ γ and β ∈ (0, 1). Let η_t = η_0/(2M) and η_0 ≤ 8(√(1 + 64βγ(γ−β)^{−2}(1−β)^{−3}) + 1)^{−1}. Then the following holds:
EER ≤ (γ^T + 2Rη_0α U_3(σ, T))(f(θ_1) − f(θ*)) − ζ(η_0/(2M)) ∑_{t=1}^T γ^{T−t}E‖v_{t+1}‖²,    (11)
where 2Rη_0α U_3(σ, T) is the noise-variance term, the last sum is the momentum-effect term, and
γ = 1 − η_0/κ,  ζ = 1 − (1/(β(1−β)³))η_0² − (1/4)η_0 ≥ 0,    (12)
U_3 = ∑_{t=1}^T γ^{T−t}((1−β)²/(1−β^t)²) ∑_{i=1}^t β^{2(t−i)}σ_i².    (13)
The upper bound includes three parts that influence the bound differently. (1) Convergence. The convergence term is mainly determined by η_0 and κ. η_0 should be in (0, κ) such that the upper bound can converge. A large η_0 will be preferred to speed up convergence if it does not make the other two terms worse. (2) Noise Variance. The second term, compressed in U_3, is the effect of the averaged noise ∑_{i=1}^t β^{2(t−i)}σ_i². One difference introduced by the momentum is the factor (1−β)/(1−β^t), which is less than γ^t at the beginning and converges to the non-zero constant 1−β. Therefore, in U_3, γ^{T−t}(1−β)/(1−β^t) will meanwhile be constantly less than γ^T. Furthermore, when t > T̂, the moving average ∑_{i=1}^t β^{2(t−i)}σ_i² smooths the influence of each σ_t. In Appendix D, we will see that the influence dynamics is less steep than that of GD. (3) Momentum Effect. The momentum effect term can improve the upper bound when η_0 is small. For example, when β = 0.9 and γ = 0.99, then η_0 ≤ 0.98/M, which is a rational value. Following the analysis, when M is large, which means the gradient norms will fluctuate significantly, the momentum term may take the lead. Adjusting the noise scale in this case may be less useful for improving utility.
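For concreteness, below is a small sketch (ours, not the authors' implementation) of the bias-corrected momentum update of Eq. (10) applied to an already-privatized gradient.

```python
import numpy as np

def momentum_step(theta, v, g_t, t, eta, beta=0.9):
    """phi(m_t, g_t) of Eq. (10): moving average v_{t+1} with bias correction 1 - beta^t."""
    v_next = beta * v + (1.0 - beta) * g_t     # biased moving average
    m_next = v_next / (1.0 - beta ** t)        # unbiased estimate
    return theta - eta * m_next, v_next

# usage with privatized gradients g_t (e.g., from the Algorithm 1 sketch above)
theta, v = np.zeros(5), np.zeros(5)
for t in range(1, 11):
    g_t = np.random.default_rng(t).normal(size=5)   # stand-in for a noised gradient
    theta, v = momentum_step(theta, v, g_t, t, eta=0.1)
```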
To give an insight on the effect of dynamic schedule, we provide the following utility bounds.
Theorem 4.5 (Uniform schedule). Suppose the assumptions in Theorem 4.4 are satisfied. Let σ_t² = T/R, and let:
T̂ = max t s.t. γ^{t−1} ≥ (1−β)/(1−β^t),  T = ⌈O((κ/η_0) ln(1 + η_0/(κα)))⌉.
Given some positive constant c and α_0 > 0 with 1/α > 1/α_0, the following inequality holds:
ERUB_min ≤ O((κ²/(κ + η_0/α))[I_{T≤T̂} + γ^{T̂−1} ln(1 + η_0/(κα)) I_{T>T̂}]).
Theorem 4.6 (Dynamic schedule). Suppose the assumptions in Theorem 4.4 are satisfied. Let α′ = 2η_0α/(γ(1 − γβ²)), β < γ, and T̂ = max t s.t. γ^{t−1} ≥ (1−β)/(1−β^t). Use the following schedule:
σ_t² = (1/R)(∑_{i=1}^T √q_i)/√q_t,  T_dyn = ⌈O((2κ/η_0) ln(1 + η_0/(κα)))⌉,
where q_t = c_1γ^{T+t} I_{t≤T̂} + γ^{T̂−1}c_2γ^{T−t} I_{t>T̂} for some positive constants c_1 and c_2. The following inequality holds:
ERUB ≤ γ^T + 2η_0α ∑_{t=1}^T Rq_tσ_t²,  ERUB_min ≤ O((κα/(κα + η_0))((κα/(κα + η_0)) I_{T≤T̂} + I_{T>T̂})).
Discussion. Theoretically, the dynamic schedule is more influential in vanilla gradient descent methods than in the momentum variant. The result is mainly attributed to the averaging operation. The moving average, (1−β)∑_{i=1}^t β^{t−i}g_i/(1−β^t), increases the influence of the under-represented initial steps and decreases that of the over-sensitive last steps. Counterintuitively, the preferred dynamic schedule should be increasing, since q_t decreases when t ≤ T̂.
4.3 PRIVATE STOCHASTIC GRADIENT DESCENT (NEW SECTION ON REBUTTAL)
Though PGD provides a guarantee both for utility and privacy, computing gradients of the whole dataset is impractical for large-scale problems. For this sake, studying the convergence of Private Stochastic Gradient Descent (PSGD) is meaningful. Algorithm 1 can be easily extended to PSGD by subsampling n gradients, where the batch size n ≪ N. According to (Yu et al., 2019), when privacy is measured by zCDP, there are two ways to account for the privacy cost of PSGD depending on the batch-sampling method: sub-sampling with or without replacement. In this paper, we focus on random subsampling with replacement since it is widely used in deep learning in the literature, e.g., (Abadi et al., 2016; Feldman et al., 2020). Accordingly, we replace N in the definition of α by n because the term comes from the sensitivity of batch data (see Eq. (1)). For clarity, we assume that T is the number of iterations rather than epochs and that ∇̃_t is the mean stochastic gradient. When a batch of data is randomly sampled, the privacy cost of one iteration is cp²/σ_t², where c is some constant, p = n/N is the sample rate, and 1/σ_t² is the full-batch privacy cost. Details of the sub-sampling theorems are referred to Theorem 3 of (Yu et al., 2019) and their empirical setting. Therefore, we can replace the privacy constraint ∑_t p²/σ_t² = R by ∑_t 1/σ_t² = R′, where R′ = R/p² = (N²/n²)R. Remarkably, we omit the constant c because it will not affect the results regarding uniform or dynamic schedules. Notice that N²R in α is replaced by n²R′ = N²R. Thus, the form of α is not changed, which provides convenience for the following derivations.
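A short sketch of the resulting accounting (our own code; the constant c from the subsampling bound is dropped, as in the text): the rescaled budget R′ for PSGD is obtained as follows.

```python
def psgd_budget(R, N, n):
    """Effective budget R' = R / p^2 with sampling rate p = n / N, so that
    sum_t 1/sigma_t^2 = R' replaces the constraint sum_t p^2/sigma_t^2 = R."""
    p = n / N
    return R / p ** 2

# Example: N = 10,000 samples with batches of n = 100 gives R' = 10,000 * R.
print(psgd_budget(R=1.0, N=10_000, n=100))
```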
Now we study the utility bound of PSGD. To quantify the randomness of batch sampling, we define a random vector ξ_t with E[ξ_t] = 0 and E‖ξ_t‖² ≤ D such that ∇̃_t ≤ ∇_t + σ_gξ_t/n for some positive constant σ_g. Because ξ_t has similar properties to the privacy noise ν_t, we can easily extend the PGD bounds to PSGD bounds by the following theorems.
Theorem 4.7 (Utility bounds of PSGD). Let α, κ and γ be defined in Eq. (2), and η_t = 1/M. Suppose f(θ; x_i) is G-Lipschitz and f(θ) is M-smooth satisfying the Polyak-Lojasiewicz condition. For PSGD, when the batch size satisfies n = max{N√R, 1}, the following holds:
ERUB = γ^T + α_gσ_g² + R′ ∑_{t=1}^T q_tσ_t²,  where q_t ≜ γ^{T−t}α and ∑_t 1/σ_t² = R′,    (14)
where α_g = D/(2µN²R(f(θ_1) − f(θ*))).
Theorem 4.8 (PSGD with momentum). Let α_g = D/(2µN²R(f(θ_1) − f(θ*))). Suppose the assumptions in Theorem 4.4 hold. When the batch size satisfies n = max{N√R, 1}, the U_3(σ, T) has to be replaced by Ũ_3 = U_3^g + U_3, with
αR′U_3^g ≤ α_gσ_g²    (15)
when PSGD is used.
As shown above, the utility bound of PSGD differs from that of PGD merely by α_gσ_g². Note that α_g = O(D/(N²R)), which fits the order of the dynamic-schedule bounds. In addition, α and other variables are not changed. Hence, the conclusions w.r.t. the dynamic/uniform schedules remain the same.
5 EXPERIMENTS
We empirically validate the properties of privacy schedules and their connections to learning algorithms. In this section, we briefly review the schedule behavior on quadratic losses under varying data sensitivity. Details of experimental setups and empirical results are available in Appendix D.
We first show the estimated influence of step noise q_t (obtained by retraining the private learning algorithms, ref. Appendix D) in Fig. 2 Left. We see that the trends of influence are approximately an exponential function of t. This observation motivates the use of an exponential decay schedule in practice.
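A simple way to exploit this observation (our sketch; the decay rate is a tunable hyper-parameter, not a value from the paper) is an exponentially decaying schedule normalized to spend exactly the budget R.

```python
import numpy as np

def exponential_schedule(T, R, decay=0.95):
    """sigma_t^2 proportional to decay^t (decreasing in t), normalized so that
    the total zCDP cost satisfies sum_t 1/sigma_t^2 = R."""
    weights = decay ** np.arange(T)              # relative sigma_t^2
    sigma2 = weights * (1.0 / weights).sum() / R
    return np.sqrt(sigma2)

sigma = exponential_schedule(T=100, R=1.0)
print(np.isclose((1.0 / sigma ** 2).sum(), 1.0), sigma[0] > sigma[-1])
```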
We then show the trends of the variance of influence (dashed lines with the right axis) and relative final losses (solid lines with the left axis) in the Middle Pane, where uni denotes the uniform schedule baseline, exp is an exponential schedule, and dyn denotes the dynamic schedule minimizing the ERUB. The influences increase steeply when the data scale is large and therefore have a large variance. Meanwhile, dynamic schedules show improvements in the final loss when the variance is large. This reveals the connection between the influence and the dynamic advantage (refer to Lemma 4.1).
We lastly evaluate the impact of momentum in the Right Pane, using a Deep Neural Network (DNN) with 2 layers and 100 hidden units. Because of the time costs of training deep networks, we do not estimate the influence by retraining and then compute schedules. Instead, we grid-search the schedule hyper-parameters to find the best one. We see that influence modeled by an exponential function (expinfl) has comparable performance to influence modeled by a linear combination of two reverse exponential functions (momexpinfl). The latter only shows an advantage in the setting where the data scale is 25 and the number of iterations is only 100, which is expected by our analysis in Theorems 4.5 and 4.6. The inherent reason is that the dynamic schedule is more effective when T is larger.
6 CONCLUSION
When a privacy budget is provided for a certain learning task, one has to carefully schedule the privacy usage through the learning process. Uniformly scheduling the budget has been widely used in literature whereas increasing evidence suggests that dynamically schedules could empirically outperform the uniform one. This paper provided a principled analysis on the problem of optimal budget allocation and connected the advantages of dynamic schedules to both the loss structure and the learning behavior. We further validated our results through empirical studies.
A COMPARISON OF ALGORITHMS
We present Table 2 as a supplement to Table 1. Asymptotic upper bounds are achieved when the sample size N approaches infinity. Both R and R_{ε,δ}, with R_{ε,δ} < R, are the privacy budgets of the corresponding algorithms. Specifically, R_{ε,δ} = ε²/ln(1/δ) < R when the private algorithm is (ε, δ)-DP with ε ≤ 2 ln(1/δ). PGD+Adv. Adv denotes the Advanced Composition method (Bassily et al., 2014). The method assumes that the loss function is 1-strongly convex, which implies the PL condition, and that the optimized variable is in a convex set of diameter 1 w.r.t. the l2 norm.
PGD+MA. MA denotes the Moments Accountant (Abadi et al., 2016), which improves the composed privacy bound versus the Advanced Composition. The improvement in the privacy bound leads to an enhanced utility bound, as a result.
PGD+Adv+BBImp. The dynamic method assumes that the loss is 1-strongly convex and that data comes in a stream with n ≤ N samples at each round. Their utility upper bound is achieved with some probability p for any positive c.
Adam+MA. The authors prove a convergence bound for the gradient norms, which is extended to a loss bound by using the PL condition. They also present the results for AdaGrad and GD, which have basically the same upper bound. Our theorems improve their bound by using the recursive derivation based on the PL condition, while their bound is a simple application of the condition to the gradient norm bound.
GD, Non-Private. This method does not inject noise into gradients but limits the number of iterations. With this bound, we can see that our utility bounds are optimal with the dynamic schedule.
GD+zCDP. We discussed the static and dynamic schedules for the gradient descent method, where the dynamic noise influence is the key to tightening the bound.
Momentum+zCDP. Different from GD+zCDP, momentum methods have two phases of utility upper bounds. When T is smaller than some positive constant T̂, the bound is as tight as the non-private one. Afterwards, the momentum bound degrades to the GD bound.
A.1 COMPARISON OF GENERALIZATION BOUNDS
In addition to the empirical risk bounds in Table 2, in this section we study the true risk bounds, or generalization error bounds. True risk bounds characterize how well the learnt model can generalize to unseen samples subject to the inherent data distribution. By leveraging the generic learning-theory tools, we extend our results to the True Excess Risk (TER) for strongly convex functions as follows. For a model θ, its TER is defined as follows:
TER ≜ E_{x∼X}[E[f(θ; x)]] − min_{θ̂} E_{x∼X}[f(θ̂; x)], where the second expectation is over the randomness of generating θ (e.g., the noise and stochastic batches). Assume a dataset d consists of N samples drawn i.i.d. from the distribution X. Two approaches could be used to extend the empirical bounds to the true excess risk. One is proposed by Shalev-Shwartz et al. (2009), where the true excess risk of PGD can be bounded in high probability. For example, Bassily et al. (2014) achieved a (ln²N)/N bound with N² iterations. Alternatively, instead of relying on the probabilistic bound, Bassily et al. (2019) used uniform stability to give a tighter bound. Later, Feldman et al. (2020) improved the efficiency of gradient computation to achieve a similar bound. Both approaches introduce an additive term to the empirical bounds. In this section, we adopt both approaches to investigate the two types of resulting true risk bounds.
(1) True Risk in High Probability. First, we consider the high-probability true risk bound. Based on Section 5.4 of (Shalev-Shwartz et al., 2009) (restated in Theorem A.1), we can relate the EER to the TER.
Theorem A.1. Let f(θ; x) be a G-Lipschitz and f(θ) a µ-strongly convex loss function for any x ∈ X. With probability at least 1 − p over the randomness of sampling the dataset d, the following inequality holds:
TER(θ) ≤ √(2G²/(µN)) √(f(θ) − f(θ*)) + 4G²/(pµN),    (16)
where θ* = argmin_θ f(θ).
To apply Eq. (16), we need to extend the EER, an expectation bound, to a high-probability bound. Following (Bassily et al., 2014) (Section D), we repeat the PGD with privacy budget R/k for k times. Note that the output of all repetitions is still within the budget R. When k = 1, let the EER of the algorithm be denoted as F(R). Then the EER of one execution of the k repetitions is F(R/k), where privacy is accounted by zCDP. When k = log_2(1/p) for p ∈ [0, 1], by Markov's inequality, there exists one repetition whose EER is F(R/log_2(1/p)) with probability at least 1 − 1/2^k = 1 − p. Combined with Eq. (16), we use the bounds of the uniform and dynamic schedules in Section 4.1.3 to obtain:
TER_uniform ≤ Õ((G²/(µN))(√(D ln(N) ln(1/p)/(NR)) + 4/p)),    (17)
TER_dynamic ≤ Õ((G²/(µN))(√(D ln(1/p)/(NR)) + 4/p)),    (18)
where we again ignore κ and other constants. Similarly, we can extend the momentum methods.
(2) True Risk by Uniform Stability. Following Bassily et al. (2019), we use uniform stability (defined in Definition A.1) to extend the empirical bounds. We restate the related definition and theorems as follows.
Definition A.1 (Uniform stability). Let s > 0. A randomized algorithm M : D^N → Θ is s-uniformly stable w.r.t. the loss function f if, for any neighboring datasets d and d′, we have sup_{x∈X} E[f(M(d); x) − f(M(d′); x)] ≤ s, where the expectation is over the internal randomness of M.
Theorem A.2 (See, e.g., (Shalev-Shwartz & Ben-David, 2014)). Suppose M : D^N → Θ is an s-uniformly stable algorithm w.r.t. the loss function f. Let D be any distribution over the data space and let d ∼ D^N. The following holds true.
E_{d∼D^N}[E[f(M(d); D) − f(M(d); d)]] ≤ s, where the second expectation is over the internal randomness of M. f(M(d); D) and f(M(d); d) represent the true loss and the empirical loss, respectively.
Theorem A.3 (Uniform stability of PGD from (Bassily et al., 2019)). Suppose η < 2/M for M-smooth, G-Lipschitz f(θ; x). Then PGD is s-uniformly stable with s = G²Tη/N.
Combining Theorems A.2 and A.3, we obtain the following:
TER ≤ EER + G²ηT/N.
Because the EER in this paper comprises a γ^T or similar exponential terms, unlike (Bassily et al., 2019), we cannot directly minimize the TER upper bound w.r.t. T and η in the presence of a polynomial form of γ^T and T. Therefore, we still use T = O(ln(N²R/D)) and the η minimizing the EER. Note that G²ηT/N ≤ O((G²/(MN)) ln(N²R/D)) ≤ O(G²/M), where we assume N ≫ D and use lnN ≤ N. Because the term O(G²/M) is constant and independent of dimension, we follow (Bassily et al., 2019) and drop the term when comparing the bounds. After dropping the additive term, it is obvious that the advantage of dynamic schedules is still maintained, since TER ≤ EER. A similar extension can be derived for (Wang et al., 2017). We summarize the results and compare them to prior works in Table 3, where we include an additional method: Snowball Stochastic Gradient Descent (SSGD). SSGD dynamically schedules the batch size to achieve an optimal convergence rate in linear time.
Discussion. By using uniform stability, we successfully transfer the advantage of our dynamic schedules from empirical bounds to true risk bounds. The inherent reason is that our bounds only need lnN iterations to reach the preferred final loss. With uniform stability, the logarithmic T reduces the gap caused by the transfer. Compared to (Feldman et al., 2020; Bassily et al., 2019), our method has remarkably improved efficiency in T, from N or N² to ln(N). That implies fewer iterations are required to converge to the same generalization error.
B PRELIMINARIES
B.1 PRIVACY
Lemma B.1 (Composition & Post-processing). Let two mechanisms be M : Dn → Y and M ′ : Dn × Y → Z . Suppose M satisfies (ρ1, a)-zCDP and M ′(·, y) satisfies (ρ2, a)-zCDP for ∀y ∈ Y . Then, mechanism M ′′ : Dn → Z (defined by M ′′(x) = M ′(x,M(x))) satisfies (ρ1 + ρ2)-zCDP. Definition B.1 (Sensitivity). The sensitivity of a gradient query∇t to the dataset {xi}Ni=1 is
∆_2(∇_t) = max_n ‖(1/N)∑_{j=1, j≠n}^N ∇_t^{(j)} − (1/N)∑_{j=1}^N ∇_t^{(j)}‖_2 = (1/N) max_n ‖∇_t^{(n)}‖_2,    (19)
where ∇_t^{(n)} denotes the gradient of the n-th sample.
Lemma B.2 (Gaussian mechanism (Bun & Steinke, 2016)). Let f : D^n → Z have sensitivity ∆. Define a randomized algorithm M : D^n → Z by M(x) ← f(x) + N(0, ∆²σ²I). Then M satisfies (1/(2σ²))-zCDP.
Lemma B.3 ((Bun & Steinke, 2016)). If M is a mechanism satisfying ρ-zCDP, then M is (ρ + 2√(ρ ln(1/δ)), δ)-DP for any δ > 0. By solving ρ + 2√(ρ ln(1/δ)) = ε, we can get ρ = ε + 2 ln(1/δ) + 2√(ln(1/δ)(ε + ln(1/δ))).
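For reference, the conversion in Lemma B.3 can be checked numerically; the sketch below (ours) computes the (ε, δ)-DP guarantee of a ρ-zCDP mechanism and inverts it for a target ε by bisection rather than relying on the closed form.

```python
import math

def zcdp_to_dp(rho, delta):
    """Lemma B.3: a rho-zCDP mechanism is (rho + 2*sqrt(rho*ln(1/delta)), delta)-DP."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

def dp_to_zcdp(eps, delta, tol=1e-12):
    """Largest rho whose converted epsilon does not exceed eps (bisection;
    zcdp_to_dp is increasing in rho, and the solution lies in [0, eps])."""
    lo, hi = 0.0, eps
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if zcdp_to_dp(mid, delta) <= eps:
            lo = mid
        else:
            hi = mid
    return lo

eps, delta = 1.0, 1e-5
rho = dp_to_zcdp(eps, delta)
print(rho, zcdp_to_dp(rho, delta))   # the converted value is (approximately) eps
```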
B.2 AUXILIARY LEMMAS
Lemma B.4. If max_n ‖x_n‖_2 = 1 and (1/N)∑_n x_n = 0, then the gradient sensitivity of the squared loss will be
∆_2(∇) = max_i (1/N)√(2f(θ; x_i)) ‖x_i‖_2 ≤ (1/2)(D_M‖θ‖² + 1),
where Θ_M is the set of all possible parameters θ_t generated by the learning algorithm M.
Proof. According to the definition of sensitivity in Eq. (19), we have
∆_2(∇) = max_i ‖∇^{(i)}‖_2 = max_n (1/n)‖A^{(i)}θ − x_i‖_2,
where i denotes the index of a sample in the dataset. Here, we assume it is constant 1. We may get
‖A^{(i)}θ − x_i‖_2² = ‖x_i(x_i^⊤θ − 1)‖_2² = (x_i^⊤θ − 1)²‖x_i‖_2² = 2f(θ; x_i)‖x_i‖_2²,
where f(θ; x_i) = (1/2)(x_i^⊤θ − 1)². Thus,
∆_2(∇) = max_i (1/N)√(2f(θ; x_i)) ‖x_i‖_2.
Since ‖x_n‖_2 ≤ 1 and (1/N)∑_{n=1}^N x_n = 0,
f(θ) = (1/(2N))∑_{n=1}^N [(x_n^⊤θ)² − 2x_n^⊤θ + 1] ≤ (1/(2N))∑_{n=1}^N [(‖x_n‖‖θ‖)² + 1] ≤ (1/2)(D_M‖θ‖² + 1).
Lemma B.5. Assume assumptions in Theorem 4.4 are satisfied. Given variables defined in Theorem 4.4, the following inequality holds true:
∑_{t=1}^T γ^{T−t}(2(1−β)η_t/b_t) ∑_{i=1}^t β^{t−i}‖∇_t − ∇_i‖² ≤ (η_0³βγ/(2M(1−β)³(γ−β)²)) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖².
Proof. We first handle the inner summation. By smoothness, the inequality ‖∇f(x)−∇f(y)‖ ≤ M ‖x− y‖ holds true. Thus,
∑_{i=1}^t β^{t−i}‖∇_t − ∇_i‖² ≤ M² ∑_{i=1}^t β^{t−i}‖θ_t − θ_i‖²
= M² ∑_{k=0}^{t−1} β^k‖θ_t − θ_{t−k}‖²
= M² ∑_{k=0}^{t−1} β^k ‖∑_{i=t−k}^{t−1} η_iv_{i+1}/b_i‖² ≤ M² ∑_{k=0}^{t−1} β^k (∑_{j=t−k}^{t−1} η_j²/b_j²)(∑_{i=t−k}^{t−1} ‖v_{i+1}‖²),
where the last inequality is by the Cauchy-Schwarz inequality. Because 1/b_t = 1/(1−β^t) ≤ 1/(1−β) and η_t = η_0/(2M),
∑_{i=1}^t β^{t−i}‖∇_t − ∇_i‖² ≤ (η_0²/(4(1−β)²)) ∑_{k=0}^{t−1} β^k k ∑_{i=t−k}^{t−1} ‖v_{i+1}‖²
= (η_0²/(4(1−β)²)) ∑_{k=0}^{t−1} β^k k ∑_{i=1}^{t−1} ‖v_{i+1}‖² I(i ≥ t−k)
= (η_0²/(4(1−β)²)) ∑_{i=1}^{t−1} ‖v_{i+1}‖² ∑_{k=0}^{t−1} β^k k I(k ≥ t−i)
= (η_0²/(4(1−β)²)) ∑_{i=1}^{t−1} ‖v_{i+1}‖² ∑_{k=t−i}^{t−1} β^k k,    (20)
where I(·) is the indicator function, which outputs 1 if the condition holds true and 0 otherwise. Denote the left-hand side of the conclusion as LHS. We plug Eq. (20) into the LHS to get
LHS ≤ ∑_{t=1}^T γ^{T−t}(1/b_t)(η_0³/(4M(1−β))) ∑_{i=1}^{t−1}‖v_{i+1}‖² ∑_{k=t−i}^{t−1} β^k k
≤ (η_0³/(4M(1−β)²)) ∑_{t=1}^T γ^{T−t} ∑_{i=1}^{t−1}‖v_{i+1}‖² ∑_{k=t−i}^{t−1} β^k k,
where we relax the upper bound by 1/b_t = 1/(1−β^t) ≤ 1/(1−β). Using Lemma B.6 directly leads to the conclusion:
LHS ≤ (η_0³βγ/(2M(1−β)³(γ−β)²)) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖².
Lemma B.6. Given variables defined in Theorem 4.4, the following inequality holds true:
∑_{t=1}^T γ^{T−t} ∑_{i=1}^{t−1}‖v_{i+1}‖² ∑_{k=t−i}^{t−1} kβ^k ≤ (2βγ/((γ−β)²(1−β))) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖².
Proof. We first derive the summation:
U_1(t, i) ≜ ∑_{k=t−i}^{t−1} β^k k = ∑_{k=t−i}^{t−1} ∑_{j=1}^k β^k
= ∑_{k=t−i}^{t−1} ∑_{j=1}^{t−1} β^k I(j ≤ k)
= ∑_{j=1}^{t−1} ∑_{k=max(t−i,j)}^{t−1} β^k
= ∑_{j=1}^{t−1} (β^{max(t−i,j)} − β^t)/(1−β)
= (1/(1−β))((t−i)β^{t−i} + (β^{t−i+1} − β^t)/(1−β) − (β − β^t)/(1−β))
= (1/(1−β))((t−i)β^{t−i} + (β^{t−i+1} − β)/(1−β)).
Now, we substitute U1(t, i) into LHS and replace t− i by j, i.e., t = j + i, to get
LHS = ∑_{t=1}^T γ^{T−t} ∑_{i=1}^{t−1} ‖v_{i+1}‖² (1/(1−β))((t−i)β^{t−i} + (β^{t−i+1} − β)/(1−β))
= ∑_{i=1}^{T−1} ‖v_{i+1}‖² ∑_{t=i+1}^T γ^{T−t} (1/(1−β))((t−i)β^{t−i} + (β^{t−i+1} − β)/(1−β))
= ∑_{i=1}^{T−1} ‖v_{i+1}‖² ∑_{j=1}^{T−i} γ^{T−(j+i)} (1/(1−β))(jβ^j + (β^{j+1} − β)/(1−β))
= ∑_{i=1}^{T−1} γ^{T−i} ‖v_{i+1}‖² ∑_{j=1}^{T−i} γ^{−j} (1/(1−β))(jβ^j + (β^{j+1} − β)/(1−β))
≤ (1/(1−β)) ∑_{i=1}^{T−1} γ^{T−i} ‖v_{i+1}‖² ∑_{j=1}^{T−i} (j(β/γ)^j + (β/(1−β))(β/γ)^j).
Let a = β/γ, we show
∑_{j=1}^{T−i} ja^j = ∑_{j=1}^{T−i} ∑_{o=1}^{j} a^j = ∑_{o=1}^{T−i} ∑_{j=o}^{T−i} a^j = ∑_{o=1}^{T−i} (a^o − a^{T−i+1})/(1−a) = (a − a^{T−i+1})/(1−a)² − (T−i)a^{T−i+1}/(1−a) ≤ a/(1−a)².
Thus,
LHS ≤ (1/(1−β)) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖² (a/(1−a)² + (β/(1−β)) ∑_{j=1}^{T−i} a^j)
≤ (1/(1−β)) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖² (a/(1−a)² + (β/(1−β))(a/(1−a)))
≤ (2a/((1−a)²(1−β))) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖²,
because γ < 1, β < a = β/γ, and
a/(1−a)² + (β/(1−β))(a/(1−a)) ≤ 2a/(1−a)².
Therefore,
LHS ≤ (2a/((1−a)²(1−β))) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖² = (2βγ/((γ−β)²(1−β))) ∑_{i=1}^{T−1} γ^{T−i}‖v_{i+1}‖².
Lemma B.7. Suppose γ ∈ (0, 1) and β ∈ (0, 1). Define
T̂ = max t s.t. γ^{t−1} ≥ (1−β)/(1−β^t).
If t ≤ T̂, then (1−β)/(1−β^t) ≤ γ^{t−1}; if t > T̂, then (1−β)/(1−β^t) < γ^{T̂−1}.
Proof. Define h(t) = γ^{t−1}(1 − β^t), whose derivative is
h′(t) = γ^{t−1}(1−β^t) ln γ + γ^{t−1}(−β^t) ln β = γ^{t−1}[ln γ − β^t(ln γ + ln β)] = γ^{t−1}[1 − β^t(1 + log_γ β)] ln γ.
A simple calculation shows 1 − β^t(1 + log_γ β)|_{t=0} = −log_γ β < 0 and lim_{t→+∞} 1 − β^t(1 + log_γ β) = 1. When t = −log_β(1 + log_γ β), denoted as t_0, we have 1 − β^t(1 + log_γ β) = 0. Because 1 − β^t(1 + log_γ β) is monotonically increasing in t and γ^{t−1} ln γ is negative, h′(t) ≥ 0 if t ≤ t_0; otherwise, h′(t) < 0. Therefore, h(t) is a concave function. Because h(1) = 1−β and h(T̂) = γ^{T̂−1}(1−β^{T̂}) ≥ 1−β > 0, we have h(t) ≥ 1−β for t = 1, . . . , T̂. Thus, for all t ∈ [1, T̂], we have (1−β)/(1−β^t) ≤ γ^{t−1}.
For t > T̂, because (1−β)/(1−β^t) monotonically decreases in t, we have (1−β)/(1−β^t) < (1−β)/(1−β^{T̂}) ≤ γ^{T̂−1}.
C PROOFS
Proof of Theorem 3.1. Because all sample gradients are G-Lipschitz continuous, the sensitivity of the averaged gradient is upper bounded by G/N. Based on Lemma B.2, the privacy cost of g_t is 1/(2σ_t²).¹
Here, we make the output of each iteration a tuple (θ_{t+1}, v_{t+1}). For the 1st iteration, because θ_1 does not contain private information due to random initialization, the mapping
[v_2, θ_2] = [g_1, θ_1 − η_1g_1]
is ρ̂_1-zCDP, where ρ̂_1 = 1/(2σ_1²).
¹For brevity, when we say the privacy cost of some value, e.g., a gradient, we actually refer to the cost of the mechanism that outputs the value.
Suppose the output of the t-th iteration, (θ_t, v_t), is ρ̂_t-zCDP. At each iteration, we have the following mapping (θ_t, v_t) → (θ_{t+1}, v_{t+1}), defined as
[v_{t+1}, θ_{t+1}] = [φ(v_t, g_t), θ_t − η_tφ(v_t, g_t)].
Thus, the output tuple (θ_{t+1}, v_{t+1}) is (ρ̂_t + 1/(2σ_t²))-zCDP by Lemma B.1.
Thus, the recursion implies that (θ_{T+1}, v_{T+1}) has privacy cost
ρ̂_{T+1} = ρ̂_T + 1/(2σ_T²) = · · · = ∑_{t=1}^T 1/(2σ_t²) = (1/2)∑_{t=1}^T ρ_t ≤ (1/2)(R − R_T) ≤ (1/2)R.
Let ρ = ρ̂T+1. Then we can get the conclusion.
C.1 GRADIENT DESCENTS
Proof of Theorem 4.1. With the definition of smoothness in Definition 3.3 and Eq. (1), we have
f(θ_{t+1}) − f(θ_t) ≤ −η_t∇_t^⊤(∇_t + Gσ_tν_t/N) + (M/2)η_t²‖∇_t + Gσ_tν_t/N‖²
= −η_t(1 − (M/2)η_t)‖∇_t‖² − (1 − Mη_t)η_t∇_t^⊤Gσ_tν_t/N + (M/2)η_t²‖Gσ_tν_t/N‖²
≤ −2µη_t(1 − (M/2)η_t)(f(θ_t) − f(θ*)) − (1 − Mη_t)η_t∇_t^⊤Gσ_tν_t/N + (M/2)η_t²‖Gσ_tν_t/N‖²,
where the last inequality is due to the Polyak-Lojasiewicz condition. Taking expectation on both sides, we can obtain
E[f(θ_{t+1})] − E[f(θ_t)] ≤ −2µη_t(1 − (M/2)η_t)(E[f(θ_t)] − f(θ*)) + (M/2)(η_tGσ_t/N)² E‖ν_t‖²,
which can be reformulated by subtracting f(θ*) on both sides and rearranging as
E[f(θ_{t+1})] − f(θ*) ≤ (1 − 2µη_t(1 − (M/2)η_t))(E[f(θ_t)] − f(θ*)) + (M/2)(η_tGσ_t/N)²D.
Recursively using the inequality, we can get
E[f(θ_{T+1})] − f(θ*) ≤ ∏_{t=1}^T (1 − 2µη_t(1 − (M/2)η_t)) (E[f(θ_1)] − f(θ*)) + (MD/2) ∑_{t=1}^T ∏_{i=t+1}^T (1 − 2µη_i(1 − (M/2)η_i)) (η_tGσ_t/N)².
Let η_t ≡ 1/M. Then the above inequality can be simplified as
E[f(θ_{T+1})] − f(θ*) ≤ γ^T(E[f(θ_1)] − f(θ*)) + R ∑_{t=1}^T γ^{T−t} (MD/(2R))(η_tG/N)² σ_t²
= γ^T(E[f(θ_1)] − f(θ*)) + R ∑_{t=1}^T γ^{T−t} α σ_t² (E[f(θ_1)] − f(θ*))
= (γ^T + R ∑_{t=1}^T q_tσ_t²)(f(θ_1) − f(θ*)).
Proof of Theorem 4.2. The minimizer of the upper bound of Eq. (3) can be written as
T* = argmin_T γ^T + ακ(1 − γ^T)T,    (21)
where we substitute σ² = T/R in the second line. To solve this convex minimization problem, we need to set its gradient to zero, which involves an equation of the form Tγ^T = c for some real constant c. However, the solution is W_k(c) for some integer k, where W is the Lambert W function, which does not have a simple analytical form. Instead, because γ^T > 0, we can minimize a surrogate upper bound as follows:
T* = argmin_T γ^T + ακT = (1/ln(1/γ)) ln(ln(1/γ)/(κα)),    if κα + ln γ < 0,    (22)
where we use the surrogate upper bound in the second line and utilize γ = 1 − 1/κ. However, the minimizer of the surrogate objective is not optimal for the original objective. When κ is large, the term −ακγ^T T cannot be neglected as we expect. On the other hand, T suffers from explosion if κ → ∞ and meanwhile 1/γ → 1⁺. The tendency is counterintuitive since a small T should be taken for sharp losses. To fix the issue, we change the form of T* to
T* = (1/ln(1/γ)) ln(1 + ln(1/γ)/α),    (23)
which gradually converges to 0 as κ → ∞. Now we substitute Eq. (23) into the original objective function, Eq. (21), to get
ERUB_uniform = (1/(1 + ln(1/γ)/α)) [1 + κ ln(1 + ln(1/γ)/α)].    (24)
Notice that
ln(1/γ) = ln(κ/(κ−1)) = ln(1 + 1/(κ−1)) ≤ 1/(κ−1) ≤ 1/(cκ),
because κ ≥ 1/(1−c) > 1 for some constant c ∈ (0, 1). In addition,
ln(1/γ) = −ln(1 − 1/κ) ≥ 1/κ.
Now, we can get the upper bound of Eq. (24) as
ERUB_uniform ≤ (κ/(κ + 1/α)) [1 + κ ln(1 + 1/(cκα))]
≤ c_1 (κ/(κ + 1/α)) κ [ln(1 + 1/(κα)) + ln(1/c)]
≤ c_1c_2 (κ²/(κ + 1/α)) ln(1 + 1/(κα))
for some constants c_1, c_2 and large enough 1/α. Also, we can get the lower bound
ERUB_uniform ≥ (cκ/(cκ + 1/α)) [1 + κ ln(1 + 1/(κα))] ≥ c (κ²/(κ + 1/α)) ln(1 + 1/(κα)),
where we use the condition c ∈ (0, 1). Thus, ERUB_uniform = Θ((κ²/(κ + 1/α)) ln(1 + 1/(κα))).
Proof of Lemma 4.1. By ∑_{t=1}^T σ_t^{−2} = R and the Cauchy-Schwarz inequality, we can derive the achievable lower bound as
R ∑_t q_tσ_t² = ∑_t (1/σ_t²) ∑_t q_tσ_t² ≥ (∑_{t=1}^T √q_t)²,
where the inequality becomes an equality if and only if s/σ_t² = q_tσ_t², i.e., σ_t = (s/q_t)^{1/4}, for some positive constant s. The equality ∑_{t=1}^T σ_t^{−2} = R immediately suggests √s = (1/R)∑_{t=1}^T √q_t. Thus, we get the σ_t.
Notice
T ∑_{t=1}^T q_t − (∑_{t=1}^T √q_t)² = T² (1/T)∑_{t=1}^T (√q_t − (1/T)∑_{i=1}^T √q_i)² = T² Var[√q_t],    (25)
where the variance is w.r.t. t.
Proof of Theorem 4.3. The upper bound of Eq. (3) can be written as
ERUB_dyn = γ^T + ∑_{t=1}^T γ^{T−t}αRσ_t² = γ^T + α(∑_{t=1}^T √(γ^{T−t}))² = γ^T + α((1 − γ^{T/2})/(1 − √γ))²,
where we make use of Lemma 4.1. Then, the minimizer of the ERUB is
T* = argmin_T γ^T + α((1 − γ^{T/2})/(1 − √γ))² = 2 log_γ(α/(α + (1 − √γ)²)).    (26)
We can substitute Eq. (26) into ERUB_dyn to get
ERUB_dyn^min = (α/(α + (1 − √γ)²))² + α(1/(1 − √γ))²(1 − α/(α + (1 − √γ)²))²
= (α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1))² + α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1)²
= α(1 − √γ)^{−2}/(α(1 − √γ)^{−2} + 1).
Notice that (1 − √γ)^{−2} = κ² + κ² − κ + 2κ√(κ(κ−1)) = κ(2κ − 1 + 2√(κ(κ−1))), and it is bounded by
κ(2κ − 1 + 2√(κ(κ−1))) ≤ 4κ²,
κ(2κ − 1 + 2√(κ(κ−1))) ≥ κ(2κ − (3κ−2) + 2√((κ−1)(κ−1))) = κ(−κ + 2 + 2κ − 2) = κ².
Therefore, κ ≤ (1 − √γ)^{−1} ≤ 2κ, with which we can derive
ERUB_dyn^min ≤ 4κ²α/(κ²α + 1),
ERUB_dyn^min ≥ κ²α/(4κ²α + 1) ≥ (1/4) κ²α/(κ²α + 1).
Thus, ERUB_dyn^min = Θ(κ²α/(κ²α + 1)).
C.2 GRADIENT DESCENTS WITH MOMENTUM
Proof of Theorem 4.4. Without loss of generality, we absorb the Cσt/N into the variance of νt such that νt ∼ N (0, Cσ 2 t N I) and gt ← ∇t + νt. Define bt = 1− β t.
By smoothness and Eq. (1), we have
f(θt+1)− f(θt) ≤ ∇>t (θt+1 − θt) + 1
2 M ‖θt+1 − θt‖2
= −ηt b2t bt∇>t vt+1 + 1 2 M η2t b2t ‖vt+1‖2
= ηt b2t
( ‖bt∇t − vt+1‖2 − ‖bt∇t‖2 − ‖vt+1‖2 ) + 1
2 M η2t b2t ‖vt+1‖2
= ηt b2t ‖bt∇t − vt+1‖2︸ ︷︷ ︸
U1(t)
−ηt ‖∇t‖2 − ηt b2t (1− 1 2 Mηt) ‖vt+1‖2 , (27)
where only the U1(t) is non-negative. Specifically, U1(t) describes the difference between current gradient and the average. We can expand vt+1 to get an upper bound:
U1(t) = ‖bt∇t − vt+1‖2 = ∥∥∥(1− β)∑t
i=1 βt−i∇t − (1− β) ∑t i=1 βt−igi ∥∥∥2 = (1− β)2 ∥∥∥∑t i=1 βt−i(∇t − gi) ∥∥∥2
= (1− β)2 ∥∥∥∑t
i=1 βt−i(∇t −∇i) + ∑t i=1 βt−i(∇i − gi) ∥∥∥2
≤ 2(1− β)2 [∥∥∥∑t
i=1 βt−i(∇t −∇i) ∥∥∥2 + ∥∥∥∑t i=1 βt−i(∇i − gi) ∥∥∥2]
≤ 2(1− β) bt∑ti=1 βt−i ‖∇t −∇i‖2︸ ︷︷ ︸ U2(t) (gradient variance) +(1− β) ∥∥∥∑t i=1 βt−iνi ∥∥∥2︸ ︷︷ ︸ noise variance
where we use ‖x+ y‖2 ≤ (‖x‖+ ‖y‖)2 ≤ 2(‖x‖2 + ‖y‖2). The last inequality can be proved by Cauchy-Schwartz inequality for each coordinate.
We plug the U1(t) into Eq. (27) and use the PL condition to get
$$f(\theta_{t+1}) - f(\theta_t) \le \frac{\eta_t}{b_t^2}U_1(t) - \eta_t\|\nabla_t\|^2 - \frac{\eta_t}{b_t^2}\left(1 - \tfrac{1}{2}M\eta_t\right)\|v_{t+1}\|^2$$
$$\le -\eta_t\|\nabla_t\|^2 + \frac{2(1-\beta)\eta_t}{b_t^2}\left[b_t U_2(t) + (1-\beta)\left\|\sum_{i=1}^{t}\beta^{t-i}\nu_i\right\|^2\right] - \frac{\eta_t}{b_t^2}\left(1 - \tfrac{1}{2}M\eta_t\right)\|v_{t+1}\|^2$$
$$\le -2\mu\eta_t\left(f(\theta_t) - f(\theta^*)\right) + \frac{2(1-\beta)\eta_t}{b_t}U_2(t) + \frac{2(1-\beta)^2\eta_t}{b_t^2}\left\|\sum_{i=1}^{t}\beta^{t-i}\nu_i\right\|^2 - \frac{\eta_t}{b_t^2}\left(1 - \tfrac{1}{2}M\eta_t\right)\|v_{t+1}\|^2.$$
Rearranging terms and taking expectations, we obtain
$$\mathbb{E}[f(\theta_{t+1})] - f(\theta^*) \le \gamma\left(\mathbb{E}[f(\theta_t)] - f(\theta^*)\right) + \frac{2(1-\beta)^2\eta_t}{b_t^2}\sum_{i=1}^{t}\beta^{2(t-i)}\mathbb{E}\|\nu_i\|^2 + \frac{2(1-\beta)\eta_t}{b_t}\mathbb{E}[U_2(t)] - \frac{\eta_t}{b_t^2}\left(1 - \tfrac{1}{2}M\eta_t\right)\mathbb{E}\|v_{t+1}\|^2$$
$$= \gamma\left(\mathbb{E}[f(\theta_t)] - f(\theta^*)\right) + \frac{2(1-\beta)^2\eta_t}{b_t^2}\sum_{i=1}^{t}\beta^{2(t-i)}\frac{C^2 D\sigma_t^2}{N^2} + \frac{2(1-\beta)\eta_t}{b_t}\mathbb{E}[U_2(t)] - \frac{\eta_t}{b_t^2}\left(1 - \tfrac{1}{2}M\eta_t\right)\mathbb{E}\|v_{t+1}\|^2,$$
where $\gamma = 1 - \eta_0/\kappa = 1 - 2\mu\eta_t$. The recursive inequality implies
$$\mathbb{E}[f(\theta_{T+1})] - f(\theta^*) \le \gamma^T\left(f(\theta_1) - f(\theta^*)\right) + \sum_{t=1}^{T}\gamma^{T-t}\frac{2(1-\beta)^2\eta_t}{b_t^2}\sum_{i=1}^{t}\beta^{2(t-i)}\frac{C^2 D\sigma_t^2}{N^2} + \sum_{t=1}^{T}\gamma^{T-t}\frac{2(1-\beta)\eta_t}{b_t}\mathbb{E}[U_2(t)] - \sum_{t=1}^{T}\gamma^{T-t}\frac{\eta_t}{b_t^2}\left(1 - \tfrac{1}{2}M\eta_t\right)\mathbb{E}\|v_{t+1}\|^2$$
$$= \Bigg(\gamma^T + 2\eta_0\alpha R\underbrace{\sum_{t=1}^{T}\gamma^{T-t}\frac{(1-\beta)^2}{b_t^2}\sum_{i=1}^{t}\beta^{2(t-i)}\sigma_t^2}_{U_3}\Bigg)\left(f(\theta_1) - f(\theta^*)\right) + \underbrace{\sum_{t=1}^{T}\gamma^{T-t}\frac{2(1-\beta)\eta_t}{b_t}\mathbb{E}[U_2(t)] - \sum_{t=1}^{T}\gamma^{T-t}\frac{\eta_t}{b_t^2}\left(1 - \tfrac{1}{2}M\eta_t\right)\mathbb{E}\|v_{t+1}\|^2}_{U_4(t)},$$
where we utilize $\alpha = \frac{DC^2}{2MN^2R}\frac{1}{f(\theta_1) - f(\theta^*)}$ and $\eta_t = \frac{\eta_0}{2M}$.
By Lemma B.5, we have
$$\sum_{t=1}^{T}\gamma^{T-t}\frac{2(1-\beta)\eta_t}{b_t}U_2(t) \le \frac{\eta_0^3\beta\gamma}{2M(1-\beta)^3(\gamma-\beta)^2}\sum_{i=1}^{T-1}\gamma^{T-i}\|v_{i+1}\|^2.$$
Thus, by $\frac{1}{b_t} \ge 1$,
$$U_4(t) \le \frac{\eta_0^3\beta\gamma}{2M(1-\beta)^3(\gamma-\beta)^2}\sum_{i=1}^{T-1}\gamma^{T-i}\mathbb{E}\|v_{i+1}\|^2 - \frac{\eta_0}{2M}\left(1 - \frac{\eta_0}{4}\right)\sum_{t=1}^{T}\gamma^{T-t}\mathbb{E}\|v_{t+1}\|^2$$ | 1. What is the focus of the paper regarding gradient descent and privacy constraints?
2. What are the strengths of the proposed approach, particularly in terms of its novel analysis and extension to momentum-based gradient descent?
3. What are the weaknesses of the paper regarding its writing clarity, experimental section, and notation choices?
4. Do you have any questions regarding the utility bound for non-private GD, the difference between v_t and g_t, and the definition of the ERUB?
5. How does the paper improve the utility analysis of a uniform schedule, and what is the improvement factor compared to Wang's bound?
6. What is the takeaway from the discussion of robustness, and how does it relate to noise in samples or labels leading to high loss curvature? | Review | Review
Summary
Gradient Descent and related variants are the de facto standard algorithms for optimizing empirical risk functions. Since published models have been shown in the literature to leak private information, the problem of performing gradient descent under privacy constraints is an important one. Given a fixed privacy budget R, private gradient descent adds noise to the gradients at each round, ensuring the overall privacy budget R by composition. However, this still leaves open the question of which privacy schedule is best, since any schedule whose privacy budgets sum up to R achieves the privacy objective. In this paper they compute the optimal privacy schedule from an accuracy perspective via a novel analysis of the convergence of private gradient descent on loss functions that satisfy the PL condition; the optimal schedule turns out to be exponentially decaying noise (an increasing privacy budget for each round). These results extend to a privatized variant of the momentum-based gradient descent algorithm, although the dynamic privacy schedule has less improvement there. Experimental results show that dynamic privacy schedules lead to enhanced accuracy even absent convexity.
Pros
improves over the previous state-of-the-art analysis of private gradient descent (Wang 2017) by a log N^2 factor
Novel analysis for an important practical problem that also yields intuition via the notion of noise influence / propagation
Cons
Grammatical Errors throughout: e.g. sentence fragment page 6 “because our bound saved a factor of log N and is thus tighter, as compared to Wang”
Overall the clarity of the writing and explanations, both in terms of prefacing the technical content, and readability could be improved.
Experimental section needs more detailed fleshing out: 1) how does the middle pane show the trend of variance of influence? 2) how are we modeling influence for DNN and how are we then computing the optimal dynamic schedule 3) does plot #1 perform a sensitivity analysis on the ERR via retraining to estimate q_t empirically? Explain this.
Comments
why does the utility bound for non-private GD in table 1 have a factor of R (privacy budget) in the denominator?
What is the difference between v_t and g_t? It says v_t = g_t on the bottom of page 3.
v_t and ν_t look very similar; choose more distinct notation
write out the definition of the ERUB in the main text on page 4
4.11 uses the improved analysis to sharpen the utility analysis of a uniform schedule by choosing an optimal value of T. Please contrast the Wang bound more clearly with (5) and state the improvement factor. We can read this from the analysis but it should be stated more prominently.
Discussion of robustness on page 6 is a little imprecisely worded. Is the takeaway that when the condition number is very high relative to the sample size, the dynamic schedule ERUB doesn’t blow up? Since we are calling this robustness, is it obvious that noise in the samples or labels leads to high loss curvature? |
ICLR | Title
Designing Neural Network Architectures using Reinforcement Learning
Abstract
At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.
1 INTRODUCTION
Deep convolutional neural networks (CNNs) have seen great success in the past few years on a variety of machine learning problems (LeCun et al., 2015). A typical CNN architecture consists of several convolution, pooling, and fully connected layers. While constructing a CNN, a network designer has to make numerous design choices: the number of layers of each type, the ordering of layers, and the hyperparameters for each type of layer, e.g., the receptive field size, stride, and number of receptive fields for a convolution layer. The number of possible choices makes the design space of CNN architectures extremely large and hence, infeasible for an exhaustive manual search. While there has been some work (Pinto et al., 2009; Bergstra et al., 2013; Domhan et al., 2015) on automated or computer-aided neural network design, new CNN architectures or network design elements are still primarily developed by researchers using new theoretical insights or intuition gained from experimentation.
In this paper, we seek to automate the process of CNN architecture selection through a meta-modeling procedure based on reinforcement learning. We construct a novel Q-learning agent whose goal is to discover CNN architectures that perform well on a given machine learning task with no human intervention. The learning agent is given the task of sequentially picking layers of a CNN model. By discretizing and limiting the layer parameters to choose from, the agent is left with a finite but large space of model architectures to search from. The agent learns through random exploration and slowly begins to exploit its findings to select higher performing models using the ε-greedy strategy (Mnih et al., 2015). The agent receives the validation accuracy on the given machine learning task as the reward for selecting an architecture. We expedite the learning process through repeated memory sampling using experience replay (Lin, 1993). We refer to this Q-learning based meta-modeling method as MetaQNN, which is summarized in Figure 1.1
We conduct experiments with a space of model architectures consisting of only standard convolution, pooling, and fully connected layers using three standard image classification datasets: CIFAR-10,
1For more information, model files, and code, please visit https://bowenbaker.github.io/metaqnn/
SVHN, and MNIST. The learning agent discovers CNN architectures that beat all existing networks designed only with the same layer types (e.g., Springenberg et al. (2014); Srivastava et al. (2015)). In addition, their performance is competitive against network designs that include complex layer types and training procedures (e.g., Clevert et al. (2015); Lee et al. (2016)). Finally, the MetaQNN selected models comfortably outperform previous automated network design methods (Stanley & Miikkulainen, 2002; Bergstra et al., 2013). The top network designs discovered by the agent on one dataset are also competitive when trained on other datasets, indicating that they are suited for transfer learning tasks. Moreover, we can generate not just one, but several varied, well-performing network designs, which can be ensembled to further boost the prediction performance.
2 RELATED WORK
Designing neural network architectures: Research on automating neural network design goes back to the 1980s when genetic algorithm-based approaches were proposed to find both architectures and weights (Schaffer et al., 1992). However, to the best of our knowledge, networks designed with genetic algorithms, such as those generated with the NEAT algorithm (Stanley & Miikkulainen, 2002), have been unable to match the performance of hand-crafted networks on standard benchmarks (Verbancsics & Harguess, 2013). Other biologically inspired ideas have also been explored; motivated by screening methods in genetics, Pinto et al. (2009) proposed a high-throughput network selection approach where they randomly sample thousands of architectures and choose promising ones for further training. In recent work, Saxena & Verbeek (2016) propose to sidestep the architecture selection process through densely connected networks of layers, which come closer to the performance of hand-crafted networks.
Bayesian optimization has also been used (Shahriari et al., 2016) for automatic selection of network architectures (Bergstra et al., 2013; Domhan et al., 2015) and hyperparameters (Snoek et al., 2012; Swersky et al., 2013). Notably, Bergstra et al. (2013) proposed a meta-modeling approach based on Tree of Parzen Estimators (TPE) (Bergstra et al., 2011) to choose both the type of layers and hyperparameters of feed-forward networks; however, they fail to match the performance of handcrafted networks.
Reinforcement Learning: Recently there has been much work at the intersection of reinforcement learning and deep learning. For instance, methods using CNNs to approximate the Q-learning utility function (Watkins, 1989) have been successful in game-playing agents (Mnih et al., 2015; Silver et al., 2016) and robotic control (Lillicrap et al., 2015; Levine et al., 2016). These methods rely on phases of exploration, where the agent tries to learn about its environment through sampling, and exploitation, where the agent uses what it learned about the environment to find better paths. In traditional reinforcement learning settings, over-exploration can lead to slow convergence times, yet over-exploitation can lead to convergence to local minima (Kaelbling et al., 1996). However, in the case of large or continuous state spaces, the ε-greedy strategy of learning has been empirically shown to converge (Vermorel & Mohri, 2005). Finally, when the state space is large or exploration is costly,
the experience replay technique (Lin, 1993) has proved useful in experimental settings (Adam et al., 2012; Mnih et al., 2015). We incorporate these techniques—Q-learning, the ε-greedy strategy and experience replay—in our algorithm design.
3 BACKGROUND
Our method relies on Q-learning, a type of reinforcement learning. We now summarize the theoretical formulation of Q-learning, as adapted to our problem. Consider the task of teaching an agent to find optimal paths as a Markov Decision Process (MDP) in a finite-horizon environment. Constraining the environment to be finite-horizon ensures that the agent will deterministically terminate in a finite number of time steps. In addition, we restrict the environment to have a discrete and finite state space S as well as action space U . For any state si ∈ S , there is a finite set of actions, U(si) ⊆ U , that the agent can choose from. In an environment with stochastic transitions, an agent in state si taking some action u ∈ U(si) will transition to state sj with probability ps′|s,u(sj |si, u), which may be unknown to the agent. At each time step t, the agent is given a reward rt, dependent on the transition from state s to s′ and action u. rt may also be stochastic according to a distribution pr|s′,s,u. The agent’s goal is to maximize the total expected reward over all possible trajectories, i.e., maxTi∈T RTi , where the total expected reward for a trajectory Ti is
$$R_{T_i} = \sum_{(s,u,s')\in T_i} \mathbb{E}_{r|s,u,s'}[r \mid s, u, s']. \qquad (1)$$
Though we limit the agent to a finite state and action space, there are still a combinatorially large number of trajectories, which motivates the use of reinforcement learning. We define the maximization problem recursively in terms of subproblems as follows. For any state si ∈ S and subsequent action u ∈ U(si), we define the maximum total expected reward to be Q∗(si, u). Q∗(·) is known as the action-value function and individual Q∗(si, u) are known as Q-values. The recursive maximization equation, which is known as Bellman’s Equation, can be written as
$$Q^*(s_i, u) = \mathbb{E}_{s_j|s_i,u}\left[\mathbb{E}_{r|s_i,u,s_j}[r \mid s_i, u, s_j] + \gamma\max_{u'\in U(s_j)} Q^*(s_j, u')\right]. \qquad (2)$$
In many cases, it is impossible to analytically solve Bellman’s Equation (Bertsekas, 2015), but it can be formulated as an iterative update
$$Q_{t+1}(s_i, u) = (1-\alpha)Q_t(s_i, u) + \alpha\left[r_t + \gamma\max_{u'\in U(s_j)} Q_t(s_j, u')\right]. \qquad (3)$$
Equation 3 is the simplest form of Q-learning proposed by Watkins (1989). For well formulated problems, limt→∞Qt(s, u) = Q∗(s, u), as long as each transition is sampled infinitely many times (Bertsekas, 2015). The update equation has two parameters: (i) α is a Q-learning rate which determines the weight given to new information over old information, and (ii) γ is the discount factor which determines the weight given to short-term rewards over future rewards. The Q-learning algorithm is model-free, in that the learning agent can solve the task without ever explicitly constructing an estimate of environmental dynamics. In addition, Q-learning is off policy, meaning it can learn about optimal policies while exploring via a non-optimal behavioral distribution, i.e. the distribution by which the agent explores its environment.
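To make Equation 3 concrete, a minimal tabular implementation might look as follows. This is an illustrative sketch of ours (the 0.5 initialization mirrors Algorithm 1 in the appendix, but the helper name, the string-encoded state, and the data structures are not the paper's code).

def q_update(Q, s, u, r, next_state_actions, alpha=0.01, gamma=1.0):
    """One step of Eq. (3): Q(s,u) <- (1-alpha)*Q(s,u) + alpha*(r + gamma * max_u' Q(s',u'))."""
    best_next = max((Q.get(sa, 0.5) for sa in next_state_actions), default=0.0)
    Q[(s, u)] = (1 - alpha) * Q.get((s, u), 0.5) + alpha * (r + gamma * best_next)
    return Q

# Example: a terminal transition whose reward is the validation accuracy of the sampled CNN.
Q = {}
q_update(Q, s="C(64,3,1)@depth1", u="terminate", r=0.87, next_state_actions=[])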
We choose the behavior distribution using an ε-greedy strategy (Mnih et al., 2015). With this strategy, a random action is taken with probability ε and the greedy action, $\max_{u\in U(s_i)} Q_t(s_i, u)$, is chosen with probability $1-\varepsilon$. We anneal ε from 1 → 0 such that the agent begins in an exploration phase and slowly starts moving towards the exploitation phase. In addition, when the exploration cost is large (which is true for our problem setting), it is beneficial to use the experience replay technique for faster convergence (Lin, 1992). In experience replay, the learning agent is provided with a memory of its past explored paths and rewards. At a given interval, the agent samples from the memory and updates its Q-values via Equation 3.
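In code, the behavior distribution and the replay step are only a few lines; the sketch below is ours (it reuses the q_update helper from the previous snippet and simplifies the replay memory to per-transition tuples), so it illustrates the idea rather than reproducing the released implementation.

import random

def epsilon_greedy(Q, state, actions, epsilon):
    """Random action with probability epsilon; otherwise the greedy action under Q."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda u: Q.get((state, u), 0.5))

def replay_sample(Q, replay_memory, n_samples=100):
    """Sample stored (state, action, reward, next_state_actions) tuples and re-apply Eq. (3)."""
    for s, u, r, nxt in random.sample(replay_memory, min(n_samples, len(replay_memory))):
        q_update(Q, s, u, r, nxt)
    return Q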
4 DESIGNING NEURAL NETWORK ARCHITECTURES WITH Q-LEARNING
We consider the task of training a learning agent to sequentially choose neural network layers. Figure 2 shows feasible state and action spaces (a) and a potential trajectory the agent may take along with the CNN architecture defined by this trajectory (b). We model the layer selection process as a Markov Decision Process with the assumption that a well-performing layer in one network should
also perform well in another network. We make this assumption based on the hierarchical nature of the feature representations learned by neural networks with many hidden layers (LeCun et al., 2015). The agent sequentially selects layers via the ε-greedy strategy until it reaches a termination state. The CNN architecture defined by the agent’s path is trained on the chosen learning problem, and the agent is given a reward equal to the validation accuracy. The validation accuracy and architecture description are stored in a replay memory, and experiences are sampled periodically from the replay memory to update Q-values via Equation 3. The agent follows an ε schedule which determines its shift from exploration to exploitation.
Our method requires three main design choices: (i) reducing CNN layer definitions to simple state tuples, (ii) defining a set of actions the agent may take, i.e., the set of layers the agent may pick next given its current state, and (iii) balancing the size of the state-action space—and correspondingly, the model capacity—with the amount of exploration needed by the agent to converge. We now describe the design choices and the learning process in detail.
4.1 THE STATE SPACE
Each state is defined as a tuple of all relevant layer parameters. We allow five different types of layers: convolution (C), pooling (P), fully connected (FC), global average pooling (GAP), and softmax (SM), though the general method is not limited to this set. Table 1 shows the relevant parameters for each layer type and also the discretization we chose for each parameter. Each layer has a parameter layer depth (shown as Layer 1, 2, ... in Figure 2). Adding layer depth to the state space allows us to constrict the action space such that the state-action graph is directed and acyclic (DAG) and also allows us to specify a maximum number of layers the agent may select before terminating.
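Concretely, a state can be encoded as a small tuple over the discretized parameters of Table 1; the field names and example values below are illustrative (Table 1 defines the exact parameters and bins), not the paper's code.

from collections import namedtuple

# Illustrative encoding of a state; the exact fields and discretization follow Table 1.
State = namedtuple("State", ["layer_type", "layer_depth", "filter_depth",
                             "filter_size", "stride", "fc_size", "rsize_bin"])

conv = State(layer_type="C", layer_depth=1, filter_depth=64, filter_size=3,
             stride=1, fc_size=0, rsize_bin=1)
pool = State(layer_type="P", layer_depth=2, filter_depth=0, filter_size=2,
             stride=2, fc_size=0, rsize_bin=1)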
Each layer type also has a parameter called representation size (R-size). Convolutional nets progressively compress the representation of the original signal through pooling and convolution. The presence of these layers in our state space may lead the agent on a trajectory where the intermediate signal representation gets reduced to a size that is too small for further processing. For example, five 2× 2 pooling layers each with stride 2 will reduce an image of initial size 32× 32 to size 1× 1. At this stage, further pooling, or convolution with receptive field size greater than 1, would be meaningless and degenerate. To avoid such scenarios, we add the R-size parameter to the state tuple s, which allows us to restrict actions from states with R-size n to those that have a receptive field size less than or equal to n. To further constrict the state space, we chose to bin the representation sizes into three discrete buckets. However, binning adds uncertainty to the state transitions: depending on the true underlying representation size, a pooling layer may or may not change the R-size bin. As a result, the action of pooling can lead to two different states, which we model as stochasticity in state transitions. Please see Figure A1 in appendix for an illustrated example.
4.2 THE ACTION SPACE
We restrict the agent from taking certain actions to both limit the state-action space and make learning tractable. First, we allow the agent to terminate a path at any point, i.e. it may choose a termination state from any non-termination state. In addition, we only allow transitions for a state with layer depth i to a state with layer depth i + 1, which ensures that there are no loops in the graph. This constraint ensures that the state-action graph is always a DAG. Any state at the maximum layer depth, as prescribed in Table 1, may only transition to a termination layer.
Next, we limit the number of fully connected (FC) layers to be at most two, because a large number of FC layers can lead to too many learnable parameters. The agent at a state with type FC may transition to another state with type FC if and only if the number of consecutive FC states is less than the maximum allowed. Furthermore, a state s of type FC with number of neurons d may only transition to either a termination state or a state s′ of type FC with number of neurons d′ ≤ d. An agent at a state of type convolution (C) may transition to a state with any other layer type. An agent at a state with layer type pooling (P) may transition to a state with any other layer type other than another P state, because consecutive pooling layers are equivalent to a single, larger pooling layer which could lie outside of our chosen state space. Furthermore, only states with representation size in bins (8, 4] and (4, 1] may transition to an FC layer, which ensures that the number of weights does not become unreasonably huge. Note that a majority of these constraints are in place to enable faster convergence on our limited hardware (see Section 5) and are not a limitation of the method itself.
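Several of these rules can be written as a simple predicate over (current state, candidate next state); the sketch below is a simplified illustration of ours, covers only a subset of the constraints, and uses the illustrative State tuple from above, so the constants and bin convention are assumptions rather than the full rule set.

MAX_LAYER_DEPTH = 12        # illustrative; the actual maximum is dataset-dependent (Table 1)
MAX_CONSECUTIVE_FC = 2

def allowed(state, nxt, consecutive_fc):
    if nxt.layer_depth != state.layer_depth + 1:
        return False                          # keep the state-action graph a DAG
    if state.layer_depth >= MAX_LAYER_DEPTH and nxt.layer_type not in ("SM", "GAP"):
        return False                          # at maximum depth, only termination layers
    if state.layer_type == "P" and nxt.layer_type == "P":
        return False                          # no consecutive pooling layers
    if nxt.layer_type == "FC":
        if consecutive_fc >= MAX_CONSECUTIVE_FC:
            return False                      # at most two FC layers
        if state.layer_type == "FC" and nxt.fc_size > state.fc_size:
            return False                      # FC sizes must be non-increasing
        if state.rsize_bin == 1:
            return False                      # only small representation sizes may go to FC
    return True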
4.3 Q-LEARNING TRAINING PROCEDURE
For the iterative Q-learning updates (Equation 3), we set the Q-learning rate (α) to 0.01. In addition, we set the discount factor (γ) to 1 to not over-prioritize short-term rewards. We decrease ε from 1.0 to 0.1 in steps, where the step-size is defined by the number of unique models trained (Table 2). At ε = 1.0, the agent samples CNN architectures with a random walk along a uniformly weighted Markov chain. Every topology sampled by the agent is trained using the procedure described in Section 5, and the prediction performance of this network topology on the validation set is recorded. We train a larger number of models at ε = 1.0 as compared to other values of ε to ensure that the agent has adequate time to explore before it begins to exploit. We stop the agent at ε = 0.1 (and not at ε = 0) to obtain a stochastic final policy, which generates perturbations of the global minimum.2 Ideally, we want to identify several well-performing model topologies, which can then be ensembled to improve prediction performance.
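The ε schedule itself is just a list of (ε, number of unique models) pairs that the sampling loop walks through; the counts below are placeholders (Table 2 gives the actual numbers), so treat this as a shape-only sketch.

# Placeholder schedule: (epsilon, number of unique models sampled at that epsilon).
# The real counts come from Table 2; far more models are trained at epsilon = 1.0.
EPSILON_SCHEDULE = [(1.0, 1500)] + [(e / 10.0, 100) for e in range(9, 0, -1)]

def epsilons():
    for epsilon, n_models in EPSILON_SCHEDULE:
        for _ in range(n_models):
            yield epsilon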
During the entire training process (starting at ε = 1.0), we maintain a replay dictionary which stores (i) the network topology and (ii) prediction performance on a validation set, for all of the sampled
2 ε = 0 indicates a completely deterministic policy. Because we would like to generate several good models for ensembling and analysis, we stop at ε = 0.1, which represents a stochastic final policy.
models. If a model that has already been trained is re-sampled, it is not re-trained, but instead the previously found validation accuracy is presented to the agent. After each model is sampled and trained, the agent randomly samples 100 models from the replay dictionary and applies the Q-value update defined in Equation 3 for all transitions in each sampled sequence. The Q-value update is applied to the transitions in temporally reversed order, which has been shown to speed up Q-values convergence (Lin, 1993).
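A sketch of this replay logic (ours, not the released code): trained topologies are cached so re-sampled models are not retrained, and sampled sequences are replayed with Q-updates applied in temporally reversed order. For brevity the backup uses the Q-value of the next transition that was actually taken, whereas Algorithm 3 maximizes over all actions available in the next state.

import random

def train_or_lookup(sequence, replay_dict, train_fn):
    """sequence: tuple of (state, action) pairs defining a CNN; value: validation accuracy."""
    if sequence not in replay_dict:
        replay_dict[sequence] = train_fn(sequence)   # only train topologies seen for the first time
    return replay_dict[sequence]

def replay_update(Q, replay_dict, alpha=0.01, n_models=100):
    sampled = random.sample(list(replay_dict.items()), min(n_models, len(replay_dict)))
    for sequence, accuracy in sampled:
        # Terminal transition first: its target is the validation accuracy.
        Q[sequence[-1]] = (1 - alpha) * Q.get(sequence[-1], 0.5) + alpha * accuracy
        for i in reversed(range(len(sequence) - 1)):
            target = Q.get(sequence[i + 1], 0.5)
            Q[sequence[i]] = (1 - alpha) * Q.get(sequence[i], 0.5) + alpha * target
    return Q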
5 EXPERIMENT DETAILS
During the model exploration phase, we trained each network topology with a quick and aggressive training scheme. For each experiment, we created a validation set by randomly taking 5,000 samples from the training set such that the resulting class distributions were unchanged. For every network, a dropout layer was added after every two layers. The ith dropout layer, out of a total of n dropout layers, had a dropout probability of i/(2n). Each model was trained for a total of 20 epochs with the Adam optimizer (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999, ε = 10^−8. The batch size was set to 128, and the initial learning rate was set to 0.001. If the model failed to perform better than a random predictor after the first epoch, we reduced the learning rate by a factor of 0.4 and restarted training, for a maximum of 5 restarts. For models that started learning (i.e., performed better than a random predictor), we reduced the learning rate by a factor of 0.2 every 5 epochs. All weights were initialized with Xavier initialization (Glorot & Bengio, 2010). Our experiments using Caffe (Jia et al., 2014) took 8-10 days to complete for each dataset with a hardware setup consisting of 10 NVIDIA GPUs.
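Two of the heuristics above are easy to state directly in code; the snippet below (an illustration, not the training scripts) computes the per-layer dropout probabilities and the learning-rate restart rule.

def dropout_probabilities(n_dropout_layers):
    """The i-th of n dropout layers uses probability i / (2n), for i = 1..n."""
    return [i / (2.0 * n_dropout_layers) for i in range(1, n_dropout_layers + 1)]

def restarted_learning_rate(lr, n_restarts, max_restarts=5, factor=0.4):
    """If a model is no better than a random predictor after epoch 1, shrink the lr and restart."""
    if n_restarts >= max_restarts:
        return None                      # give up on this topology
    return lr * factor

print(dropout_probabilities(4))           # [0.125, 0.25, 0.375, 0.5]
print(restarted_learning_rate(0.001, 0))  # 0.0004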
After the agent completed the ε schedule (Table 2), we selected the top ten models that were found over the course of exploration. These models were then finetuned using a much longer training schedule, and only the top five were used for ensembling. We now provide details of the datasets and the finetuning process.
The Street View House Numbers (SVHN) dataset has 10 classes with a total of 73,257 samples in the original training set, 26,032 samples in the test set, and 531,131 additional samples in the extended training set. During the exploration phase, we only trained with the original training set, using 5,000 random samples as validation. We finetuned the top ten models with the original plus extended training set, by creating preprocessed training and validation sets as described by Lee et al. (2016). Our final learning rate schedule after tuning on validation set was 0.025 for 5 epochs, 0.0125 for 5 epochs, 0.0001 for 20 epochs, and 0.00001 for 10 epochs.
CIFAR-10, the 10 class tiny image dataset, has 50,000 training samples and 10,000 testing samples. During the exploration phase, we took 5,000 random samples from the training set for validation. The maximum layer depth was increased to 18. After the experiment completed, we used the same validation set to tune hyperparameters, resulting in a final training scheme which we ran on the entire training set. In the final training scheme, we set a learning rate of 0.025 for 40 epochs, 0.0125 for 40 epochs, 0.0001 for 160 epochs, and 0.00001 for 60 epochs, with all other parameters unchanged. During this phase, we preprocess using global contrast normalization and use moderate data augmentation, which consists of random mirroring and random translation by up to 5 pixels.
MNIST, the 10 class handwritten digits dataset, has 60,000 training samples and 10,000 testing samples. We preprocessed each image with global mean subtraction. In the final training scheme, we trained each model for 40 epochs and decreased learning rate every 5 epochs by a factor of 0.2. For further tuning details please see Appendix C.
6 RESULTS
Model Selection Analysis: From Q-learning principles, we expect the learning agent to improve in its ability to pick network topologies as ε reduces and the agent enters the exploitation phase. In
Figure 3, we plot the rolling mean of prediction accuracy over 100 models and the mean accuracy of models sampled at different ε values, for the CIFAR-10 and SVHN experiments. The plots show that, while the prediction accuracy remains flat during the exploration phase (ε = 1) as expected, the agent consistently improves in its ability to pick better-performing models as ε reduces from 1 to 0.1. For example, the mean accuracy of models in the SVHN experiment increases from 52.25% at ε = 1 to 88.02% at ε = 0.1. Furthermore, we demonstrate the stability of the Q-learning procedure with 10 independent runs on a subset of the SVHN dataset in Section D.1 of the Appendix. Additional analysis of Q-learning results can be found in Section D.2.
The top models selected by the Q-learning agent vary in the number of parameters but all demonstrate high performance (see Appendix Tables 1-3). For example, the number of parameters for the top five CIFAR-10 models range from 11.26 million to 1.10 million, with only a 2.32% decrease in test error. We find design motifs common to the top hand-crafted network architectures as well. For example, the agent often chooses a layer of type C(N, 1, 1) as the first layer in the network. These layers generate N learnable linear transformations of the input data, which is similar in spirit to preprocessing of input data from RGB to a different color spaces such as YUV, as found in prior work (Sermanet et al., 2012; 2013).
Prediction Performance: We compare the prediction performance of the MetaQNN networks discovered by theQ-learning agent with state-of-the-art methods on three datasets. We report the accuracy of our best model, along with an ensemble of top five models. First, we compare MetaQNN with six existing architectures that are designed with standard convolution, pooling, and fully-connected layers alone, similar to our designs. As seen in Table 3, our top model alone, as well as the committee ensemble of five models, outperforms all similar models. Next, we compare our results with six top networks overall, which contain complex layer types and design ideas, including generalized pooling functions, residual connections, and recurrent modules. Our results are competitive with these methods as well (Table 4). Finally, our method outperforms existing automated network de-
sign methods. MetaQNN obtains an error of 6.92% as compared to 21.2% reported by Bergstra et al. (2011) on CIFAR-10; and it obtains an error of 0.32% as compared to 7.9% reported by Verbancsics & Harguess (2013) on MNIST.
The difference in validation error between the top 10 models for MNIST was very small, so we also created an ensemble with all 10 models. This ensemble achieved a test error of 0.28%—which beats the current state-of-the-art on MNIST without data augmentation.
The best CIFAR-10 model performs 1-2% better than the four next best models, which is why the ensemble accuracy is lower than the best model’s accuracy. We posit that the CIFAR-10 MetaQNN did not have adequate exploration time given the larger state space compared to that of the SVHN experiment, causing it to not find more models with performance similar to the best model. Furthermore, the coarse training scheme could have been not as well suited for CIFAR-10 as it was for SVHN, causing some models to under perform.
Transfer Learning Ability: Network designs such as VGGnet (Simonyan & Zisserman, 2014) can be adopted to solve a variety of computer vision problems. To check if the MetaQNN networks provide similar transfer learning ability, we use the best MetaQNN model on the CIFAR-10 dataset for training other computer vision tasks. The model performs well (Table 5) both when training from random initializations, and finetuning from existing weights.
7 CONCLUDING REMARKS
Neural networks are being used in an increasingly wide variety of domains, which calls for scalable solutions to produce problem-specific model architectures. We take a step towards this goal and show that a meta-modeling approach using reinforcement learning is able to generate tailored CNN designs for different image classification tasks. Our MetaQNN networks outperform previous metamodeling methods as well as hand-crafted networks which use the same types of layers.
While we report results for image classification problems, our method could be applied to different problem settings, including supervised (e.g., classification, regression) and unsupervised (e.g., autoencoders). The MetaQNN method could also aid constraint-based network design, by optimizing parameters such as size, speed, and accuracy. For instance, one could add a threshold in the state-action space barring the agent from creating models larger than the desired limit. In addition,
∗Results in this column obtained with the top MetaQNN architecture for CIFAR-10, trained from random initialization with CIFAR-100 data.
one could modify the reward function to penalize large models for constraining memory or penalize slow forward passes to incentivize quick inference.
There are several future avenues for research in reinforcement learning-driven network design as well. In our current implementation, we use the same set of hyperparameters to train all network topologies during the Q-learning phase and further finetune the hyperparameters for top models selected by the MetaQNN agent. However, our approach could be combined with hyperparameter optimization methods to further automate the network design process. Moreover, we constrict the state-action space using coarse, discrete bins to accelerate convergence. It would be possible to move to larger state-action spaces using methods for Q-function approximation (Bertsekas, 2015; Mnih et al., 2015).
ACKNOWLEDGMENTS
We thank Peter Downs for creating the project website and contributing to illustrations. We acknowledge Center for Bits and Atoms at MIT for their help with computing resources. Finally, we thank members of Camera Culture group at MIT Media Lab for their help and support.
A ALGORITHM
We first describe the main components of the MetaQNN algorithm. Algorithm 1 shows the main loop, where the parameter M would determine how many models to run for a given and the parameter K would determine how many times to sample the replay database to update Q-values on each iteration. The function TRAIN refers to training the specified network and returns a validation accuracy. Algorithm 2 details the method for sampling a new network using the -greedy strategy, where we assume we have a function TRANSITION that returns the next state given a state and action. Finally, Algorithm 3 implements theQ-value update detailed in Equation 3, with discounting factor set to 1, for an entire state sequence in temporally reversed order.
Algorithm 1 Q-learning For CNN Topologies
  Initialize:
    replay memory ← [ ]
    Q ← {(s, u) ∀s ∈ S, u ∈ U(s) : 0.5}
  for episode = 1 to M do
    S, U ← SAMPLE NEW NETWORK(ε, Q)
    accuracy ← TRAIN(S)
    replay memory.append((S, U, accuracy))
    for memory = 1 to K do
      S_SAMPLE, U_SAMPLE, accuracy_SAMPLE ← Uniform{replay memory}
      Q ← UPDATE Q VALUES(Q, S_SAMPLE, U_SAMPLE, accuracy_SAMPLE)
    end for
  end for

Algorithm 2 SAMPLE NEW NETWORK(ε, Q)
  Initialize:
    state sequence S = [s_START]
    action sequence U = [ ]
  while U[−1] ≠ terminate do
    α ∼ Uniform[0, 1)
    if α > ε then
      u = argmax_{u ∈ U(S[−1])} Q[(S[−1], u)]
      s′ = TRANSITION(S[−1], u)
    else
      u ∼ Uniform{U(S[−1])}
      s′ = TRANSITION(S[−1], u)
    end if
    U.append(u)
    if u != terminate then
      S.append(s′)
    end if
  end while
  return S, U

Algorithm 3 UPDATE Q VALUES(Q, S, U, accuracy)
  Q[S[−1], U[−1]] = (1 − α)·Q[S[−1], U[−1]] + α · accuracy
  for i = length(S) − 2 to 0 do
    Q[S[i], U[i]] = (1 − α)·Q[S[i], U[i]] + α · max_{u ∈ U(S[i+1])} Q[S[i+1], u]
  end for
  return Q
B REPRESENTATION SIZE BINNING
As mentioned in Section 4.1 of the main text, we introduce a parameter called representation size to prohibit the agent from taking actions that can reduce the intermediate signal representation to a size that is too small for further processing. However, this process leads to uncertainties in state transitions, as illustrated in Figure A1, which is handled by the standard Q-learning formulation.
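The binning itself is a one-line lookup; the stochasticity arises only because two true representation sizes in the same bin can land in different bins after pooling. The small sketch below is illustrative (the bin boundaries follow the (8, 4] and (4, 1] buckets mentioned in Section 4.2, and the pooled-size formula is the usual one, not quoted from the paper).

def rsize_bin(representation_size):
    """Map the true representation size to one of three buckets."""
    if representation_size > 8:
        return 1
    if representation_size > 4:
        return 2
    return 3

def pooled_size(representation_size, filter_size, stride):
    # Size after a pooling layer; the bin of the result may or may not change,
    # which is what makes the bin-level transition stochastic.
    return (representation_size - filter_size) // stride + 1

print(rsize_bin(18), rsize_bin(pooled_size(18, 2, 2)))   # 1 -> 1 (stays in the same bin)
print(rsize_bin(14), rsize_bin(pooled_size(14, 2, 2)))   # 1 -> 2 (moves to a smaller bin)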
[Figure A1 graphic: panels (a)–(c) showing P(2,2) transitions in which the true R-size (e.g., 18→9, 14→7) may or may not change the R-size bin; see the caption below.]
Figure A1: Representation size binning: In this figure, we show three example state transitions. The true representation size (R-size) parameter is included in the figure to show the true underlying state. Assuming there are two R-size bins, R-size Bin1: [8,∞) and R-size Bin2: (0, 7], Figure A1a shows the case where the initial state is in R-size Bin1 and true representation size is 18. After the agent chooses to pool with a 2×2 filter with stride 2, the true representation size reduces to 9 but the R-size bin does not change. In Figure A1b, the same 2 × 2 pooling layer with stride 2 reduces the actual representation size of 14 to 7, but the bin changes to R-size Bin2. Therefore, in figures A1a and A1b, the agent ends up in different final states, despite originating in the same initial state and choosing the same action. Figure A1c shows that in our state-action space, when the agent takes an action that reduces the representation size, it will have uncertainty in which state it will transition to.
C MNIST EXPERIMENT
We noticed that the final MNIST models were prone to overfitting, so we increased dropout and did a small grid search for the weight regularization parameter. For both tuning and final training, we warmed the model with the learned weights from after the first epoch of initial training. The final models and solvers can be found on our project website https://bowenbaker.github.io/metaqnn/ . Figure A2 shows the Q-Learning performance for the MNIST experiment.
D FURTHER ANALYSIS OF Q-LEARNING
Figure 3 of the main text and Figure A2 show that as the agent begins to exploit, it improves in architecture selection. It is also informative to look at the distribution of models chosen at each ε. Figure A4 gives further insight into the performance achieved at each ε for both experiments.
D.1 Q-LEARNING STABILITY
Because the Q-learning agent explores via a random or semi-random distribution, it is natural to ask whether the agent can consistently improve architecture performance. While the success of the three independent experiments described in the main text alludes to stability, here we present further evidence. We conduct 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset (which corresponds to ∼7,000 training examples). We use a smaller dataset to reduce the computation time of each independent run to 10 GPU-days, as opposed to the 100 GPU-days it would take on the full dataset. As can be seen in Figure A3, the Q-learning procedure with the exploration schedule detailed in Table 2 is fairly stable. The standard deviation at ε = 1 is notably smaller than at other stages, which we attribute to the large difference in the number of samples at each stage.
[Figure A2 graphic: "MNIST Q-Learning Performance" — model accuracy (y-axis: Accuracy, 0.0–1.0) versus iteration (x-axis: 0–3500), with per-ε average-accuracy bars (ε = 1.0 down to 0.1) and a rolling-mean model accuracy curve.]
Figure A2: MNIST Q-Learning Performance. The blue line shows a rolling mean of model accuracy versus iteration, where in each iteration of the algorithm the agent is sampling a model. Each bar (in light blue) marks the average accuracy over all models that were sampled during the exploration phase with the labeled ε. As ε decreases, the average accuracy goes up, demonstrating that the agent learns to select better-performing CNN architectures.
[Figure A3 graphic: panel (a) "Q-Learning Stability (Across 10 Runs)" and panel (b) "Q-Learning Individual Runs", both plotting mean accuracy (y-axis, roughly 0.45–0.80) against ε (x-axis, 1.0 down to 0.1).]
Figure A3: Figure A3a shows the mean model accuracy and standard deviation at each ε over 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset. Figure A3b shows the mean model accuracy at each ε for each independent experiment. Despite some variance due to a randomized exploration strategy, each independent run successfully improves architecture performance.
Furthermore, the best model found during each run had remarkably similar performance with a mean accuracy of 88.25% and standard deviation of 0.58%, which shows that each run successfully found at least one very high performing model. Note that we did not use an extended training schedule to improve performance in this experiment.
D.2 Q-VALUE ANALYSIS
We now analyze the actual Q-values generated by the agent during the training process. The learning agent iteratively updates the Q-values of each path during the ε-greedy exploration. Each Q-value is initialized at 0.5. After the ε schedule is complete, we can analyze the final Q-value associated with each path to gain insights into the layer selection process. In the left column of Figure A5, we plot the average Q-value for each layer type at different layer depths (for both the SVHN and CIFAR-10 datasets). Roughly speaking, a higher Q-value associated with a layer type indicates a higher probability that the agent will pick that layer type. In Figure A5, we observe that, while the average Q-value is higher for convolution and pooling layers at lower layer depths, the Q-values for fully connected and termination layers (softmax and global average pooling) increase as we go deeper into the network. This observation matches with traditional network designs.
We can also plot the average Q-values associated with different layer parameters for further analysis. In the right column of Figure A5, we plot the average Q-values for convolution layers with receptive
field sizes 1, 3, and 5 at different layer depths. The plots show that layers with receptive field size of 5 have a higher Q-value as compared to sizes 1 and 3 as we go deeper into the networks. This indicates that it might be beneficial to use larger receptive field sizes in deeper networks.
In summary, the Q-learning method enables us to perform analysis on the relative benefits of different design parameters of our state space, and possibly gain insights for new CNN designs.
E TOP TOPOLOGIES SELECTED BY ALGORITHM
In Tables A1 through A3, we present the top five model architectures selected with Q-learning for each dataset, along with their prediction error reported on the test set, and their total number of parameters. To download the Caffe solver and prototext files, please visit https://bowenbaker.github.io/metaqnn/ .
Model Architecture | Test Error (%) | # Params (10^6)
[C(512,5,1), C(256,3,1), C(256,5,1), C(256,3,1), P(5,3), C(512,3,1), C(512,5,1), P(2,2), SM(10)] | 6.92 | 11.18
[C(128,1,1), C(512,3,1), C(64,1,1), C(128,3,1), P(2,2), C(256,3,1), P(2,2), C(512,3,1), P(3,2), SM(10)] | 8.78 | 2.17
[C(128,3,1), C(128,1,1), C(512,5,1), P(2,2), C(128,3,1), P(2,2), C(64,3,1), C(64,5,1), SM(10)] | 8.88 | 2.42
[C(256,3,1), C(256,3,1), P(5,3), C(256,1,1), C(128,3,1), P(2,2), C(128,3,1), SM(10)] | 9.24 | 1.10
[C(128,5,1), C(512,3,1), P(2,2), C(128,1,1), C(128,5,1), P(3,2), C(512,3,1), SM(10)] | 11.63 | 1.66
Table A1: Top 5 model architectures: CIFAR-10.
Model Architecture | Test Error (%) | # Params (10^6)
[C(128,3,1), P(2,2), C(64,5,1), C(512,5,1), C(256,3,1), C(512,3,1), P(2,2), C(512,3,1), C(256,5,1), C(256,3,1), C(128,5,1), C(64,3,1), SM(10)] | 2.24 | 9.81
[C(128,1,1), C(256,5,1), C(128,5,1), P(2,2), C(256,5,1), C(256,1,1), C(256,3,1), C(256,3,1), C(256,5,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 2.28 | 10.38
[C(128,5,1), C(128,3,1), C(64,5,1), P(5,3), C(128,3,1), C(512,5,1), C(256,5,1), C(128,5,1), C(128,5,1), C(128,3,1), SM(10)] | 2.32 | 6.83
[C(128,1,1), C(256,5,1), C(128,5,1), C(256,3,1), C(256,5,1), P(2,2), C(128,1,1), C(512,3,1), C(256,5,1), P(2,2), C(64,5,1), C(64,1,1), SM(10)] | 2.35 | 6.99
[C(128,1,1), C(256,5,1), C(128,5,1), C(256,5,1), C(256,5,1), C(256,1,1), P(3,2), C(128,1,1), C(256,5,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 2.36 | 10.05
Table A2: Top 5 model architectures: SVHN. Note that we do not report the best accuracy on test set from the above models in Tables 3 and 4 from the main text. This is because the model that achieved 2.28% on the test set performed the best on the validation set.
Model Architecture | Test Error (%) | # Params (10^6)
[C(64,1,1), C(256,3,1), P(2,2), C(512,3,1), C(256,1,1), P(5,3), C(256,3,1), C(512,3,1), FC(512), SM(10)] | 0.35 | 5.59
[C(128,3,1), C(64,1,1), C(64,3,1), C(64,5,1), P(2,2), C(128,3,1), P(3,2), C(512,3,1), FC(512), FC(128), SM(10)] | 0.38 | 7.43
[C(512,1,1), C(128,3,1), C(128,5,1), C(64,1,1), C(256,5,1), C(64,1,1), P(5,3), C(512,1,1), C(512,3,1), C(256,3,1), C(256,5,1), C(256,5,1), SM(10)] | 0.40 | 8.28
[C(64,3,1), C(128,3,1), C(512,1,1), C(256,1,1), C(256,5,1), C(128,3,1), P(5,3), C(512,1,1), C(512,3,1), C(128,5,1), SM(10)] | 0.41 | 6.27
[C(64,3,1), C(128,1,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1), C(512,5,1), C(128,5,1), C(64,1,1), C(512,5,1), C(256,5,1), C(64,5,1), SM(10)] | 0.43 | 8.10
[C(64,1,1), C(256,5,1), C(256,5,1), C(512,1,1), C(64,3,1), P(5,3), C(256,5,1), C(256,5,1), C(512,5,1), C(64,1,1), C(128,5,1), C(512,5,1), SM(10)] | 0.44 | 9.67
[C(128,3,1), C(512,3,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1), C(64,5,1), C(512,5,1), GAP(10), SM(10)] | 0.44 | 3.52
[C(256,3,1), C(256,5,1), C(512,3,1), C(256,5,1), C(512,1,1), P(5,3), C(256,3,1), C(64,3,1), C(256,5,1), C(512,3,1), C(128,5,1), C(512,5,1), SM(10)] | 0.46 | 12.42
[C(512,5,1), C(128,5,1), C(128,5,1), C(128,3,1), C(256,3,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 0.55 | 7.25
[C(64,5,1), C(512,5,1), P(3,2), C(256,5,1), C(256,3,1), C(256,3,1), C(128,1,1), C(256,3,1), C(256,5,1), C(64,1,1), C(256,3,1), C(64,3,1), SM(10)] | 0.56 | 7.55
Table A3: Top 10 model architectures: MNIST. We report the top 10 models for MNIST because we included all 10 in our final ensemble. Note that we do not report the best accuracy on test set from the above models in Tables 3 and 4 from the main text. This is because the model that achieved 0.44% on the test set performed the best on the validation set.
[Figure A4 graphic: panels (a)–(f) showing "Model Accuracy Distribution" histograms (% of models versus validation accuracy) for SVHN, CIFAR-10, and MNIST, broken down by ε (all ε values from 1.0 to 0.1, and ε = 1.0 versus ε = 0.1).]
Figure A4: Accuracy Distribution versus ε: Figures A4a, A4c, and A4e show the accuracy distribution for each ε for the SVHN, CIFAR-10, and MNIST experiments, respectively. Figures A4b, A4d, and A4f show the accuracy distributions for the initial ε = 1 and the final ε = 0.1. One can see that the accuracy distribution becomes much more peaked in the high accuracy ranges at small ε for each experiment.
[Figure A5 graphic: panels (a)–(f) plotting "Average Q-Value vs. Layer Depth" per layer type (convolution, fully connected, pooling, global average pooling, softmax) and "Average Q-Value vs. Layer Depth for Convolution Layers" per receptive field size (1, 3, 5), for the SVHN, CIFAR-10, and MNIST experiments.]
Figure A5: Average Q-Value versus Layer Depth for different layer types are shown in the left column. Average Q-Value versus Layer Depth for different receptive field sizes of the convolution layer are shown in the right column. | 1. What is the focus of the paper, and what are its contributions to the field of neural network architecture design?
2. What are the strengths of the proposed approach, particularly in terms of its ability to produce promising results on various datasets?
3. What are the limitations of the paper, especially regarding the narrow scope of architecture design choices and the uncertainty about its applicability in larger state-space settings? | Review | Review
This paper introduces a reinforcement learning framework for designing a neural network architecture. For each time-step, the agent picks a new layer type with corresponding layer parameters (e.g., #filters). In order to reduce the size of state-action space, they used a small set of design choices.
Strengths:
- A novel approach for automatic design of neural network architectures.
- Shows quite promising results on several datasets (MNIST, CIFAR-10).
Weakness:
- Limited architecture design choices due to many prior assumptions (e.g., a set of possible number of convolution filters, at most 2 fully-connected layers, maximum depth, hard-coded dropout, etc.)
- The method is demonstrated in tabular Q-learning setting, but it is unclear whether the proposed method would work in a large state-action space.
Overall, this is an interesting and novel approach for neural network architecture design, and it seems to be worth publication despite some weaknesses. |
ICLR | Title
Designing Neural Network Architectures using Reinforcement Learning
Abstract
At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.
1 INTRODUCTION
Deep convolutional neural networks (CNNs) have seen great success in the past few years on a variety of machine learning problems (LeCun et al., 2015). A typical CNN architecture consists of several convolution, pooling, and fully connected layers. While constructing a CNN, a network designer has to make numerous design choices: the number of layers of each type, the ordering of layers, and the hyperparameters for each type of layer, e.g., the receptive field size, stride, and number of receptive fields for a convolution layer. The number of possible choices makes the design space of CNN architectures extremely large and hence, infeasible for an exhaustive manual search. While there has been some work (Pinto et al., 2009; Bergstra et al., 2013; Domhan et al., 2015) on automated or computer-aided neural network design, new CNN architectures or network design elements are still primarily developed by researchers using new theoretical insights or intuition gained from experimentation.
In this paper, we seek to automate the process of CNN architecture selection through a meta-modeling procedure based on reinforcement learning. We construct a novel Q-learning agent whose goal is to discover CNN architectures that perform well on a given machine learning task with no human intervention. The learning agent is given the task of sequentially picking layers of a CNN model. By discretizing and limiting the layer parameters to choose from, the agent is left with a finite but large space of model architectures to search from. The agent learns through random exploration and slowly begins to exploit its findings to select higher performing models using the ε-greedy strategy (Mnih et al., 2015). The agent receives the validation accuracy on the given machine learning task as the reward for selecting an architecture. We expedite the learning process through repeated memory sampling using experience replay (Lin, 1993). We refer to this Q-learning based meta-modeling method as MetaQNN, which is summarized in Figure 1.1
We conduct experiments with a space of model architectures consisting of only standard convolution, pooling, and fully connected layers using three standard image classification datasets: CIFAR-10,
1For more information, model files, and code, please visit https://bowenbaker.github.io/metaqnn/
SVHN, and MNIST. The learning agent discovers CNN architectures that beat all existing networks designed only with the same layer types (e.g., Springenberg et al. (2014); Srivastava et al. (2015)). In addition, their performance is competitive against network designs that include complex layer types and training procedures (e.g., Clevert et al. (2015); Lee et al. (2016)). Finally, the MetaQNN selected models comfortably outperform previous automated network design methods (Stanley & Miikkulainen, 2002; Bergstra et al., 2013). The top network designs discovered by the agent on one dataset are also competitive when trained on other datasets, indicating that they are suited for transfer learning tasks. Moreover, we can generate not just one, but several varied, well-performing network designs, which can be ensembled to further boost the prediction performance.
2 RELATED WORK
Designing neural network architectures: Research on automating neural network design goes back to the 1980s when genetic algorithm-based approaches were proposed to find both architectures and weights (Schaffer et al., 1992). However, to the best of our knowledge, networks designed with genetic algorithms, such as those generated with the NEAT algorithm (Stanley & Miikkulainen, 2002), have been unable to match the performance of hand-crafted networks on standard benchmarks (Verbancsics & Harguess, 2013). Other biologically inspired ideas have also been explored; motivated by screening methods in genetics, Pinto et al. (2009) proposed a high-throughput network selection approach where they randomly sample thousands of architectures and choose promising ones for further training. In recent work, Saxena & Verbeek (2016) propose to sidestep the architecture selection process through densely connected networks of layers, which come closer to the performance of hand-crafted networks.
Bayesian optimization has also been used (Shahriari et al., 2016) for automatic selection of network architectures (Bergstra et al., 2013; Domhan et al., 2015) and hyperparameters (Snoek et al., 2012; Swersky et al., 2013). Notably, Bergstra et al. (2013) proposed a meta-modeling approach based on Tree of Parzen Estimators (TPE) (Bergstra et al., 2011) to choose both the type of layers and hyperparameters of feed-forward networks; however, they fail to match the performance of handcrafted networks.
Reinforcement Learning: Recently there has been much work at the intersection of reinforcement learning and deep learning. For instance, methods using CNNs to approximate theQ-learning utility function (Watkins, 1989) have been successful in game-playing agents (Mnih et al., 2015; Silver et al., 2016) and robotic control (Lillicrap et al., 2015; Levine et al., 2016). These methods rely on phases of exploration, where the agent tries to learn about its environment through sampling, and exploitation, where the agent uses what it learned about the environment to find better paths. In traditional reinforcement learning settings, over-exploration can lead to slow convergence times, yet over-exploitation can lead to convergence to local minima (Kaelbling et al., 1996). However, in the case of large or continuous state spaces, the -greedy strategy of learning has been empirically shown to converge (Vermorel & Mohri, 2005). Finally, when the state space is large or exploration is costly,
the experience replay technique (Lin, 1993) has proved useful in experimental settings (Adam et al., 2012; Mnih et al., 2015). We incorporate these techniques—Q-learning, the ε-greedy strategy and experience replay—in our algorithm design.
3 BACKGROUND
Our method relies on Q-learning, a type of reinforcement learning. We now summarize the theoretical formulation of Q-learning, as adopted to our problem. Consider the task of teaching an agent to find optimal paths as a Markov Decision Process (MDP) in a finite-horizon environment. Constraining the environment to be finite-horizon ensures that the agent will deterministically terminate in a finite number of time steps. In addition, we restrict the environment to have a discrete and finite state space S as well as action space U . For any state si ∈ S , there is a finite set of actions, U(si) ⊆ U , that the agent can choose from. In an environment with stochastic transitions, an agent in state si taking some action u ∈ U(si) will transition to state sj with probability ps′|s,u(sj |si, u), which may be unknown to the agent. At each time step t, the agent is given a reward rt, dependent on the transition from state s to s′ and action u. rt may also be stochastic according to a distribution pr|s′,s,u. The agent’s goal is to maximize the total expected reward over all possible trajectories, i.e., maxTi∈T RTi , where the total expected reward for a trajectory Ti is
R_{T_i} = \sum_{(s,u,s') \in T_i} \mathbb{E}_{r|s,u,s'}\left[ r \mid s, u, s' \right].    (1)
Though we limit the agent to a finite state and action space, there is still a combinatorially large number of trajectories, which motivates the use of reinforcement learning. We define the maximization problem recursively in terms of subproblems as follows. For any state s_i ∈ S and subsequent action u ∈ U(s_i), we define the maximum total expected reward to be Q^*(s_i, u). Q^*(·) is known as the action-value function and the individual Q^*(s_i, u) are known as Q-values. The recursive maximization equation, which is known as Bellman's Equation, can be written as
Q^*(s_i, u) = \mathbb{E}_{s_j|s_i,u}\left[ \mathbb{E}_{r|s_i,u,s_j}\left[ r \mid s_i, u, s_j \right] + \gamma \max_{u' \in U(s_j)} Q^*(s_j, u') \right].    (2)
In many cases, it is impossible to analytically solve Bellman’s Equation (Bertsekas, 2015), but it can be formulated as an iterative update
Q_{t+1}(s_i, u) = (1 - \alpha)\, Q_t(s_i, u) + \alpha \left[ r_t + \gamma \max_{u' \in U(s_j)} Q_t(s_j, u') \right].    (3)
Equation 3 is the simplest form of Q-learning proposed by Watkins (1989). For well-formulated problems, lim_{t→∞} Q_t(s, u) = Q^*(s, u), as long as each transition is sampled infinitely many times (Bertsekas, 2015). The update equation has two parameters: (i) α is the Q-learning rate, which determines the weight given to new information over old information, and (ii) γ is the discount factor, which determines the weight given to short-term rewards over future rewards. The Q-learning algorithm is model-free, in that the learning agent can solve the task without ever explicitly constructing an estimate of environmental dynamics. In addition, Q-learning is off-policy, meaning it can learn about optimal policies while exploring via a non-optimal behavioral distribution, i.e., the distribution by which the agent explores its environment.
We choose the behavior distribution using an ε-greedy strategy (Mnih et al., 2015). With this strategy, a random action is taken with probability ε and the greedy action, max_{u ∈ U(s_i)} Q_t(s_i, u), is chosen with probability 1 − ε. We anneal ε from 1 → 0 such that the agent begins in an exploration phase and slowly starts moving towards the exploitation phase. In addition, when the exploration cost is large (which is true for our problem setting), it is beneficial to use the experience replay technique for faster convergence (Lin, 1992). In experience replay, the learning agent is provided with a memory of its past explored paths and rewards. At a given interval, the agent samples from the memory and updates its Q-values via Equation 3.
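As a concrete illustration of the pieces just described, the following Python sketch implements a single ε-greedy action choice and the one-step update of Equation 3 on a dictionary of Q-values. The state and action objects and the helper names are assumptions made for the sketch; the learning rate, discount factor, and 0.5 initialization follow the values given later in Section 4.3 and Appendix A.

import random

def epsilon_greedy_action(Q, state, actions, epsilon, init_q=0.5):
    # With probability epsilon take a random action; otherwise act greedily on Q.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda u: Q.get((state, u), init_q))

def q_update(Q, state, action, reward, next_state, next_actions,
             alpha=0.01, gamma=1.0, init_q=0.5):
    # Equation 3: blend the old estimate with the reward plus the discounted
    # best Q-value achievable from the next state.
    best_next = max((Q.get((next_state, u), init_q) for u in next_actions), default=0.0)
    old = Q.get((state, action), init_q)
    Q[(state, action)] = (1 - alpha) * old + alpha * (reward + gamma * best_next)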
4 DESIGNING NEURAL NETWORK ARCHITECTURES WITH Q-LEARNING
We consider the task of training a learning agent to sequentially choose neural network layers. Figure 2 shows feasible state and action spaces (a) and a potential trajectory the agent may take along with the CNN architecture defined by this trajectory (b). We model the layer selection process as a Markov Decision Process with the assumption that a well-performing layer in one network should
also perform well in another network. We make this assumption based on the hierarchical nature of the feature representations learned by neural networks with many hidden layers (LeCun et al., 2015). The agent sequentially selects layers via the ε-greedy strategy until it reaches a termination state. The CNN architecture defined by the agent's path is trained on the chosen learning problem, and the agent is given a reward equal to the validation accuracy. The validation accuracy and architecture description are stored in a replay memory, and experiences are sampled periodically from the replay memory to update Q-values via Equation 3. The agent follows an ε schedule which determines its shift from exploration to exploitation.
Our method requires three main design choices: (i) reducing CNN layer definitions to simple state tuples, (ii) defining a set of actions the agent may take, i.e., the set of layers the agent may pick next given its current state, and (iii) balancing the size of the state-action space—and correspondingly, the model capacity—with the amount of exploration needed by the agent to converge. We now describe the design choices and the learning process in detail.
4.1 THE STATE SPACE
Each state is defined as a tuple of all relevant layer parameters. We allow five different types of layers: convolution (C), pooling (P), fully connected (FC), global average pooling (GAP), and softmax (SM), though the general method is not limited to this set. Table 1 shows the relevant parameters for each layer type and also the discretization we chose for each parameter. Each layer has a parameter layer depth (shown as Layer 1, 2, ... in Figure 2). Adding layer depth to the state space allows us to constrict the action space such that the state-action graph is directed and acyclic (DAG) and also allows us to specify a maximum number of layers the agent may select before terminating.
Each layer type also has a parameter called representation size (R-size). Convolutional nets progressively compress the representation of the original signal through pooling and convolution. The presence of these layers in our state space may lead the agent on a trajectory where the intermediate signal representation gets reduced to a size that is too small for further processing. For example, five 2× 2 pooling layers each with stride 2 will reduce an image of initial size 32× 32 to size 1× 1. At this stage, further pooling, or convolution with receptive field size greater than 1, would be meaningless and degenerate. To avoid such scenarios, we add the R-size parameter to the state tuple s, which allows us to restrict actions from states with R-size n to those that have a receptive field size less than or equal to n. To further constrict the state space, we chose to bin the representation sizes into three discrete buckets. However, binning adds uncertainty to the state transitions: depending on the true underlying representation size, a pooling layer may or may not change the R-size bin. As a result, the action of pooling can lead to two different states, which we model as stochasticity in state transitions. Please see Figure A1 in appendix for an illustrated example.
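One way to realize such a state tuple in code is sketched below: each state records the layer type, its depth, its type-specific hyperparameters, and the binned representation size. The field names and the three-bucket thresholds are illustrative assumptions (the buckets follow the bins referred to in Section 4.2), not a literal transcription of Table 1.

from collections import namedtuple

# Hypothetical state encoding: layer type, depth, hyperparameters, and R-size bin.
LayerState = namedtuple("LayerState", ["layer_type", "depth", "params", "r_size_bin"])

def r_size_bin(representation_size):
    # Three illustrative buckets; because only the bin is observed, pooling may or
    # may not change the bin, which makes some state transitions stochastic.
    if representation_size >= 8:
        return 0   # sizes in [8, inf)
    elif representation_size >= 4:
        return 1   # sizes in (8, 4]
    else:
        return 2   # sizes in (4, 1]

start = LayerState("start", 0, None, r_size_bin(32))
conv = LayerState("conv", 1, {"filters": 64, "receptive_field": 3, "stride": 1}, r_size_bin(32))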
4.2 THE ACTION SPACE
We restrict the agent from taking certain actions to both limit the state-action space and make learning tractable. First, we allow the agent to terminate a path at any point, i.e., it may choose a termination state from any non-termination state. In addition, we only allow transitions from a state with layer depth i to a state with layer depth i + 1, which ensures that there are no loops in the graph. This constraint ensures that the state-action graph is always a DAG. Any state at the maximum layer depth, as prescribed in Table 1, may only transition to a termination layer.
Next, we limit the number of fully connected (FC) layers to be at maximum two, because a large number of FC layers can lead to too many learnable parameters. The agent at a state with type FC may transition to another state with type FC if and only if the number of consecutive FC states is less than the maximum allowed. Furthermore, a state s of type FC with number of neurons d may only transition to either a termination state or a state s′ of type FC with number of neurons d′ ≤ d. An agent at a state of type convolution (C) may transition to a state with any other layer type. An agent at a state with layer type pooling (P) may transition to a state with any other layer type other than another P state, because consecutive pooling layers are equivalent to a single, larger pooling layer which could lie outside of our chosen state space. Furthermore, only states with representation size in bins (8, 4] and (4, 1] may transition to an FC layer, which ensures that the number of weights does not become unreasonably large. Note that a majority of these constraints are in place to enable faster convergence on our limited hardware (see Section 5) and are not a limitation of the method in itself.
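These rules can be summarized as a filter over candidate next layers given the current state. The sketch below encodes a few of them (termination always available, depth incremented by one, no consecutive pooling layers, at most two consecutive FC layers with non-increasing widths, FC layers reachable only from small representation-size bins). It is an illustrative simplification of the full transition table; the dictionary keys are assumed names, and the default max_depth uses the CIFAR-10 value from Section 5 rather than the general setting of Table 1.

def allowed_next_layers(state, candidates, max_depth=18, max_consecutive_fc=2):
    # state and candidates are dicts with assumed keys: "type", "depth", "neurons",
    # "consecutive_fc", and "r_size_bin". Termination is always a legal action.
    allowed = [{"type": "terminate", "depth": state["depth"] + 1}]
    if state["depth"] + 1 > max_depth:
        return allowed  # at the maximum depth, only termination is permitted
    for c in candidates:
        if c["type"] == "pool" and state["type"] == "pool":
            continue  # consecutive pooling layers are disallowed
        if c["type"] == "fc":
            if state.get("consecutive_fc", 0) >= max_consecutive_fc:
                continue  # at most two FC layers in a row
            if state["type"] == "fc" and c["neurons"] > state["neurons"]:
                continue  # FC widths must be non-increasing
            if state["r_size_bin"] == 0:
                continue  # only small representation-size bins may feed an FC layer
        allowed.append(dict(c, depth=state["depth"] + 1))
    return allowed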
4.3 Q-LEARNING TRAINING PROCEDURE
For the iterative Q-learning updates (Equation 3), we set the Q-learning rate (α) to 0.01. In addition, we set the discount factor (γ) to 1 to not over-prioritize short-term rewards. We decrease ε from 1.0 to 0.1 in steps, where the step-size is defined by the number of unique models trained (Table 2). At ε = 1.0, the agent samples CNN architectures via a random walk along a uniformly weighted Markov chain. Every topology sampled by the agent is trained using the procedure described in Section 5, and the prediction performance of this network topology on the validation set is recorded. We train a larger number of models at ε = 1.0 as compared to other values of ε to ensure that the agent has adequate time to explore before it begins to exploit. We stop the agent at ε = 0.1 (and not at ε = 0) to obtain a stochastic final policy, which generates perturbations of the global minimum.2 Ideally, we want to identify several well-performing model topologies, which can then be ensembled to improve prediction performance.
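The annealing itself can be driven by a simple lookup from ε to the number of unique models to sample before stepping ε down. The sketch below shows only the mechanism; the per-ε counts are placeholder values, not the actual schedule of Table 2.

# Placeholder schedule: (epsilon, number of unique models to train at that epsilon).
# The real counts come from Table 2 and are deliberately largest at epsilon = 1.0.
epsilon_schedule = [(1.0, 1000), (0.9, 100), (0.8, 100), (0.7, 100), (0.6, 100),
                    (0.5, 100), (0.4, 100), (0.3, 100), (0.2, 100), (0.1, 100)]

def current_epsilon(models_trained, schedule=epsilon_schedule):
    # Return the epsilon in effect after a given number of unique models.
    total = 0
    for eps, count in schedule:
        total += count
        if models_trained < total:
            return eps
    return schedule[-1][0]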
During the entire training process (starting at ε = 1.0), we maintain a replay dictionary which stores (i) the network topology and (ii) prediction performance on a validation set, for all of the sampled models. If a model that has already been trained is re-sampled, it is not re-trained; instead, the previously found validation accuracy is presented to the agent. After each model is sampled and trained, the agent randomly samples 100 models from the replay dictionary and applies the Q-value update defined in Equation 3 for all transitions in each sampled sequence. The Q-value update is applied to the transitions in temporally reversed order, which has been shown to speed up Q-value convergence (Lin, 1993).
2 ε = 0 indicates a completely deterministic policy. Because we would like to generate several good models for ensembling and analysis, we stop at ε = 0.1, which represents a stochastic final policy.
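The replay bookkeeping described above can be kept as a dictionary keyed by a canonical architecture description, as in the sketch below. The train_fn hook, the actions_fn hook that enumerates the actions available at a state, and the assumption that states are hashable are all illustrative; the α value and the 0.5 initialization follow Section 4.3 and Appendix A.

import random

def record_or_lookup(replay, arch_key, states, actions, train_fn):
    # Re-sampled architectures are not retrained; the stored accuracy is reused.
    if arch_key not in replay:
        accuracy = train_fn(arch_key)
        replay[arch_key] = (states, actions, accuracy)
    return replay[arch_key][2]

def replay_updates(Q, replay, actions_fn, alpha=0.01, num_samples=100, init_q=0.5):
    # Sample stored sequences and apply the update of Equation 3 in temporally
    # reversed order; intermediate rewards are zero and gamma = 1, so only the
    # terminal transition sees the validation accuracy directly.
    sequences = random.sample(list(replay.values()), min(num_samples, len(replay)))
    for states, actions, accuracy in sequences:
        key = (states[-1], actions[-1])
        Q[key] = (1 - alpha) * Q.get(key, init_q) + alpha * accuracy
        for i in range(len(states) - 2, -1, -1):
            best_next = max(Q.get((states[i + 1], u), init_q) for u in actions_fn(states[i + 1]))
            key = (states[i], actions[i])
            Q[key] = (1 - alpha) * Q.get(key, init_q) + alpha * best_next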
5 EXPERIMENT DETAILS
During the model exploration phase, we trained each network topology with a quick and aggressive training scheme. For each experiment, we created a validation set by randomly taking 5,000 samples from the training set such that the resulting class distributions were unchanged. For every network, a dropout layer was added after every two layers. The ith dropout layer, out of a total of n dropout layers, had a dropout probability of i/(2n). Each model was trained for a total of 20 epochs with the Adam optimizer (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999, ε = 10^-8. The batch size was set to 128, and the initial learning rate was set to 0.001. If the model failed to perform better than a random predictor after the first epoch, we reduced the learning rate by a factor of 0.4 and restarted training, for a maximum of 5 restarts. For models that started learning (i.e., performed better than a random predictor), we reduced the learning rate by a factor of 0.2 every 5 epochs. All weights were initialized with Xavier initialization (Glorot & Bengio, 2010). Our experiments using Caffe (Jia et al., 2014) took 8-10 days to complete for each dataset with a hardware setup consisting of 10 NVIDIA GPUs.
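A small amount of configuration logic captures the exploration-phase heuristics just described. The sketch below shows the dropout schedule and the restart rule; train_one_epoch is an assumed hook (the actual experiments were run through Caffe solver and prototxt files), and the better-than-chance test assumes a 10-class problem as in our datasets.

def dropout_probabilities(n):
    # The i-th of n dropout layers uses dropout probability i / (2n).
    return [i / (2.0 * n) for i in range(1, n + 1)]

def explore_phase_lr(train_one_epoch, base_lr=0.001, max_restarts=5, restart_decay=0.4):
    # If the model is no better than a random predictor after its first epoch,
    # lower the learning rate and restart, up to max_restarts times.
    lr = base_lr
    for _ in range(max_restarts + 1):
        if train_one_epoch(lr) > 1.0 / 10:
            return lr
        lr *= restart_decay
    return lr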
After the agent completed the ε schedule (Table 2), we selected the top ten models that were found over the course of exploration. These models were then finetuned using a much longer training schedule, and only the top five were used for ensembling. We now provide details of the datasets and the finetuning process.
The Street View House Numbers (SVHN) dataset has 10 classes with a total of 73,257 samples in the original training set, 26,032 samples in the test set, and 531,131 additional samples in the extended training set. During the exploration phase, we only trained with the original training set, using 5,000 random samples as validation. We finetuned the top ten models with the original plus extended training set, by creating preprocessed training and validation sets as described by Lee et al. (2016). Our final learning rate schedule after tuning on validation set was 0.025 for 5 epochs, 0.0125 for 5 epochs, 0.0001 for 20 epochs, and 0.00001 for 10 epochs.
CIFAR-10, the 10 class tiny image dataset, has 50,000 training samples and 10,000 testing samples. During the exploration phase, we took 5,000 random samples from the training set for validation. The maximum layer depth was increased to 18. After the experiment completed, we used the same validation set to tune hyperparameters, resulting in a final training scheme which we ran on the entire training set. In the final training scheme, we set a learning rate of 0.025 for 40 epochs, 0.0125 for 40 epochs, 0.0001 for 160 epochs, and 0.00001 for 60 epochs, with all other parameters unchanged. During this phase, we preprocess using global contrast normalization and use moderate data augmentation, which consists of random mirroring and random translation by up to 5 pixels.
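Such piecewise-constant schedules are easy to express as a step function over the epoch index. The snippet below encodes the CIFAR-10 final training schedule quoted above; it is a sketch of the schedule logic only, not the Caffe solver configuration that was actually used.

# (learning rate, number of epochs) pairs for the final CIFAR-10 training scheme.
cifar10_schedule = [(0.025, 40), (0.0125, 40), (0.0001, 160), (0.00001, 60)]

def learning_rate_at(epoch, schedule=cifar10_schedule):
    # Return the learning rate in effect at a given (zero-indexed) epoch.
    boundary = 0
    for lr, num_epochs in schedule:
        boundary += num_epochs
        if epoch < boundary:
            return lr
    return schedule[-1][0]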
MNIST, the 10 class handwritten digits dataset, has 60,000 training samples and 10,000 testing samples. We preprocessed each image with global mean subtraction. In the final training scheme, we trained each model for 40 epochs and decreased learning rate every 5 epochs by a factor of 0.2. For further tuning details please see Appendix C.
6 RESULTS
Model Selection Analysis: From Q-learning principles, we expect the learning agent to improve in its ability to pick network topologies as ε reduces and the agent enters the exploitation phase. In Figure 3, we plot the rolling mean of prediction accuracy over 100 models and the mean accuracy of models sampled at different ε values, for the CIFAR-10 and SVHN experiments. The plots show that, while the prediction accuracy remains flat during the exploration phase (ε = 1) as expected, the agent consistently improves in its ability to pick better-performing models as ε reduces from 1 to 0.1. For example, the mean accuracy of models in the SVHN experiment increases from 52.25% at ε = 1 to 88.02% at ε = 0.1. Furthermore, we demonstrate the stability of the Q-learning procedure with 10 independent runs on a subset of the SVHN dataset in Section D.1 of the Appendix. Additional analysis of Q-learning results can be found in Section D.2.
The top models selected by the Q-learning agent vary in the number of parameters but all demonstrate high performance (see Appendix Tables A1-A3). For example, the number of parameters for the top five CIFAR-10 models ranges from 11.26 million to 1.10 million, with only a 2.32% difference in test error. We find design motifs common to the top hand-crafted network architectures as well. For example, the agent often chooses a layer of type C(N, 1, 1) as the first layer in the network. These layers generate N learnable linear transformations of the input data, which is similar in spirit to preprocessing of input data from RGB to a different color space, such as YUV, as found in prior work (Sermanet et al., 2012; 2013).
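To see why a C(N, 1, 1) layer behaves like a learned color-space transform, note that a 1×1 convolution applies the same linear map to every pixel's channel vector. The short NumPy sketch below makes this explicit; the shapes and random weights are illustrative only.

import numpy as np

def one_by_one_conv(image, weights):
    # image: H x W x C_in, weights: C_out x C_in.
    # Every pixel's channel vector is transformed by the same learned matrix,
    # analogous to mapping RGB into a different (learned) color space.
    return np.einsum("hwc,oc->hwo", image, weights)

rgb = np.random.rand(32, 32, 3)
transform = np.random.rand(16, 3)           # N = 16 output channels
features = one_by_one_conv(rgb, transform)  # shape (32, 32, 16)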
Prediction Performance: We compare the prediction performance of the MetaQNN networks discovered by the Q-learning agent with state-of-the-art methods on three datasets. We report the accuracy of our best model, along with an ensemble of top five models. First, we compare MetaQNN with six existing architectures that are designed with standard convolution, pooling, and fully-connected layers alone, similar to our designs. As seen in Table 3, our top model alone, as well as the committee ensemble of five models, outperforms all similar models. Next, we compare our results with six top networks overall, which contain complex layer types and design ideas, including generalized pooling functions, residual connections, and recurrent modules. Our results are competitive with these methods as well (Table 4). Finally, our method outperforms existing automated network design methods. MetaQNN obtains an error of 6.92% as compared to 21.2% reported by Bergstra et al. (2011) on CIFAR-10; and it obtains an error of 0.32% as compared to 7.9% reported by Verbancsics & Harguess (2013) on MNIST.
The difference in validation error between the top 10 models for MNIST was very small, so we also created an ensemble with all 10 models. This ensemble achieved a test error of 0.28%—which beats the current state-of-the-art on MNIST without data augmentation.
The best CIFAR-10 model performs 1-2% better than the four next best models, which is why the ensemble accuracy is lower than the best model's accuracy. We posit that the CIFAR-10 MetaQNN agent did not have adequate exploration time given the larger state space compared to that of the SVHN experiment, causing it to not find more models with performance similar to the best model. Furthermore, the coarse training scheme may not have been as well suited for CIFAR-10 as it was for SVHN, causing some models to underperform.
Transfer Learning Ability: Network designs such as VGGnet (Simonyan & Zisserman, 2014) can be adapted to solve a variety of computer vision problems. To check whether the MetaQNN networks provide similar transfer learning ability, we use the best MetaQNN model on the CIFAR-10 dataset for training other computer vision tasks. The model performs well (Table 5) both when training from random initializations and when finetuning from existing weights.
7 CONCLUDING REMARKS
Neural networks are being used in an increasingly wide variety of domains, which calls for scalable solutions to produce problem-specific model architectures. We take a step towards this goal and show that a meta-modeling approach using reinforcement learning is able to generate tailored CNN designs for different image classification tasks. Our MetaQNN networks outperform previous meta-modeling methods as well as hand-crafted networks which use the same types of layers.
While we report results for image classification problems, our method could be applied to different problem settings, including supervised (e.g., classification, regression) and unsupervised (e.g., autoencoders) learning. The MetaQNN method could also aid constraint-based network design, by optimizing parameters such as size, speed, and accuracy. For instance, one could add a threshold in the state-action space barring the agent from creating models larger than the desired limit. In addition, one could modify the reward function to penalize large models (to constrain memory) or to penalize slow forward passes (to incentivize quick inference).
∗ Results in this column were obtained with the top MetaQNN architecture for CIFAR-10, trained from random initialization with CIFAR-100 data (footnote to Table 5).
There are several future avenues for research in reinforcement learning-driven network design as well. In our current implementation, we use the same set of hyperparameters to train all network topologies during the Q-learning phase and further finetune the hyperparameters for top models selected by the MetaQNN agent. However, our approach could be combined with hyperparameter optimization methods to further automate the network design process. Moreover, we constrict the state-action space using coarse, discrete bins to accelerate convergence. It would be possible to move to larger state-action spaces using methods for Q-function approximation (Bertsekas, 2015; Mnih et al., 2015).
ACKNOWLEDGMENTS
We thank Peter Downs for creating the project website and contributing to illustrations. We acknowledge Center for Bits and Atoms at MIT for their help with computing resources. Finally, we thank members of Camera Culture group at MIT Media Lab for their help and support.
A ALGORITHM
We first describe the main components of the MetaQNN algorithm. Algorithm 1 shows the main loop, where the parameter M determines how many models to run for a given ε and the parameter K determines how many times to sample the replay database to update Q-values on each iteration. The function TRAIN refers to training the specified network and returns a validation accuracy. Algorithm 2 details the method for sampling a new network using the ε-greedy strategy, where we assume we have a function TRANSITION that returns the next state given a state and action. Finally, Algorithm 3 implements the Q-value update detailed in Equation 3, with the discount factor set to 1, for an entire state sequence in temporally reversed order.
Algorithm 1 Q-learning For CNN Topologies
Initialize:
    replay memory ← [ ]
    Q ← {(s, u) : 0.5, ∀ s ∈ S, u ∈ U(s)}
for episode = 1 to M do
    S, U ← SAMPLE NEW NETWORK(ε, Q)
    accuracy ← TRAIN(S)
    replay memory.append((S, U, accuracy))
    for memory = 1 to K do
        S_sample, U_sample, accuracy_sample ← Uniform{replay memory}
        Q ← UPDATE Q VALUES(Q, S_sample, U_sample, accuracy_sample)
    end for
end for

Algorithm 2 SAMPLE NEW NETWORK(ε, Q)
Initialize:
    state sequence S = [s_START]
    action sequence U = [ ]
while U[−1] ≠ terminate do
    α ∼ Uniform[0, 1)
    if α > ε then
        u = argmax_{u ∈ U(S[−1])} Q[(S[−1], u)]
        s′ = TRANSITION(S[−1], u)
    else
        u ∼ Uniform{U(S[−1])}
        s′ = TRANSITION(S[−1], u)
    end if
    U.append(u)
    if u ≠ terminate then
        S.append(s′)
    end if
end while
return S, U

Algorithm 3 UPDATE Q VALUES(Q, S, U, accuracy)
Q[S[−1], U[−1]] = (1 − α) · Q[S[−1], U[−1]] + α · accuracy
for i = length(S) − 2 to 0 do
    Q[S[i], U[i]] = (1 − α) · Q[S[i], U[i]] + α · max_{u ∈ U(S[i+1])} Q[S[i+1], u]
end for
return Q
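For readers who prefer an executable sketch, the following is a small Python rendering of Algorithm 2. The allowed_actions and transition functions are assumed hooks for the state-action space of Section 4, states are assumed to be hashable so they can index the Q-value dictionary, and the string "terminate" stands in for the termination action.

import random

def sample_new_network(epsilon, Q, start_state, allowed_actions, transition, init_q=0.5):
    # Walk the state-action graph with an epsilon-greedy policy until termination.
    states, actions = [start_state], []
    while not actions or actions[-1] != "terminate":
        current = states[-1]
        candidates = allowed_actions(current)
        if random.random() > epsilon:
            # Greedy action: highest current Q-value (unseen pairs default to 0.5).
            u = max(candidates, key=lambda a: Q.get((current, a), init_q))
        else:
            u = random.choice(candidates)
        actions.append(u)
        if u != "terminate":
            states.append(transition(current, u))
    return states, actions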
B REPRESENTATION SIZE BINNING
As mentioned in Section 4.1 of the main text, we introduce a parameter called representation size to prohibit the agent from taking actions that can reduce the intermediate signal representation to a size that is too small for further processing. However, this process leads to uncertainties in state transitions, as illustrated in Figure A1; this stochasticity is handled by the standard Q-learning formulation.
Figure A1: Representation size binning: In this figure, we show three example state transitions. The true representation size (R-size) parameter is included in the figure to show the true underlying state. Assuming there are two R-size bins, R-size Bin1: [8,∞) and R-size Bin2: (0, 7], Figure A1a shows the case where the initial state is in R-size Bin1 and true representation size is 18. After the agent chooses to pool with a 2×2 filter with stride 2, the true representation size reduces to 9 but the R-size bin does not change. In Figure A1b, the same 2 × 2 pooling layer with stride 2 reduces the actual representation size of 14 to 7, but the bin changes to R-size Bin2. Therefore, in figures A1a and A1b, the agent ends up in different final states, despite originating in the same initial state and choosing the same action. Figure A1c shows that in our state-action space, when the agent takes an action that reduces the representation size, it will have uncertainty in which state it will transition to.
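Because the agent observes only the bin and not the true representation size, the same pooling action can land in either of two bins. One simple way to model this during simulation, sketched below with illustrative bin indices and probabilities, is to treat the pooled successor bin as a stochastic outcome.

def successor_bins_after_pooling(current_bin, num_bins=3):
    # Without the true representation size, pooling from a non-smallest bin may
    # either stay in the same bin or fall into the next smaller-size bin.
    if current_bin >= num_bins - 1:
        return {current_bin: 1.0}
    # Illustrative assumption: treat the two outcomes as equally likely.
    return {current_bin: 0.5, current_bin + 1: 0.5}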
C MNIST EXPERIMENT
We noticed that the final MNIST models were prone to overfitting, so we increased dropout and did a small grid search for the weight regularization parameter. For both tuning and final training, we warm-started the model with the weights learned after the first epoch of initial training. The final models and solvers can be found on our project website https://bowenbaker.github.io/metaqnn/ . Figure A2 shows the Q-learning performance for the MNIST experiment.
D FURTHER ANALYSIS OF Q-LEARNING
Figure 3 of the main text and Figure A2 show that as the agent begins to exploit, it improves in architecture selection. It is also informative to look at the distribution of models chosen at each ε. Figure A4 gives further insight into the performance achieved at each ε for both experiments.
D.1 Q-LEARNING STABILITY
Because the Q-learning agent explores via a random or semi-random distribution, it is natural to ask whether the agent can consistently improve architecture performance. While the success of the three independent experiments described in the main text alludes to stability, here we present further evidence. We conduct 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset (which corresponds to ∼7,000 training examples). We use a smaller dataset to reduce the computation time of each independent run to 10 GPU-days, as opposed to the 100 GPU-days it would take on the full dataset. As can be seen in Figure A3, the Q-learning procedure with the exploration schedule detailed in Table 2 is fairly stable. The standard deviation at ε = 1 is notably smaller than at other stages, which we attribute to the large difference in the number of samples at each stage.
Figure A2: MNIST Q-Learning Performance. The blue line shows a rolling mean of model accuracy versus iteration, where in each iteration of the algorithm the agent is sampling a model. Each bar (in light blue) marks the average accuracy over all models that were sampled during the exploration phase with the labeled ε. As ε decreases, the average accuracy goes up, demonstrating that the agent learns to select better-performing CNN architectures.
Figure A3: Figure A3a shows the mean model accuracy and standard deviation at each ε over 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset. Figure A3b shows the mean model accuracy at each ε for each independent experiment. Despite some variance due to a randomized exploration strategy, each independent run successfully improves architecture performance.
Furthermore, the best model found during each run had remarkably similar performance with a mean accuracy of 88.25% and standard deviation of 0.58%, which shows that each run successfully found at least one very high performing model. Note that we did not use an extended training schedule to improve performance in this experiment.
D.2 Q-VALUE ANALYSIS
We now analyze the actual Q-values generated by the agent during the training process. The learning agent iteratively updates the Q-values of each path during the ε-greedy exploration. Each Q-value is initialized at 0.5. After the ε-schedule is complete, we can analyze the final Q-value associated with each path to gain insights into the layer selection process. In the left column of Figure A5, we plot the average Q-value for each layer type at different layer depths (for both the SVHN and CIFAR-10 datasets). Roughly speaking, a higher Q-value associated with a layer type indicates a higher probability that the agent will pick that layer type. In Figure A5, we observe that, while the average Q-value is higher for convolution and pooling layers at lower layer depths, the Q-values for fully-connected and termination layers (softmax and global average pooling) increase as we go deeper into the network. This observation matches with traditional network designs.
We can also plot the average Q-values associated with different layer parameters for further analysis. In the right column of Figure A5, we plot the average Q-values for convolution layers with receptive
field sizes 1, 3, and 5 at different layer depths. The plots show that layers with receptive field size of 5 have a higher Q-value as compared to sizes 1 and 3 as we go deeper into the networks. This indicates that it might be beneficial to use larger receptive field sizes in deeper networks.
In summary, the Q-learning method enables us to perform analysis on the relative benefits of different design parameters of our state space, and possibly gain insights for new CNN designs.
E TOP TOPOLOGIES SELECTED BY ALGORITHM
In Tables A1 through A3, we present the top model architectures selected with Q-learning for each dataset (five each for CIFAR-10 and SVHN, ten for MNIST), along with their prediction error reported on the test set, and their total number of parameters. To download the Caffe solver and prototxt files, please visit https://bowenbaker.github.io/metaqnn/ .
Model Architecture | Test Error (%) | # Params (10^6)
[C(512,5,1), C(256,3,1), C(256,5,1), C(256,3,1), P(5,3), C(512,3,1), C(512,5,1), P(2,2), SM(10)] | 6.92 | 11.18
[C(128,1,1), C(512,3,1), C(64,1,1), C(128,3,1), P(2,2), C(256,3,1), P(2,2), C(512,3,1), P(3,2), SM(10)] | 8.78 | 2.17
[C(128,3,1), C(128,1,1), C(512,5,1), P(2,2), C(128,3,1), P(2,2), C(64,3,1), C(64,5,1), SM(10)] | 8.88 | 2.42
[C(256,3,1), C(256,3,1), P(5,3), C(256,1,1), C(128,3,1), P(2,2), C(128,3,1), SM(10)] | 9.24 | 1.10
[C(128,5,1), C(512,3,1), P(2,2), C(128,1,1), C(128,5,1), P(3,2), C(512,3,1), SM(10)] | 11.63 | 1.66
Table A1: Top 5 model architectures: CIFAR-10.
Model Architecture | Test Error (%) | # Params (10^6)
[C(128,3,1), P(2,2), C(64,5,1), C(512,5,1), C(256,3,1), C(512,3,1), P(2,2), C(512,3,1), C(256,5,1), C(256,3,1), C(128,5,1), C(64,3,1), SM(10)] | 2.24 | 9.81
[C(128,1,1), C(256,5,1), C(128,5,1), P(2,2), C(256,5,1), C(256,1,1), C(256,3,1), C(256,3,1), C(256,5,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 2.28 | 10.38
[C(128,5,1), C(128,3,1), C(64,5,1), P(5,3), C(128,3,1), C(512,5,1), C(256,5,1), C(128,5,1), C(128,5,1), C(128,3,1), SM(10)] | 2.32 | 6.83
[C(128,1,1), C(256,5,1), C(128,5,1), C(256,3,1), C(256,5,1), P(2,2), C(128,1,1), C(512,3,1), C(256,5,1), P(2,2), C(64,5,1), C(64,1,1), SM(10)] | 2.35 | 6.99
[C(128,1,1), C(256,5,1), C(128,5,1), C(256,5,1), C(256,5,1), C(256,1,1), P(3,2), C(128,1,1), C(256,5,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 2.36 | 10.05
Table A2: Top 5 model architectures: SVHN. Note that we do not report the best accuracy on test set from the above models in Tables 3 and 4 from the main text. This is because the model that achieved 2.28% on the test set performed the best on the validation set.
Model Architecture | Test Error (%) | # Params (10^6)
[C(64,1,1), C(256,3,1), P(2,2), C(512,3,1), C(256,1,1), P(5,3), C(256,3,1), C(512,3,1), FC(512), SM(10)] | 0.35 | 5.59
[C(128,3,1), C(64,1,1), C(64,3,1), C(64,5,1), P(2,2), C(128,3,1), P(3,2), C(512,3,1), FC(512), FC(128), SM(10)] | 0.38 | 7.43
[C(512,1,1), C(128,3,1), C(128,5,1), C(64,1,1), C(256,5,1), C(64,1,1), P(5,3), C(512,1,1), C(512,3,1), C(256,3,1), C(256,5,1), C(256,5,1), SM(10)] | 0.40 | 8.28
[C(64,3,1), C(128,3,1), C(512,1,1), C(256,1,1), C(256,5,1), C(128,3,1), P(5,3), C(512,1,1), C(512,3,1), C(128,5,1), SM(10)] | 0.41 | 6.27
[C(64,3,1), C(128,1,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1), C(512,5,1), C(128,5,1), C(64,1,1), C(512,5,1), C(256,5,1), C(64,5,1), SM(10)] | 0.43 | 8.10
[C(64,1,1), C(256,5,1), C(256,5,1), C(512,1,1), C(64,3,1), P(5,3), C(256,5,1), C(256,5,1), C(512,5,1), C(64,1,1), C(128,5,1), C(512,5,1), SM(10)] | 0.44 | 9.67
[C(128,3,1), C(512,3,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1), C(64,5,1), C(512,5,1), GAP(10), SM(10)] | 0.44 | 3.52
[C(256,3,1), C(256,5,1), C(512,3,1), C(256,5,1), C(512,1,1), P(5,3), C(256,3,1), C(64,3,1), C(256,5,1), C(512,3,1), C(128,5,1), C(512,5,1), SM(10)] | 0.46 | 12.42
[C(512,5,1), C(128,5,1), C(128,5,1), C(128,3,1), C(256,3,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 0.55 | 7.25
[C(64,5,1), C(512,5,1), P(3,2), C(256,5,1), C(256,3,1), C(256,3,1), C(128,1,1), C(256,3,1), C(256,5,1), C(64,1,1), C(256,3,1), C(64,3,1), SM(10)] | 0.56 | 7.55
Table A3: Top 10 model architectures: MNIST. We report the top 10 models for MNIST because we included all 10 in our final ensemble. Note that we do not report the best accuracy on test set from the above models in Tables 3 and 4 from the main text. This is because the model that achieved 0.44% on the test set performed the best on the validation set.
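The architecture strings in Tables A1-A3 use a compact notation: C(filters, receptive field size, stride), P(pool size, stride), FC(neurons), GAP(classes), and SM(classes). A small, hypothetical parser such as the one below can turn these strings into layer specifications for whatever framework is used to retrain them.

import re

def parse_architecture(arch):
    # Parse strings like "[C(512,5,1), P(2,2), FC(512), SM(10)]" into (type, params) pairs.
    layers = []
    for layer_type, args in re.findall(r"([A-Z]+)\(([\d,\s]+)\)", arch):
        layers.append((layer_type, [int(a) for a in args.replace(" ", "").split(",")]))
    return layers

spec = parse_architecture("[C(512,5,1), C(256,3,1), P(5,3), SM(10)]")
# -> [("C", [512, 5, 1]), ("C", [256, 3, 1]), ("P", [5, 3]), ("SM", [10])]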
Figure A4: Accuracy Distribution versus ε: Figures A4a, A4c, and A4e show the accuracy distribution for each ε for the SVHN, CIFAR-10, and MNIST experiments, respectively. Figures A4b, A4d, and A4f show the accuracy distributions for the initial ε = 1 and the final ε = 0.1. One can see that the accuracy distribution becomes much more peaked in the high accuracy ranges at small ε for each experiment.
Figure A5: Average Q-Value versus Layer Depth for different layer types is shown in the left column. Average Q-Value versus Layer Depth for different receptive field sizes of the convolution layer is shown in the right column. Rows correspond to the SVHN, CIFAR-10, and MNIST experiments. | 1. How does the reviewer assess the paper's overall quality and potential for publication?
2. What is the primary concern regarding the proposed method's computational cost and scalability?
3. Which specific experiments or comparisons would help demonstrate the method's effectiveness in low-data regimes or with unconventional datasets?
4. Is there any suggestion for improving the paper's content, such as mentioning relevant works (e.g., ResNets) in the table? | Review | Review
The paper looks solid and the idea is natural. Results seem promising as well.
I am mostly concerned about the computational cost of the method. 8-10 days on 10 GPUs for relatively tiny datasets is quite prohibitive for most applications I would ever encounter.
I think the main question is how this approach scales to larger images and also when applied to more exotic and possibly tiny datasets. Can you run an experiment on Caltech-101 for instance? I would be very curious to see if your approach is suitable for the low-data regime and areas where we all do not know right away how a suitable architecture looks like. For Cifar-10/100, MNIST and SVHN, everyone knows very well what a reasonable model initialization looks like.
If you show proof that you can discover a competitive architecture for something like Caltech-101, I would recommend the paper for publication.
Minor:
- ResNets should be mentioned in Table |
ICLR | Title
Designing Neural Network Architectures using Reinforcement Learning
Abstract
At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.
1 INTRODUCTION
Deep convolutional neural networks (CNNs) have seen great success in the past few years on a variety of machine learning problems (LeCun et al., 2015). A typical CNN architecture consists of several convolution, pooling, and fully connected layers. While constructing a CNN, a network designer has to make numerous design choices: the number of layers of each type, the ordering of layers, and the hyperparameters for each type of layer, e.g., the receptive field size, stride, and number of receptive fields for a convolution layer. The number of possible choices makes the design space of CNN architectures extremely large and hence, infeasible for an exhaustive manual search. While there has been some work (Pinto et al., 2009; Bergstra et al., 2013; Domhan et al., 2015) on automated or computer-aided neural network design, new CNN architectures or network design elements are still primarily developed by researchers using new theoretical insights or intuition gained from experimentation.
In this paper, we seek to automate the process of CNN architecture selection through a meta-modeling procedure based on reinforcement learning. We construct a novel Q-learning agent whose goal is to discover CNN architectures that perform well on a given machine learning task with no human intervention. The learning agent is given the task of sequentially picking layers of a CNN model. By discretizing and limiting the layer parameters to choose from, the agent is left with a finite but large space of model architectures to search from. The agent learns through random exploration and slowly begins to exploit its findings to select higher-performing models using the ε-greedy strategy (Mnih et al., 2015). The agent receives the validation accuracy on the given machine learning task as the reward for selecting an architecture. We expedite the learning process through repeated memory sampling using experience replay (Lin, 1993). We refer to this Q-learning based meta-modeling method as MetaQNN, which is summarized in Figure 1.1
We conduct experiments with a space of model architectures consisting of only standard convolution, pooling, and fully connected layers using three standard image classification datasets: CIFAR-10,
1For more information, model files, and code, please visit https://bowenbaker.github.io/metaqnn/
SVHN, and MNIST. The learning agent discovers CNN architectures that beat all existing networks designed only with the same layer types (e.g., Springenberg et al. (2014); Srivastava et al. (2015)). In addition, their performance is competitive against network designs that include complex layer types and training procedures (e.g., Clevert et al. (2015); Lee et al. (2016)). Finally, the MetaQNN selected models comfortably outperform previous automated network design methods (Stanley & Miikkulainen, 2002; Bergstra et al., 2013). The top network designs discovered by the agent on one dataset are also competitive when trained on other datasets, indicating that they are suited for transfer learning tasks. Moreover, we can generate not just one, but several varied, well-performing network designs, which can be ensembled to further boost the prediction performance.
2 RELATED WORK
Designing neural network architectures: Research on automating neural network design goes back to the 1980s when genetic algorithm-based approaches were proposed to find both architectures and weights (Schaffer et al., 1992). However, to the best of our knowledge, networks designed with genetic algorithms, such as those generated with the NEAT algorithm (Stanley & Miikkulainen, 2002), have been unable to match the performance of hand-crafted networks on standard benchmarks (Verbancsics & Harguess, 2013). Other biologically inspired ideas have also been explored; motivated by screening methods in genetics, Pinto et al. (2009) proposed a high-throughput network selection approach where they randomly sample thousands of architectures and choose promising ones for further training. In recent work, Saxena & Verbeek (2016) propose to sidestep the architecture selection process through densely connected networks of layers, which come closer to the performance of hand-crafted networks.
Bayesian optimization has also been used (Shahriari et al., 2016) for automatic selection of network architectures (Bergstra et al., 2013; Domhan et al., 2015) and hyperparameters (Snoek et al., 2012; Swersky et al., 2013). Notably, Bergstra et al. (2013) proposed a meta-modeling approach based on Tree of Parzen Estimators (TPE) (Bergstra et al., 2011) to choose both the type of layers and hyperparameters of feed-forward networks; however, they fail to match the performance of handcrafted networks.
Reinforcement Learning: Recently there has been much work at the intersection of reinforcement learning and deep learning. For instance, methods using CNNs to approximate the Q-learning utility function (Watkins, 1989) have been successful in game-playing agents (Mnih et al., 2015; Silver et al., 2016) and robotic control (Lillicrap et al., 2015; Levine et al., 2016). These methods rely on phases of exploration, where the agent tries to learn about its environment through sampling, and exploitation, where the agent uses what it learned about the environment to find better paths. In traditional reinforcement learning settings, over-exploration can lead to slow convergence times, yet over-exploitation can lead to convergence to local minima (Kaelbling et al., 1996). However, in the case of large or continuous state spaces, the ε-greedy strategy of learning has been empirically shown to converge (Vermorel & Mohri, 2005). Finally, when the state space is large or exploration is costly,
the experience replay technique (Lin, 1993) has proved useful in experimental settings (Adam et al., 2012; Mnih et al., 2015). We incorporate these techniques—Q-learning, the ε-greedy strategy and experience replay—in our algorithm design.
3 BACKGROUND
Our method relies on Q-learning, a type of reinforcement learning. We now summarize the theoretical formulation of Q-learning, as adapted to our problem. Consider the task of teaching an agent to find optimal paths as a Markov Decision Process (MDP) in a finite-horizon environment. Constraining the environment to be finite-horizon ensures that the agent will deterministically terminate in a finite number of time steps. In addition, we restrict the environment to have a discrete and finite state space S as well as action space U. For any state s_i ∈ S, there is a finite set of actions, U(s_i) ⊆ U, that the agent can choose from. In an environment with stochastic transitions, an agent in state s_i taking some action u ∈ U(s_i) will transition to state s_j with probability p_{s'|s,u}(s_j | s_i, u), which may be unknown to the agent. At each time step t, the agent is given a reward r_t, dependent on the transition from state s to s' and action u. r_t may also be stochastic according to a distribution p_{r|s',s,u}. The agent's goal is to maximize the total expected reward over all possible trajectories, i.e., max_{T_i ∈ T} R_{T_i}, where the total expected reward for a trajectory T_i is
R_{T_i} = \sum_{(s,u,s') \in T_i} \mathbb{E}_{r|s,u,s'}\left[ r \mid s, u, s' \right].    (1)
Though we limit the agent to a finite state and action space, there is still a combinatorially large number of trajectories, which motivates the use of reinforcement learning. We define the maximization problem recursively in terms of subproblems as follows. For any state s_i ∈ S and subsequent action u ∈ U(s_i), we define the maximum total expected reward to be Q^*(s_i, u). Q^*(·) is known as the action-value function and the individual Q^*(s_i, u) are known as Q-values. The recursive maximization equation, which is known as Bellman's Equation, can be written as
Q^*(s_i, u) = \mathbb{E}_{s_j|s_i,u}\left[ \mathbb{E}_{r|s_i,u,s_j}\left[ r \mid s_i, u, s_j \right] + \gamma \max_{u' \in U(s_j)} Q^*(s_j, u') \right].    (2)
In many cases, it is impossible to analytically solve Bellman’s Equation (Bertsekas, 2015), but it can be formulated as an iterative update
Q_{t+1}(s_i, u) = (1 - \alpha)\, Q_t(s_i, u) + \alpha \left[ r_t + \gamma \max_{u' \in U(s_j)} Q_t(s_j, u') \right].    (3)
Equation 3 is the simplest form of Q-learning proposed by Watkins (1989). For well-formulated problems, lim_{t→∞} Q_t(s, u) = Q^*(s, u), as long as each transition is sampled infinitely many times (Bertsekas, 2015). The update equation has two parameters: (i) α is the Q-learning rate, which determines the weight given to new information over old information, and (ii) γ is the discount factor, which determines the weight given to short-term rewards over future rewards. The Q-learning algorithm is model-free, in that the learning agent can solve the task without ever explicitly constructing an estimate of environmental dynamics. In addition, Q-learning is off-policy, meaning it can learn about optimal policies while exploring via a non-optimal behavioral distribution, i.e., the distribution by which the agent explores its environment.
We choose the behavior distribution using an ε-greedy strategy (Mnih et al., 2015). With this strategy, a random action is taken with probability ε and the greedy action, max_{u ∈ U(s_i)} Q_t(s_i, u), is chosen with probability 1 − ε. We anneal ε from 1 → 0 such that the agent begins in an exploration phase and slowly starts moving towards the exploitation phase. In addition, when the exploration cost is large (which is true for our problem setting), it is beneficial to use the experience replay technique for faster convergence (Lin, 1992). In experience replay, the learning agent is provided with a memory of its past explored paths and rewards. At a given interval, the agent samples from the memory and updates its Q-values via Equation 3.
4 DESIGNING NEURAL NETWORK ARCHITECTURES WITH Q-LEARNING
We consider the task of training a learning agent to sequentially choose neural network layers. Figure 2 shows feasible state and action spaces (a) and a potential trajectory the agent may take along with the CNN architecture defined by this trajectory (b). We model the layer selection process as a Markov Decision Process with the assumption that a well-performing layer in one network should
also perform well in another network. We make this assumption based on the hierarchical nature of the feature representations learned by neural networks with many hidden layers (LeCun et al., 2015). The agent sequentially selects layers via the ε-greedy strategy until it reaches a termination state. The CNN architecture defined by the agent's path is trained on the chosen learning problem, and the agent is given a reward equal to the validation accuracy. The validation accuracy and architecture description are stored in a replay memory, and experiences are sampled periodically from the replay memory to update Q-values via Equation 3. The agent follows an ε schedule which determines its shift from exploration to exploitation.
Our method requires three main design choices: (i) reducing CNN layer definitions to simple state tuples, (ii) defining a set of actions the agent may take, i.e., the set of layers the agent may pick next given its current state, and (iii) balancing the size of the state-action space—and correspondingly, the model capacity—with the amount of exploration needed by the agent to converge. We now describe the design choices and the learning process in detail.
4.1 THE STATE SPACE
Each state is defined as a tuple of all relevant layer parameters. We allow five different types of layers: convolution (C), pooling (P), fully connected (FC), global average pooling (GAP), and softmax (SM), though the general method is not limited to this set. Table 1 shows the relevant parameters for each layer type and also the discretization we chose for each parameter. Each layer has a parameter layer depth (shown as Layer 1, 2, ... in Figure 2). Adding layer depth to the state space allows us to constrict the action space such that the state-action graph is directed and acyclic (DAG) and also allows us to specify a maximum number of layers the agent may select before terminating.
Each layer type also has a parameter called representation size (R-size). Convolutional nets progressively compress the representation of the original signal through pooling and convolution. The presence of these layers in our state space may lead the agent on a trajectory where the intermediate signal representation gets reduced to a size that is too small for further processing. For example, five 2× 2 pooling layers each with stride 2 will reduce an image of initial size 32× 32 to size 1× 1. At this stage, further pooling, or convolution with receptive field size greater than 1, would be meaningless and degenerate. To avoid such scenarios, we add the R-size parameter to the state tuple s, which allows us to restrict actions from states with R-size n to those that have a receptive field size less than or equal to n. To further constrict the state space, we chose to bin the representation sizes into three discrete buckets. However, binning adds uncertainty to the state transitions: depending on the true underlying representation size, a pooling layer may or may not change the R-size bin. As a result, the action of pooling can lead to two different states, which we model as stochasticity in state transitions. Please see Figure A1 in appendix for an illustrated example.
4.2 THE ACTION SPACE
We restrict the agent from taking certain actions to both limit the state-action space and make learning tractable. First, we allow the agent to terminate a path at any point, i.e., it may choose a termination state from any non-termination state. In addition, we only allow transitions from a state with layer depth i to a state with layer depth i + 1, which ensures that there are no loops in the graph. This constraint ensures that the state-action graph is always a DAG. Any state at the maximum layer depth, as prescribed in Table 1, may only transition to a termination layer.
Next, we limit the number of fully connected (FC) layers to be at maximum two, because a large number of FC layers can lead to too many learnable parameters. The agent at a state with type FC may transition to another state with type FC if and only if the number of consecutive FC states is less than the maximum allowed. Furthermore, a state s of type FC with number of neurons d may only transition to either a termination state or a state s′ of type FC with number of neurons d′ ≤ d. An agent at a state of type convolution (C) may transition to a state with any other layer type. An agent at a state with layer type pooling (P) may transition to a state with any other layer type other than another P state, because consecutive pooling layers are equivalent to a single, larger pooling layer which could lie outside of our chosen state space. Furthermore, only states with representation size in bins (8, 4] and (4, 1] may transition to an FC layer, which ensures that the number of weights does not become unreasonably large. Note that a majority of these constraints are in place to enable faster convergence on our limited hardware (see Section 5) and are not a limitation of the method in itself.
4.3 Q-LEARNING TRAINING PROCEDURE
For the iterative Q-learning updates (Equation 3), we set the Q-learning rate (α) to 0.01. In addition, we set the discount factor (γ) to 1 to not over-prioritize short-term rewards. We decrease ε from 1.0 to 0.1 in steps, where the step-size is defined by the number of unique models trained (Table 2). At ε = 1.0, the agent samples CNN architectures via a random walk along a uniformly weighted Markov chain. Every topology sampled by the agent is trained using the procedure described in Section 5, and the prediction performance of this network topology on the validation set is recorded. We train a larger number of models at ε = 1.0 as compared to other values of ε to ensure that the agent has adequate time to explore before it begins to exploit. We stop the agent at ε = 0.1 (and not at ε = 0) to obtain a stochastic final policy, which generates perturbations of the global minimum.2 Ideally, we want to identify several well-performing model topologies, which can then be ensembled to improve prediction performance.
During the entire training process (starting at ε = 1.0), we maintain a replay dictionary which stores (i) the network topology and (ii) prediction performance on a validation set, for all of the sampled models. If a model that has already been trained is re-sampled, it is not re-trained; instead, the previously found validation accuracy is presented to the agent. After each model is sampled and trained, the agent randomly samples 100 models from the replay dictionary and applies the Q-value update defined in Equation 3 for all transitions in each sampled sequence. The Q-value update is applied to the transitions in temporally reversed order, which has been shown to speed up Q-value convergence (Lin, 1993).
2 ε = 0 indicates a completely deterministic policy. Because we would like to generate several good models for ensembling and analysis, we stop at ε = 0.1, which represents a stochastic final policy.
5 EXPERIMENT DETAILS
During the model exploration phase, we trained each network topology with a quick and aggressive training scheme. For each experiment, we created a validation set by randomly taking 5,000 samples from the training set such that the resulting class distributions were unchanged. For every network, a dropout layer was added after every two layers. The ith dropout layer, out of a total of n dropout layers, had a dropout probability of i/(2n). Each model was trained for a total of 20 epochs with the Adam optimizer (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999, ε = 10^-8. The batch size was set to 128, and the initial learning rate was set to 0.001. If the model failed to perform better than a random predictor after the first epoch, we reduced the learning rate by a factor of 0.4 and restarted training, for a maximum of 5 restarts. For models that started learning (i.e., performed better than a random predictor), we reduced the learning rate by a factor of 0.2 every 5 epochs. All weights were initialized with Xavier initialization (Glorot & Bengio, 2010). Our experiments using Caffe (Jia et al., 2014) took 8-10 days to complete for each dataset with a hardware setup consisting of 10 NVIDIA GPUs.
After the agent completed the ε schedule (Table 2), we selected the top ten models that were found over the course of exploration. These models were then finetuned using a much longer training schedule, and only the top five were used for ensembling. We now provide details of the datasets and the finetuning process.
The Street View House Numbers (SVHN) dataset has 10 classes with a total of 73,257 samples in the original training set, 26,032 samples in the test set, and 531,131 additional samples in the extended training set. During the exploration phase, we only trained with the original training set, using 5,000 random samples as validation. We finetuned the top ten models with the original plus extended training set, by creating preprocessed training and validation sets as described by Lee et al. (2016). Our final learning rate schedule after tuning on validation set was 0.025 for 5 epochs, 0.0125 for 5 epochs, 0.0001 for 20 epochs, and 0.00001 for 10 epochs.
CIFAR-10, the 10 class tiny image dataset, has 50,000 training samples and 10,000 testing samples. During the exploration phase, we took 5,000 random samples from the training set for validation. The maximum layer depth was increased to 18. After the experiment completed, we used the same validation set to tune hyperparameters, resulting in a final training scheme which we ran on the entire training set. In the final training scheme, we set a learning rate of 0.025 for 40 epochs, 0.0125 for 40 epochs, 0.0001 for 160 epochs, and 0.00001 for 60 epochs, with all other parameters unchanged. During this phase, we preprocess using global contrast normalization and use moderate data augmentation, which consists of random mirroring and random translation by up to 5 pixels.
MNIST, the 10 class handwritten digits dataset, has 60,000 training samples and 10,000 testing samples. We preprocessed each image with global mean subtraction. In the final training scheme, we trained each model for 40 epochs and decreased learning rate every 5 epochs by a factor of 0.2. For further tuning details please see Appendix C.
6 RESULTS
Model Selection Analysis: From Q-learning principles, we expect the learning agent to improve in its ability to pick network topologies as ε reduces and the agent enters the exploitation phase. In
Figure 3, we plot the rolling mean of prediction accuracy over 100 models and the mean accuracy of models sampled at different ε values, for the CIFAR-10 and SVHN experiments. The plots show that, while the prediction accuracy remains flat during the exploration phase (ε = 1) as expected, the agent consistently improves in its ability to pick better-performing models as ε reduces from 1 to 0.1. For example, the mean accuracy of models in the SVHN experiment increases from 52.25% at ε = 1 to 88.02% at ε = 0.1. Furthermore, we demonstrate the stability of the Q-learning procedure with 10 independent runs on a subset of the SVHN dataset in Section D.1 of the Appendix. Additional analysis of Q-learning results can be found in Section D.2.
The top models selected by the Q-learning agent vary in the number of parameters but all demonstrate high performance (see Appendix Tables 1-3). For example, the number of parameters for the top five CIFAR-10 models range from 11.26 million to 1.10 million, with only a 2.32% decrease in test error. We find design motifs common to the top hand-crafted network architectures as well. For example, the agent often chooses a layer of type C(N, 1, 1) as the first layer in the network. These layers generate N learnable linear transformations of the input data, which is similar in spirit to preprocessing of input data from RGB to a different color spaces such as YUV, as found in prior work (Sermanet et al., 2012; 2013).
Prediction Performance: We compare the prediction performance of the MetaQNN networks discovered by the Q-learning agent with state-of-the-art methods on three datasets. We report the accuracy of our best model, along with an ensemble of the top five models. First, we compare MetaQNN with six existing architectures that are designed with standard convolution, pooling, and fully-connected layers alone, similar to our designs. As seen in Table 3, our top model alone, as well as the committee ensemble of five models, outperforms all similar models. Next, we compare our results with six top networks overall, which contain complex layer types and design ideas, including generalized pooling functions, residual connections, and recurrent modules. Our results are competitive with these methods as well (Table 4). Finally, our method outperforms existing automated network design methods. MetaQNN obtains an error of 6.92% as compared to 21.2% reported by Bergstra et al. (2011) on CIFAR-10; and it obtains an error of 0.32% as compared to 7.9% reported by Verbancsics & Harguess (2013) on MNIST.
The difference in validation error between the top 10 models for MNIST was very small, so we also created an ensemble with all 10 models. This ensemble achieved a test error of 0.28%—which beats the current state-of-the-art on MNIST without data augmentation.
The best CIFAR-10 model performs 1-2% better than the four next best models, which is why the ensemble accuracy is lower than the best model’s accuracy. We posit that the CIFAR-10 MetaQNN agent did not have adequate exploration time given the larger state space compared to that of the SVHN experiment, causing it to not find more models with performance similar to the best model. Furthermore, the coarse training scheme may not have been as well suited for CIFAR-10 as it was for SVHN, causing some models to underperform.
Transfer Learning Ability: Network designs such as VGGnet (Simonyan & Zisserman, 2014) can be adopted to solve a variety of computer vision problems. To check if the MetaQNN networks provide similar transfer learning ability, we use the best MetaQNN model on the CIFAR-10 dataset for training other computer vision tasks. The model performs well (Table 5) both when training from random initializations, and finetuning from existing weights.
7 CONCLUDING REMARKS
Neural networks are being used in an increasingly wide variety of domains, which calls for scalable solutions to produce problem-specific model architectures. We take a step towards this goal and show that a meta-modeling approach using reinforcement learning is able to generate tailored CNN designs for different image classification tasks. Our MetaQNN networks outperform previous metamodeling methods as well as hand-crafted networks which use the same types of layers.
While we report results for image classification problems, our method could be applied to different problem settings, including supervised (e.g., classification, regression) and unsupervised (e.g., autoencoders). The MetaQNN method could also aid constraint-based network design, by optimizing parameters such as size, speed, and accuracy. For instance, one could add a threshold in the state-action space barring the agent from creating models larger than the desired limit. In addition, one could modify the reward function to penalize large models for constraining memory or penalize slow forward passes to incentivize quick inference.
∗Results in this column obtained with the top MetaQNN architecture for CIFAR-10, trained from random initialization with CIFAR-100 data.
There are several future avenues for research in reinforcement learning-driven network design as well. In our current implementation, we use the same set of hyperparameters to train all network topologies during the Q-learning phase and further finetune the hyperparameters for top models selected by the MetaQNN agent. However, our approach could be combined with hyperparameter optimization methods to further automate the network design process. Moreover, we constrict the state-action space using coarse, discrete bins to accelerate convergence. It would be possible to move to larger state-action spaces using methods for Q-function approximation (Bertsekas, 2015; Mnih et al., 2015).
ACKNOWLEDGMENTS
We thank Peter Downs for creating the project website and contributing to illustrations. We acknowledge Center for Bits and Atoms at MIT for their help with computing resources. Finally, we thank members of Camera Culture group at MIT Media Lab for their help and support.
A ALGORITHM
We first describe the main components of the MetaQNN algorithm. Algorithm 1 shows the main loop, where the parameter M determines how many models to run for a given ε and the parameter K determines how many times to sample the replay database to update Q-values on each iteration. The function TRAIN refers to training the specified network and returns a validation accuracy. Algorithm 2 details the method for sampling a new network using the ε-greedy strategy, where we assume we have a function TRANSITION that returns the next state given a state and action. Finally, Algorithm 3 implements the Q-value update detailed in Equation 3, with the discount factor set to 1, for an entire state sequence in temporally reversed order.
Algorithm 1 Q-learning For CNN Topologies
Initialize:
    replay memory ← [ ]
    Q ← {(s, u) ∀s ∈ S, u ∈ U(s) : 0.5}
for episode = 1 to M do
    S, U ← SAMPLE NEW NETWORK(ε, Q)
    accuracy ← TRAIN(S)
    replay memory.append((S, U, accuracy))
    for memory = 1 to K do
        S_SAMPLE, U_SAMPLE, accuracy_SAMPLE ← Uniform{replay memory}
        Q ← UPDATE Q VALUES(Q, S_SAMPLE, U_SAMPLE, accuracy_SAMPLE)
    end for
end for

Algorithm 2 SAMPLE NEW NETWORK(ε, Q)
Initialize:
    state sequence S = [s_START]
    action sequence U = [ ]
while U[−1] ≠ terminate do
    α ∼ Uniform[0, 1)
    if α > ε then
        u = argmax_{u ∈ U(S[−1])} Q[(S[−1], u)]
        s′ = TRANSITION(S[−1], u)
    else
        u ∼ Uniform{U(S[−1])}
        s′ = TRANSITION(S[−1], u)
    end if
    U.append(u)
    if u ≠ terminate then
        S.append(s′)
    end if
end while
return S, U

Algorithm 3 UPDATE Q VALUES(Q, S, U, accuracy)
Q[S[−1], U[−1]] = (1 − α) · Q[S[−1], U[−1]] + α · accuracy
for i = length(S) − 2 to 0 do
    Q[S[i], U[i]] = (1 − α) · Q[S[i], U[i]] + α · max_{u ∈ U(S[i+1])} Q[S[i+1], u]
end for
return Q
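The ε-greedy rollout of Algorithm 2 can equivalently be written as the following runnable Python sketch, where q_table maps (state, action) pairs to Q-values, u_space(s) lists the legal actions of state s, and transition(s, u) returns the next state; the concrete state and action encodings are placeholders.

import random

def sample_new_network(epsilon, q_table, u_space, transition, start_state, terminate='terminate'):
    states, actions = [start_state], []
    while not actions or actions[-1] != terminate:
        legal = u_space(states[-1])
        if random.random() > epsilon:
            # Exploit: choose the action with the highest Q-value in the current state.
            action = max(legal, key=lambda u: q_table[(states[-1], u)])
        else:
            # Explore: choose a legal action uniformly at random.
            action = random.choice(legal)
        actions.append(action)
        if action != terminate:
            states.append(transition(states[-1], action))
    return states, actions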
B REPRESENTATION SIZE BINNING
As mentioned in Section 4.1 of the main text, we introduce a parameter called representation size to prohibit the agent from taking actions that can reduce the intermediate signal representation to a size that is too small for further processing. However, this process leads to uncertainties in state transitions, as illustrated in Figure A1, which is handled by the standard Q-learning formulation.
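The bin bookkeeping can be illustrated with a small Python sketch using the two-bin example of Figure A1; the bin boundaries below follow that example and are not necessarily the ones used in the experiments.

def r_size_bin(size):
    # Two-bin example of Figure A1: R-size Bin 1 = [8, inf), R-size Bin 2 = (0, 7].
    return 1 if size >= 8 else 2

def pooled_size(size, filter_size, stride):
    # Output spatial size of a pooling layer (no padding assumed).
    return (size - filter_size) // stride + 1

# A 2x2, stride-2 pooling layer keeps one start state in bin 1 (18 -> 9) but moves
# the other into bin 2 (14 -> 7), so the next bin is uncertain given only the bin.
assert r_size_bin(pooled_size(18, 2, 2)) == 1
assert r_size_bin(pooled_size(14, 2, 2)) == 2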
Figure A1: Representation size binning: In this figure, we show three example state transitions. The true representation size (R-size) parameter is included in the figure to show the true underlying state. Assuming there are two R-size bins, R-size Bin1: [8,∞) and R-size Bin2: (0, 7], Figure A1a shows the case where the initial state is in R-size Bin1 and true representation size is 18. After the agent chooses to pool with a 2×2 filter with stride 2, the true representation size reduces to 9 but the R-size bin does not change. In Figure A1b, the same 2 × 2 pooling layer with stride 2 reduces the actual representation size of 14 to 7, but the bin changes to R-size Bin2. Therefore, in figures A1a and A1b, the agent ends up in different final states, despite originating in the same initial state and choosing the same action. Figure A1c shows that in our state-action space, when the agent takes an action that reduces the representation size, it will have uncertainty in which state it will transition to.
C MNIST EXPERIMENT
We noticed that the final MNIST models were prone to overfitting, so we increased dropout and did a small grid search for the weight regularization parameter. For both tuning and final training, we warmed the model with the learned weights from after the first epoch of initial training. The final models and solvers can be found on our project website https://bowenbaker.github.io/metaqnn/ . Figure A2 shows the Q-Learning performance for the MNIST experiment.
D FURTHER ANALYSIS OF Q-LEARNING
Figure 3 of the main text and Figure A2 show that as the agent begins to exploit, it improves in architecture selection. It is also informative to look at the distribution of models chosen at each ε. Figure A4 gives further insight into the performance achieved at each ε for both experiments.
D.1 Q-LEARNING STABILITY
Because the Q-learning agent explores via a random or semi-random distribution, it is natural to ask whether the agent can consistently improve architecture performance. While the success of the three independent experiments described in the main text allude to stability, here we present further evidence. We conduct 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset (which corresponds to ∼7,000 training examples). We use a smaller dataset to reduce the computation time of each independent run to 10 GPU-days, as opposed to the 100 GPU-days it would take on the full dataset. As can be seen in Figure A3, the Q-learning procedure with the exploration schedule detailed in Table 2 is fairly stable. The standard deviation at ε = 1 is notably smaller than at other stages, which we attribute to the large difference in number of samples at each stage.
Figure A2: MNIST Q-Learning Performance. The blue line shows a rolling mean of model accuracy versus iteration, where in each iteration of the algorithm the agent is sampling a model. Each bar (in light blue) marks the average accuracy over all models that were sampled during the exploration phase with the labeled ε. As ε decreases, the average accuracy goes up, demonstrating that the agent learns to select better-performing CNN architectures.
Figure A3: Figure A3a shows the mean model accuracy and standard deviation at each ε over 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset. Figure A3b shows the mean model accuracy at each ε for each independent experiment. Despite some variance due to a randomized exploration strategy, each independent run successfully improves architecture performance.
Furthermore, the best model found during each run had remarkably similar performance with a mean accuracy of 88.25% and standard deviation of 0.58%, which shows that each run successfully found at least one very high performing model. Note that we did not use an extended training schedule to improve performance in this experiment.
D.2 Q-VALUE ANALYSIS
We now analyze the actual Q-values generated by the agent during the training process. The learning agent iteratively updates the Q-values of each path during the ε-greedy exploration. Each Q-value is initialized at 0.5. After the ε-schedule is complete, we can analyze the final Q-value associated with each path to gain insights into the layer selection process. In the left column of Figure A5, we plot the average Q-value for each layer type at different layer depths (for both the SVHN and CIFAR-10 datasets). Roughly speaking, a higher Q-value associated with a layer type indicates a higher probability that the agent will pick that layer type. In Figure A5, we observe that, while the average Q-value is higher for convolution and pooling layers at lower layer depths, the Q-values for fully-connected and termination layers (softmax and global average pooling) increase as we go deeper into the network. This observation matches traditional network designs.
We can also plot the average Q-values associated with different layer parameters for further analysis. In the right column of Figure A5, we plot the average Q-values for convolution layers with receptive
field sizes 1, 3, and 5 at different layer depths. The plots show that layers with receptive field size of 5 have a higher Q-value as compared to sizes 1 and 3 as we go deeper into the networks. This indicates that it might be beneficial to use larger receptive field sizes in deeper networks.
In summary, the Q-learning method enables us to perform analysis on the relative benefits of different design parameters of our state space, and possibly gain insights for new CNN designs.
E TOP TOPOLOGIES SELECTED BY ALGORITHM
In Tables A1 through A3, we present the top five model architectures selected with Q-learning for each dataset, along with their prediction error reported on the test set, and their total number of parameters. To download the Caffe solver and prototext files, please visit https://bowenbaker.github.io/metaqnn/ .
Model Architecture | Test Error (%) | # Params (10^6)
[C(512,5,1), C(256,3,1), C(256,5,1), C(256,3,1), P(5,3), C(512,3,1), C(512,5,1), P(2,2), SM(10)] | 6.92 | 11.18
[C(128,1,1), C(512,3,1), C(64,1,1), C(128,3,1), P(2,2), C(256,3,1), P(2,2), C(512,3,1), P(3,2), SM(10)] | 8.78 | 2.17
[C(128,3,1), C(128,1,1), C(512,5,1), P(2,2), C(128,3,1), P(2,2), C(64,3,1), C(64,5,1), SM(10)] | 8.88 | 2.42
[C(256,3,1), C(256,3,1), P(5,3), C(256,1,1), C(128,3,1), P(2,2), C(128,3,1), SM(10)] | 9.24 | 1.10
[C(128,5,1), C(512,3,1), P(2,2), C(128,1,1), C(128,5,1), P(3,2), C(512,3,1), SM(10)] | 11.63 | 1.66
Table A1: Top 5 model architectures: CIFAR-10.
Model Architecture | Test Error (%) | # Params (10^6)
[C(128,3,1), P(2,2), C(64,5,1), C(512,5,1), C(256,3,1), C(512,3,1), P(2,2), C(512,3,1), C(256,5,1), C(256,3,1), C(128,5,1), C(64,3,1), SM(10)] | 2.24 | 9.81
[C(128,1,1), C(256,5,1), C(128,5,1), P(2,2), C(256,5,1), C(256,1,1), C(256,3,1), C(256,3,1), C(256,5,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 2.28 | 10.38
[C(128,5,1), C(128,3,1), C(64,5,1), P(5,3), C(128,3,1), C(512,5,1), C(256,5,1), C(128,5,1), C(128,5,1), C(128,3,1), SM(10)] | 2.32 | 6.83
[C(128,1,1), C(256,5,1), C(128,5,1), C(256,3,1), C(256,5,1), P(2,2), C(128,1,1), C(512,3,1), C(256,5,1), P(2,2), C(64,5,1), C(64,1,1), SM(10)] | 2.35 | 6.99
[C(128,1,1), C(256,5,1), C(128,5,1), C(256,5,1), C(256,5,1), C(256,1,1), P(3,2), C(128,1,1), C(256,5,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 2.36 | 10.05
Table A2: Top 5 model architectures: SVHN. Note that we do not report the best accuracy on test set from the above models in Tables 3 and 4 from the main text. This is because the model that achieved 2.28% on the test set performed the best on the validation set.
Model Architecture | Test Error (%) | # Params (10^6)
[C(64,1,1), C(256,3,1), P(2,2), C(512,3,1), C(256,1,1), P(5,3), C(256,3,1), C(512,3,1), FC(512), SM(10)] | 0.35 | 5.59
[C(128,3,1), C(64,1,1), C(64,3,1), C(64,5,1), P(2,2), C(128,3,1), P(3,2), C(512,3,1), FC(512), FC(128), SM(10)] | 0.38 | 7.43
[C(512,1,1), C(128,3,1), C(128,5,1), C(64,1,1), C(256,5,1), C(64,1,1), P(5,3), C(512,1,1), C(512,3,1), C(256,3,1), C(256,5,1), C(256,5,1), SM(10)] | 0.40 | 8.28
[C(64,3,1), C(128,3,1), C(512,1,1), C(256,1,1), C(256,5,1), C(128,3,1), P(5,3), C(512,1,1), C(512,3,1), C(128,5,1), SM(10)] | 0.41 | 6.27
[C(64,3,1), C(128,1,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1), C(512,5,1), C(128,5,1), C(64,1,1), C(512,5,1), C(256,5,1), C(64,5,1), SM(10)] | 0.43 | 8.10
[C(64,1,1), C(256,5,1), C(256,5,1), C(512,1,1), C(64,3,1), P(5,3), C(256,5,1), C(256,5,1), C(512,5,1), C(64,1,1), C(128,5,1), C(512,5,1), SM(10)] | 0.44 | 9.67
[C(128,3,1), C(512,3,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1), C(64,5,1), C(512,5,1), GAP(10), SM(10)] | 0.44 | 3.52
[C(256,3,1), C(256,5,1), C(512,3,1), C(256,5,1), C(512,1,1), P(5,3), C(256,3,1), C(64,3,1), C(256,5,1), C(512,3,1), C(128,5,1), C(512,5,1), SM(10)] | 0.46 | 12.42
[C(512,5,1), C(128,5,1), C(128,5,1), C(128,3,1), C(256,3,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] | 0.55 | 7.25
[C(64,5,1), C(512,5,1), P(3,2), C(256,5,1), C(256,3,1), C(256,3,1), C(128,1,1), C(256,3,1), C(256,5,1), C(64,1,1), C(256,3,1), C(64,3,1), SM(10)] | 0.56 | 7.55
Table A3: Top 10 model architectures: MNIST. We report the top 10 models for MNIST because we included all 10 in our final ensemble. Note that we do not report the best accuracy on test set from the above models in Tables 3 and 4 from the main text. This is because the model that achieved 0.44% on the test set performed the best on the validation set.
Figure A4: Accuracy Distribution versus ε: Figures A4a, A4c, and A4e show the accuracy distribution for each ε for the SVHN, CIFAR-10, and MNIST experiments, respectively. Figures A4b, A4d, and A4f show the accuracy distributions for the initial ε = 1 and the final ε = 0.1. One can see that the accuracy distribution becomes much more peaked in the high accuracy ranges at small ε for each experiment.
Figure A5: Average Q-Value versus Layer Depth for different layer types is shown in the left column. Average Q-Value versus Layer Depth for different receptive field sizes of the convolution layer is shown in the right column. | 1. How effective is the proposed method in addressing complex tasks?
2. What are the limitations of the current approach in terms of scalability?
3. How does the performance of the proposed method compare to state-of-the-art models on large-scale datasets like ImageNet?
4. Are there any concerns or potential drawbacks in applying Q-learning to deep architecture learning? | Review | Review
Authors learn deep architectures on a few small vision problems using Q-learning and obtain solid results: SOTA when limited to certain types of layers, and competitive against everything else. It would be good to know how well this performs when more complex structures are allowed. The paper would be much more convincing on a real-size task such as ImageNet. |
ICLR | Title
Single-Stage Open-world Instance Segmentation with Cross-task Consistency Regularization
Abstract
Open-World Instance Segmentation (OWIS) is an emerging research topic that aims to segment class-agnostic object instances from images. The mainstream approaches use a two-stage segmentation framework, which first locates candidate object bounding boxes and then performs instance segmentation. In this work, we instead promote a single-stage transformer-based framework for OWIS. We argue that the end-to-end training process in the single-stage framework can be more convenient for directly regularizing the localization of class-agnostic object pixels. Based on the transformer-based instance segmentation framework, we propose a regularization model to predict foreground pixels and use its relation to instance segmentation to construct a cross-task consistency loss. We show that such a consistency loss can alleviate the problem of incomplete instance annotation – a common problem in existing OWIS datasets. We also show that the proposed loss lends itself to an effective solution for semi-supervised OWIS, which can be considered an extreme case in which all object annotations are absent for some images. Our extensive experiments demonstrate that the proposed method achieves impressive results in both fully-supervised and semi-supervised settings. Compared to SOTA methods, the proposed method significantly improves the AP100 score by 4.75% in the UVO→UVO setting and by 4.05% in the COCO→UVO setting. In the case of semi-supervised learning, our model trained with only 30% labeled data even outperforms its fully-supervised counterpart trained with 50% labeled data. The code will be released soon.
1 INTRODUCTION
Traditional instance segmentation Lin et al. (2014); Cordts et al. (2016) methods often assume that objects in images can be categorized into a finite set of predefined classes (i.e., closed-world). Such an assumption, however, can be easily violated in many real-world applications, where models will encounter many new object classes that never appeared in the training data. Therefore, researchers recently attempted to tackle the problem of Open-World Instance Segmentation (OWIS) Wang et al. (2021), which targets class-agnostic segmentation of all objects in the image.
Prior to this paper, most existing methods for OWIS are two-stage Wang et al. (2022); Saito et al. (2021): they detect bounding boxes of objects and then segment them. Despite their promising performance, such a paradigm cannot recover if object bounding boxes are missed. In contrast, a transformer-based approach called Mask2Former Cheng et al. (2022) has recently been introduced, yet only for closed-world instance segmentation. Based on Mask2Former, we propose a Transformer-based Open-world Instance Segmentation method named TOIS.
Note that our work is not just a straightforward adaptation of Mask2Former from closed-world to open-world. This is because, unlike closed-world segmentation, where the object categories can be clearly defined before annotation, the open-world scenario makes it challenging for annotators to label all instances completely or to ensure annotation consistency across different images, because they cannot rely on a well-defined finite set of object categories. As shown in Figure 1(a), annotators miss some instances. How to handle such incomplete annotations (i.e., missed instances) remains challenging.
Recent work LDET Saito et al. (2021) addresses this problem by generating synthetic data with a plain background, but it is based on a decoupled training strategy that can only be used in two-stage methods, while our method is single-stage. Another work called GGN Wang et al. (2022) handles this incomplete instance-level annotation issue by training a pairwise affinity predictor to generate pseudo labels, but training such an additional predictor is complicated and time-consuming.
In contrast, our proposed TOIS method is end-to-end and simpler. We address this incomplete annotation issue via a novel regularization module, which is simple yet effective. Specifically, it is convenient to concurrently predict not only (1) instance masks but also a (2) foreground map. Ideally, as shown in Figure 1(b), the foreground region should be consistent with the union of all instance masks. To penalize their inconsistency, we devise a cross-task consistency loss, which can down-weight the adverse effects caused by incomplete annotation. This is because when an instance is missed in annotation, as long as it is captured by both our predicted instance masks and our predicted foreground map, the consistency loss will be low and hence encourage such a prediction. Experiments in Figure 1(g) show that such a consistency loss is effective even when annotations miss many instances. As shown in Figure 1(c), novel objects that are unannotated in the training set are segmented successfully by our method.
So far, like most existing methods, we have focused on fully-supervised OWIS. In this paper, we further extend OWIS to the semi-supervised setting, where some training images do not have any annotations at all. This is of great interest because annotating segmentation maps is very costly. Notably, our proposed regularization module can also benefit semi-supervised OWIS – an unlabeled image can be considered an extreme case of incomplete annotation in which all of the instance annotations are missed. Specifically, we perform semi-supervised OWIS by first warming up the network on the labeled set and then continuing training it with the cross-task consistency loss on the mixture of labeled and unlabeled images.
Contributions. In a nutshell, our main contributions can be summarized as follows:
1. Building upon the recently proposed closed-world segmentation method Mask2Former, we propose a Transformer-based Open-world Instance Segmentation (TOIS) method.
2. We propose a novel cross-task consistency loss that mitigates incomplete mask annotations, a critical issue for open-world segmentation in particular.
3. We further extend the proposed method into a semi-supervised OWIS model, which effectively makes use of unlabeled images to help train the OWIS model.
4. Our extensive experiments demonstrate that the proposed method reaches leading OWIS performance in fully-supervised learning (Figure 1(d-f)), and that our semi-supervised extension can achieve remarkable performance with a much smaller amount of labeled data.
2 RELATED WORK
Closed-world instance segmentation (CWIS) He et al. (2017); Chen et al. (2020); Dai et al. (2016); Bolya et al. (2019); Wang et al. (2020a) requires approaches to assign a class label and an instance ID to every pixel. Two-stage CWIS approaches, such as Mask R-CNN, always include a bounding box estimation branch and an FCN-based mask segmentation branch, working in a ’detect-then-segment’ way. To improve efficiency, one-stage methods such as CenterMask Dai et al. (2016), YOLACT Bolya et al. (2019) and BlendMask Chen et al. (2020) have been proposed, which remove the proposal generation and feature grouping process. To further free CWIS from local box detection, Wang et al. (2020a) proposed SOLO and obtained results on par with the above methods. In recent years, methods Fang et al. (2021); Dong et al. (2021) following DETR Carion et al. (2020) consider the instance segmentation task as a set prediction problem. In addition, Cheng et al. proposed a universal segmentation framework, MaskFormer Cheng et al. (2021), and its upgraded version Mask2Former Cheng et al. (2022), which even outperforms state-of-the-art architectures specifically designed for the CWIS task.
Notably, the two-stage method CenterMask preserves pixel alignment and separates objects simultaneously by integrating the local and global branches. Although introducing global information in this way helps improve mask quality in CWIS, it cannot handle the open-world task very well, because CenterMask multiplies the local shape and the cropped saliency map to form the final mask for each instance, and there is no separate loss for the local shape and the global saliency. When such a method faces the incomplete annotations in OWIS tasks, the mask predictions corresponding to unlabeled instances would still be penalized during training, making it difficult to discover novel objects at inference. An efficient way to jointly take advantage of global and local information in OWIS tasks remains to be explored.
Open-world instance segmentation The OWIS task Wang et al. (2021) considered here focuses on the following aspects: (1) all instances (without stuff) have to be segmented; (2) class-agnostic pixel-level results should be predicted with only an instance ID, and incremental learning ability is unnecessary. Several OWIS works have recently been developed. Du et al. (2021) proposed a two-stage segmentation algorithm, which decouples the segmentation and detection modules during training and testing. This algorithm achieves competitive results on the UVO dataset thanks to the abundant training data and the introduction of effective modules such as cascade RPN Vu et al. (2019), SimOTA Ge et al. (2021), etc. Another work named LDET Saito et al. (2021) attempts to solve the instance-level incomplete annotation problem. Specifically, LDET first generates the background of a synthesized image by taking a small piece of background from the original image and enlarging it to the same size as the original image. The instances are then matted onto the foreground of the synthesized image. The synthesized data is used only to train the mask prediction branch, and the remaining branches are still trained with the original data. Meanwhile, Wang et al. proposed GGN Wang et al. (2022), an algorithm that combines top-down and bottom-up segmentation ideas to improve prediction accuracy by generating high-quality pseudo-labels. Specifically, a Pairwise Affinity (PA) predictor is trained first, and a grouping module is used to extract and rank segments from the predicted PA to generate pseudo-labels, which are fused with the ground truth to train the segmentation model.
3 METHODOLOGY
In this section, we first define the OWIS problem with both fully and semi-supervised learning. Then the architecture of our TOIS and the proposed cross-task consistency loss are introduced in Sections 3.2 and 3.3, respectively. Finally, Sections 3.4 and 3.5 show how to optimize TOIS in a fully-supervised and a semi-supervised way, respectively.
3.1 PROBLEM DEFINITION OF OWIS
Open-world instance segmentation (OWIS) aims to segment all object instances (things) of any class, including those that did not appear in the training phase. Technically, OWIS is a task to produce a set of binary masks, where each mask corresponds to a class-agnostic instance. A pixel value of 1 in a mask indicates part of an object instance, while 0 indicates that the pixel does not belong to that instance.
3.2 MODEL ARCHITECTURE
Our proposed TOIS framework consists of three branches to alleviate the incomplete annotation problem, as shown in Figure 2. Basically, we follow the design of the one-stage Mask2Former Cheng et al. (2022). The objectness prediction branch estimates the weighting score for each mask by applying a sequential Transformer decoder and an MLP. The mask prediction branch predicts the binary mask for each instance. It first generates N binary masks, with N ideally larger than the actual instance number Ki, i.e., the number of annotated object instances in a given image i. Each mask is multiplied by a weighting score with a value between 0 and 1, indicating whether the mask should be selected as an instance mask. This process generates the masks in an end-to-end way, which avoids missing instances because of poor detection bounding boxes and meanwhile reduces the redundant per-proposal segmentation cost. We refer to Cheng et al. (2021; 2022) for more details.
The foreground prediction branch is a light-weight fully convolutional network that estimates the foreground regions belonging to any object instance. The more detailed design of the foreground prediction branch is in the Appendix. This branch guides the training of the mask branch through our cross-task consistency loss proposed in the following Sec. 3.3. Once training is done, we discard this branch and only use the objectness and mask prediction branches at inference time. Therefore, we do not introduce any additional parameters or computation at inference, which benefits running efficiency.
3.3 LEARNING WITH THE CROSS-TASK CONSISTENCY REGULARIZATION
A critical limitation of the OWIS is the never-perfect annotations due to the difficulties in annotating class-agnostic object instances. Towards alleviating this issue, we propose a regularization to provide extra supervision to guide the OWIS model training under incomplete annotations.
We construct a branch to predict the foreground regions that belong to any of the object instances. Formally, we create the foreground annotation G(x, y) as

G(x, y) = 0 if Σ_{i=1}^{K} g^i(x, y) = 0, and 1 otherwise,    (1)
where g^i(x, y) is the mask of one of the K annotated object instances for the current image, and the union of the g^i defines the foreground object regions. Here (x, y) denotes the coordinate of a pixel in the image. We use G(x, y) as the label to train the foreground prediction branch.
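For illustration, the foreground label of Eq. 1 can be computed from the instance annotations with a few lines of NumPy; the masks are assumed to be binary H×W arrays.

import numpy as np

def foreground_label(instance_masks):
    # Pixel-wise union of the K annotated instance masks g^i (Eq. 1).
    stacked = np.stack(instance_masks, axis=0)            # (K, H, W), values in {0, 1}
    return (stacked.sum(axis=0) > 0).astype(np.float32)   # (H, W) foreground label G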
Our consistency loss encourages the model outputs to satisfy the relationship indicated in Eq. 1, which states that the foreground prediction should be the union of the instance predictions. To do so, we use the following equation as an estimate of the foreground from the instance predictions:

Ĝ(x, y) = Φ( Σ_{j=1}^{K} m_j(x, y) ),    (2)

where m_j denotes the confidence of pixels in the j-th predicted mask, and Φ represents the sigmoid function. Then, letting the foreground prediction from the foreground prediction branch be F, our cross-task consistency loss makes F and Ĝ(x, y) consistent, which leads to the following loss function:
Lc = DICE(Ĝ,F ) + BCE(Ĝ,F ), (3)
where DICE and BCE denote the dice-coefficient loss Milletari et al. (2016) and binary cross-entropy loss, respectively.
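A minimal PyTorch sketch of Eqs. 2-3 is given below. It assumes mask_conf is an (N, H, W) tensor of per-instance mask confidences and fg_pred is the (H, W) foreground prediction F in [0, 1]; treating Ĝ as the target of the BCE term is one possible reading of Eq. 3, and the tensor shapes in the actual model may differ.

import torch
import torch.nn.functional as tF

def dice_loss(pred, target, eps=1e-6):
    # Soft dice loss between two maps with values in [0, 1].
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def cross_task_consistency_loss(mask_conf, fg_pred):
    # Eq. 2: estimate the foreground as the sigmoid of the summed mask confidences.
    g_hat = torch.sigmoid(mask_conf.sum(dim=0))
    # Eq. 3: dice + binary cross-entropy between the estimate and the foreground prediction.
    return dice_loss(g_hat, fg_pred) + tF.binary_cross_entropy(fg_pred, g_hat)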
The consistency loss enjoys the following appealing properties. It is self-calibrated and independent of the incompleteness level of the labels. As shown in Figure 3, for an instance mistakenly annotated as background but correctly found by both the foreground prediction branch and the mask prediction branch, the model would be penalized by the mask loss and the foreground loss; the consistency loss, however, treats this prediction as correct. In this way, the consistency loss down-weights the adverse effects caused by the other, unreliable segmentation losses, relieving the overwhelming penalties on unlabeled instances.
3.4 FULLY-SUPERVISED LEARNING
The overall fully-supervised optimization of the proposed TOIS is carried out by minimizing the following joint loss formulation Lf ,
Lf = αLm + βLp + γLc + ωLo,    (4)
where
Lm = BCE(m, g) + DICE(m, g),    (5)
Lp = BCE(F, G) + DICE(F, G),    (6)
Lo = BCE(s, v),    (7)
where Lm, Lp and Lo denote the loss terms for mask prediction, foreground prediction, and objectness scoring, respectively. α, β, γ and ω are the weights of the corresponding losses. m and g represent the predicted masks and the corresponding ground truth, respectively. F and G are the foreground prediction result and the generated foreground ground truth, while the estimated objectness score is denoted by s. v is a set of binary values that indicate whether each mask is an instance. Before computing Lm, matching between the set of predicted masks and the ground truth is done via the bipartite matching algorithm defined in Cheng et al. (2022).
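Reusing dice_loss and the tF alias from the sketch above, the joint objective of Eqs. 4-7 can be assembled as follows; mask_pred and mask_gt stand for the bipartite-matched mask predictions and their ground truths, consistency is the loss of Eq. 3, and the loss weights are hyperparameters (the default values below are placeholders, not the values used in our experiments).

def tois_loss(mask_pred, mask_gt, fg_pred, fg_gt, obj_scores, obj_targets, consistency,
              alpha=1.0, beta=1.0, gamma=1.0, omega=1.0):
    l_m = tF.binary_cross_entropy(mask_pred, mask_gt) + dice_loss(mask_pred, mask_gt)  # Eq. 5
    l_p = tF.binary_cross_entropy(fg_pred, fg_gt) + dice_loss(fg_pred, fg_gt)          # Eq. 6
    l_o = tF.binary_cross_entropy(obj_scores, obj_targets)                              # Eq. 7
    return alpha * l_m + beta * l_p + gamma * consistency + omega * l_o                 # Eq. 4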
3.5 EXTENSION TO SEMI-SUPERVISED LEARNING
Due to the ambiguity of the instance definition in OWIS, it is much harder for annotators to follow the annotation instructions, and this makes annotations for OWIS expensive. It is therefore desirable to use unlabeled data to help train OWIS models. In this regard, our proposed cross-task consistency loss only requires the outputs of both predictors to satisfy the consistent relationship indicated in Eq. 1, and does not always need ground truth annotations. Thus, we apply this loss to unlabeled data, which yields a semi-supervised learning scheme. Specifically, the easier-to-learn foreground prediction branch is able to learn well from a few labeled images in the warm-up stage. The resulting foreground map can then serve as a constraint to optimize the open-world mask predictions through our cross-task consistency loss when labels do not exist. In this way, our Semi-TOIS achieves a good trade-off between annotation cost and model accuracy.
Semi-supervised learning process. Given a labeled set Dl = {(x_i, y_i)}_{i=1}^{N_l} and an unlabeled set Du = {x_j}_{j=1}^{N_u}, our goal is to train an OWIS model by leveraging both a large amount of unlabeled data and a smaller set of labeled data. Specifically, we initially use Dl to train TOIS as a warm-up stage, giving the model a good initialization. We then jointly train the OWIS model on both the labeled and unlabeled data. For the labeled data, we employ the loss function defined in Eq. 4. For the unlabeled data, we apply only the cross-task consistency loss Lc.
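The joint training stage can be sketched as follows; full_loss_fn and consistency_fn stand for the losses described above, and the output dictionary keys are placeholders rather than the actual interface of our implementation.

def semi_supervised_step(model, labeled_batch, unlabeled_batch, full_loss_fn, consistency_fn):
    loss = 0.0
    # Labeled images contribute the full loss of Eq. 4.
    for image, targets in labeled_batch:
        outputs = model(image)
        loss = loss + full_loss_fn(outputs, targets)
    # Unlabeled images contribute only the cross-task consistency loss Lc.
    for image in unlabeled_batch:
        outputs = model(image)
        loss = loss + consistency_fn(outputs['masks'], outputs['foreground'])
    return loss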
4 EXPERIMENTS
For demonstrating the effectiveness of our proposed TOIS, we compared it with other fully-supervised methods through intra-dataset and cross-dataset evaluations. We also performed ablation studies in these two settings to show the effect of each component. Moreover, we apply the proposed cross-task consistency loss for semi-supervised learning and test our method on the UVO validation set.
4.1 IMPLEMENTATION DETAILS AND EVALUATION METRICS
Implementation details Detectron2 Wu et al. (2019) is used to implement the proposed TOIS framework. Multi-scale feature maps are extracted from a ResNet-50 He et al. (2016) or Swin Transformer Liu et al. (2021) model pre-trained on ImageNet Deng et al. (2009). Our transformer encoder-decoder design follows the same architecture as in Mask2Former Cheng et al. (2022). The number of object queries M is set to 100. Both the ResNet and Swin backbones use an initial learning rate of 0.0001 and a weight decay of 0.05. A simple data augmentation method, Cutout DeVries & Taylor (2017), is applied to the training data. All experiments were run on 8 NVIDIA V100 GPUs with 32GB memory.
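As a reference, Cutout can be implemented in a few lines; the patch length below is an assumed value, since the exact augmentation parameters are not specified here.

import numpy as np

def cutout(image, length=16, rng=np.random):
    # Zero out a random square patch of a (..., H, W) image array.
    h, w = image.shape[-2], image.shape[-1]
    y, x = rng.randint(h), rng.randint(w)
    y1, y2 = max(0, y - length // 2), min(h, y + length // 2)
    x1, x2 = max(0, x - length // 2), min(w, x + length // 2)
    out = image.copy()
    out[..., y1:y2, x1:x2] = 0.0
    return out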
Pseudo-labeling for the COCO train set Pseudo-labeling is a common way to handle incomplete annotations. To explore the compatibility of our method with the pseudo-labeling operation, we employ a simple strategy to generate pseudo-labels for unannotated instances in the COCO train set Lin et al. (2014) in our experiments. Specifically, we follow a typical self-training framework, introducing a teacher-student framework to generate pseudo-labels. The two models have the same architecture, as shown in Figure 2, but differ in their weights. The weights of the student model are optimized by common back-propagation, while the weights of the teacher model are updated by computing the exponential moving average (EMA) of the student model. During training, image i is first fed into the teacher model to generate mask predictions. Predictions whose confidence is higher than a certain threshold are taken as pseudo-proposals. The state Sij of a pseudo-proposal pij is determined according to Equation (8).
Sij = True, if max_{gi} φ(pij, gi) ⩽ ε, and False otherwise,    (8)

in which gi denotes any ground truth instance in image i, φ denotes the IoU function, and ε is a threshold to further filter unreliable pseudo-proposals. Finally, pseudo-proposals with state True are considered reliable pseudo-labels. Here, the confidence threshold and the IoU threshold ε for selecting pseudo-labels are set to 0.8 and 0.2, respectively. Then, we jointly use the ground truth and the pseudo-labels to form the training data annotations. If a region is identified as belonging to an instance in the pseudo-labels, it is considered a positive sample during training.
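The selection rule of Eq. 8 amounts to the following sketch, where proposals and gt_masks are boolean mask arrays of the same spatial size and the thresholds follow the values given above.

def mask_iou(a, b):
    # IoU between two boolean masks of identical shape.
    inter = float((a & b).sum())
    union = float((a | b).sum())
    return inter / union if union > 0 else 0.0

def select_pseudo_labels(proposals, confidences, gt_masks, conf_thresh=0.8, iou_thresh=0.2):
    kept = []
    for mask, conf in zip(proposals, confidences):
        if conf < conf_thresh:
            continue  # only confident teacher predictions become pseudo-proposals
        # Keep a proposal only if it overlaps little with every annotated instance.
        if all(mask_iou(mask, gt) <= iou_thresh for gt in gt_masks):
            kept.append(mask)
    return kept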
Evaluation metrics The Mean Average Recall (AR) and Mean Average Precision (AP) Lin et al. (2014) are utilized to measure the performance of approaches in a class-agnostic way.
4.2 FULLY-SUPERVISED EXPERIMENTAL SETTING
Intra-dataset evaluation UVO is the largest open-world instance segmentation dataset. Its training and test images come from the same domain but do not overlap. Here, we train TOIS on the UVO-train subset and test on the UVO-val subset. In addition, we split the COCO dataset into 20 seen (VOC) classes and 60 unseen (non-VOC) classes. We train a model only on the annotations of the 20 VOC classes and test it on the 60 non-VOC classes, evaluating its ability to discover novel objects.
Cross-dataset evaluation The open-world setting assumes that instances in the target domain can belong to novel classes. Therefore, it is essential for an OWIS method to handle the potential domain gap with good generalization ability, so cross-dataset evaluation, in which training and test data come from different domains, is necessary. Here, we first train the proposed TOIS model and the compared methods on the COCO-train subset, while testing them on the UVO-val dataset to evaluate their generalizability. Then we extend the experiments to an autonomous driving scenario, training the models on the Cityscapes Cordts et al. (2016) dataset and evaluating them on Mapillary Neuhold et al. (2017). Cityscapes has 8 foreground classes, while Mapillary contains 35 foreground classes including vehicles, animals, trash cans, mailboxes, etc.
4.3 FULLY-SUPERVISED EXPERIMENTAL RESULTS
Intra-dataset evaluation The results are illustrated in Table 1. The single-stage approaches based on the mask classification framework perform better than the two-stage methods. Among them, our proposed TOIS achieves a significant performance improvement over the Mask2Former baseline, which is 4.75% in AP100 and 3.93% in AR100 when using the Swin-B backbone. For the VOC→non-VOC setting, the experimental results are shown in Table 2, which verify that our proposed method can improve the performance for all instances, especially large ones.
Cross-dataset evaluation For the COCO→UVO task, according to Table 3, it is clear that the proposed TOIS outperforms all previous methods, achieving a new state-of-the-art AR100 of 54.86%, which is 11.56% higher than the previous state-of-the-art method GGN Wang et al. (2022). We also applied the proposed techniques to another classic one-stage method, SOLO V2 Wang et al. (2020b). The experimental results in Table 4 show that this improves AR100 and AP100 by 3.11% and 2.79% compared to SOLO V2. For the Cityscapes→Mapillary task, the overall AP and AR of TOIS still surpass those of other state-of-the-art methods (Table 5), which demonstrates the effectiveness of our proposed techniques.
Table 4: Results of TOIS with the SOLO V2 structure (UVO-train → UVO-val).
Table 5: Cross-dataset evaluation on autonomous driving scenes. Results of Cityscapes → Mapillary.
We show some of the COCO→UVO visualization results in Figure 4 to qualitatively demonstrate the superiority of our method. Please refer to the supplementary material for more qualitative examples.
4.4 ABLATION STUDY
We perform cross-dataset and intra-dataset ablation studies to analyze the effectiveness of each component of the proposed TOIS, including the foreground prediction branch and the cross-task consistency loss. We also try combinations of the pseudo-label generation strategy and our cross-task consistency loss to investigate their individual and synergistic effects. Using the Swin-B backbone, these models are trained on the COCO-train subset and the UVO-train subset, respectively. The metrics reported in Table 6 are evaluated on the UVO-val dataset.
Table 6: Ablation results of the proposed components by cross-dataset and intra-dataset evaluations. Foreground prediction (FP), Cross-task consistency (CTC) loss, Pseudo label (PL).
FP | CTC loss | PL | COCO AP100(%) | COCO AR100(%) | UVO AP100(%) | UVO AR100(%)
– | – | – | 28.65 | 51.54 | 35.12 | 51.39
✓ | – | – | 29.02 | 51.60 | 35.55 | 51.73
– | – | ✓ | 30.09 | 52.97 | 32.94 | 51.64
✓ | ✓ | – | 31.39 | 53.83 | 38.02 | 54.74
✓ | – | ✓ | 30.17 | 52.98 | 33.35 | 50.90
✓ | ✓ | ✓ | 32.21 | 54.86 | 37.71 | 52.27
Effectiveness of the foreground prediction branch Table 6 shows that although a separate foreground prediction branch can guide the method to optimize towards discovering foreground pixels, it only slightly boosts the performance.
Effectiveness of the cross-task consistency loss The cross-task consistency loss has a positive effect on both the sparsely annotated (COCO) and the densely annotated (UVO) training datasets. The values of AP100 and AR100 increase significantly (2.74%↑ and 2.49%↑ on COCO, 2.90%↑ and 3.35%↑ on UVO) after applying the cross-task consistency loss together with the foreground prediction branch. This result outperforms the TOIS counterpart with only pseudo-labeling, showing our effectiveness. In addition, jointly utilizing our cross-task consistency loss and the pseudo-labeling strategy leads to performance improvements in both settings, which demonstrates the synergistic effect of the two approaches.
Effectiveness of pseudo-labeling Pseudo-labeling is not always beneficial for all types of datasets. As shown in Table 6, the AP100 and AR100 of the COCO-trained model increase by 1.35% and 0.78%, respectively, after applying the pseudo-label generation. However, pseudo-labeling causes a performance degradation (e.g., 2.18%↓ in AP100) for a model trained on the UVO dataset. Compared with COCO, the UVO dataset is annotated more densely. We conjecture that the background annotations of UVO are more reliable than those of COCO, where carefully selected pseudo-labels are more likely to represent unlabeled objects. The generated pseudo-labels for UVO therefore contain more noise than those for COCO, and these additional noisy labels mislead the model training.
4.5 SEMI-SUPERVISED LEARNING EXPERIMENT
Experimental setting We divide the UVO-train dataset into a labeled subset Dl and an unlabeled subset Du. The semi-supervised model Semi-TOIS is optimized as described in Section 3.5 on Dl ∪ Du, while the fully-supervised baseline Labelled-Only-TOIS is trained merely on Dl. To ensure the comprehensiveness of the experiments, two different data division settings are included: {Dl=30%, Du=70%} and {Dl=50%, Du=50%}. The backbone applied here is Swin-B. We also implemented the classic Mean Teacher model and a simple pseudo-label method based on Mask2Former for comparison. Note that our semi-supervised setting is different from that in some previous work, such as MaskX R-CNN Hu et al. (2018) and ShapeMask Kuo et al. (2019), which assume that the training set C = A ∪ B, where examples from the categories in A have masks, while those in B have only bounding boxes.
Results and analysis As presented in Figure 5, the Semi-TOIS50 model trained on UVO with 50% annotated data outperforms the Semi-TOIS30 model trained with 30% labeled training images. However, the performance gap between Semi-TOIS30 and Semi-TOIS50 is small. In addition, Semi-TOIS30 improves over Labelled-Only-TOIS30 by 3.36% and 5.33% in AP100 and AR100, respectively. Compared to Labelled-Only-TOIS50, Semi-TOIS50 still achieves significant advantages (2.36% in AP100 and 6.12% in AR100). These results show that the cross-task consistency loss is able to extract information from unlabeled data and facilitates model optimization in the semi-supervised setting. It is notable that the results of Semi-TOIS30 are even better than those of Labelled-Only-TOIS50. This illustrates that the information extracted by the cross-task consistency loss from the remaining 70% unlabeled data is more abundant than that contained in the additional 20% of fully-labeled data. Therefore, our algorithm can achieve better performance with fewer
annotations. This characteristic is promising for solving the OWIS problem. In addition, we also compared Semi-TOIS with a classic semi-supervised method and a recent end-to-end segmentation method. The results in Tables 7 and 8 show our advantages over the compared methods.
5 CONCLUSION
This paper proposes the first transformer-based framework (TOIS) for the open-world instance segmentation task. Apart from predicting the instance mask and objectness score, our framework introduces a foreground prediction branch to segment the regions belonging to any instance. Utilizing the outputs of this branch, we propose a novel cross-task consistency loss to enforce the foreground prediction to be consistent with the prediction of the instance masks. We experimentally demonstrate that this mechanism alleviates the problem of incomplete annotation, which is a critical issue for open-world segmentation. Our extensive experiments demonstrate that TOIS outperforms state-of-the-art methods by a large margin on typical datasets. We further demonstrate that our cross-task consistency loss can utilize unlabeled images to obtain some performance gains for semi-supervised instance segmentation. This is an important step toward reducing laborious and expensive human annotation.
A APPENDIX
Please refer to the additional supplement material. | 1. What is the main contribution of the paper in open-world instance segmentation?
2. What are the strengths and weaknesses of the proposed method compared to existing approaches?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the issues with the paper's figures and captions?
5. What is the purpose of the foreground prediction branch, and why is it used as an auxiliary supervisory signal?
6. How does the consistency loss alleviate the problem of incomplete instance annotation? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper tackles the task of open-world instance segmentation in images (no predefined set of classes). The proposed method extends on an existing state-of-the-art single-stage (without employing a dedicated proposals procedure) approach, such as Mask2former, through the addition of a foreground segmentation branch. The foreground prediction map is used as an auxiliary supervisory signal for the instance prediction branch to increase the consistency among these maps. The consistency loss applied on top of these predictions also alleviates the problem of incomplete instance annotation which could be considered a core contribution of this work (working with labeled and unlabeled data simultaneously - semi-supervised scenario).
Strengths And Weaknesses
Strengths:
The tackled problem is challenging and of interest to the research community - open-world instance segmentation is far from being solved - since it implies the discovery of novel object instances that were not seen during training
Weaknesses:
Problem statement description could be improved. The introductory section does not properly address the tackled task or the way in which this work contributes to current issues.
Hard to grasp from the paper what are the authors' contributions - I did not find the paper an easy read.
The authors rename the Mask2former architecture as SOIS and wrongfully consider it their contribution since the architecture itself did not change as the authors stated themselves but rather the scenario in which it was used (extension to a semi-supervised scenario by employing a proxy constraint, in the form of a consistency loss between predicted masks of two existing branches in the architecture)
Other comments:
Benchmarks are not referenced when first mentioned - same goes for COCO and UVO in the abstract - at least mention that these are datasets.
Figure 1 is crowded and hard to follow all of its parts. It would make more sense to split it into two parts - a, b, c - Part 1 and d, e, f, g - Part 2. In (b) consistency is misspelled and Cityscapes is misspelled in the figure caption.
Figure 1c, 1d, 1e and 1f are not referenced in the paper.
Figure 2 is not informative at all - What is the purpose of that backbone? - What is the format of the output? What is the input of the FCN in the foreground prediction branch and why not use the RGB image? The image caption does not complement the figure - most of the notations are not explained and the flow of the different branches is very hard to follow. Also, sentences such as "the foreground prediction branch segments a foreground region" can be derived from the naming of these components. Authors should highlight only the information that cannot be inferred solely from the naming.
Clarity, Quality, Novelty And Reproducibility
Clarity - Could be improved. The figure and caption do not complement each other. Also, there are notations in the paper that are not clearly defined (Section 3.2) What is Ki? The number of annotated object instances in a given image i? Should be explicitly specified.
Quality - The paper could be improved in terms of language and structure - there are also a number of typos and missing references that should be addressed. Most parts of Figure 1 are not referenced in the paper.
Novelty - I do not consider the contribution novel, but an extension of an existing state-of-the-art that was applied in a different learning setup that deals with missing annotations - not the fully-supervised case. For this purpose alone the authors apply a consistency loss computed as a segmentation loss between the instance segmentation branch and a foreground prediction branch and use this as a constraint in learning when labels are missing.
Reproducibility - The authors have released the code in their supplementary material - the experiments should be easy to reproduce. |
ICLR | Title
Single-Stage Open-world Instance Segmentation with Cross-task Consistency Regularization
Abstract
Open-World Instance Segmentation (OWIS) is an emerging research topic that aims to segment class-agnostic object instances from images. The mainstream approaches use a two-stage segmentation framework, which first locates the candidate object bounding boxes and then performs instance segmentation. In this work, we instead promote a single-stage transformer-based framework for OWIS. We argue that the end-to-end training process in the single-stage framework can be more convenient for directly regularizing the localization of class-agnostic object pixels. Based on the transformer-based instance segmentation framework, we propose a regularization model to predict foreground pixels and use its relation to instance segmentation to construct a cross-task consistency loss. We show that such a consistency loss could alleviate the problem of incomplete instance annotation – a common problem in the existing OWIS datasets. We also show that the proposed loss lends itself to an effective solution to semi-supervised OWIS that could be considered an extreme case that all object annotations are absent for some images. Our extensive experiments demonstrate that the proposed method achieves impressive results in both fully-supervised and semi-supervised settings. Compared to SOTA methods, the proposed method significantly improves the AP100 score by 4.75% in UVO dataset →UVO dataset setting and 4.05% in COCO dataset →UVO dataset setting. In the case of semi-supervised learning, our model learned with only 30% labeled data, even outperforms its fully-supervised counterpart with 50% labeled data. The code will be released soon.
1 INTRODUCTION
Traditional instance segmentation Lin et al. (2014); Cordts et al. (2016) methods often assume that objects in images can be categorized into a finite set of predefined classes (i.e., closed-world). Such an assumption, however, can be easily violated in many real-world applications, where models will encounter many new object classes that never appeared in the training data. Therefore, researchers recently attempted to tackle the problem of Open-World Instance Segmentation (OWIS) Wang et al. (2021), which targets class-agnostic segmentation of all objects in the image.
Prior to this paper, most existing methods for OWIS are two-stage Wang et al. (2022); Saito et al. (2021): they detect bounding boxes of objects and then segment them. Despite their promising performance, such a paradigm cannot recover if object bounding boxes are not detected. In contrast, a transformer-based approach called Mask2Former Cheng et al. (2022) has recently been introduced, yet only for closed-world instance segmentation. Based on Mask2Former, we propose a Transformer-based Open-world Instance Segmentation method named TOIS.
Note that our work is not just a straightforward adaptation of Mask2Former from the closed-world to the open-world setting. Unlike closed-world segmentation, where the object categories can be clearly defined before annotation, the open-world scenario makes it challenging for annotators to label all instances completely or to ensure annotation consistency across different images, because they do not have a well-defined finite set of object categories. As shown in Figure 1(a), annotators miss some instances. How to handle such incomplete annotations (i.e., missed instances) remains a challenge.
The recent work LDET Saito et al. (2021) addresses this problem by generating synthetic data with a plain background, but it relies on a decoupled training strategy that can only be used in two-stage methods, whereas our method is single-stage. Another work, GGN Wang et al. (2022), handles the incomplete instance-level annotation issue by training a pairwise affinity predictor to generate pseudo labels. However, training such an additional predictor is complicated and time-consuming.
In contrast, our proposed TOIS method is end-to-end and simpler. We address the incomplete annotation issue via a novel regularization module, which is simple yet effective. Specifically, it is convenient to concurrently predict not only (1) instance masks but also (2) a foreground map. Ideally, as shown in Figure 1(b), the foreground region should be consistent with the union of all instance masks. To penalize their inconsistency, we devise a cross-task consistency loss, which down-weights the adverse effects caused by incomplete annotation. This is because when an instance is missed in the annotation, as long as it is captured by both the predicted instance masks and the predicted foreground map, the consistency loss remains low and hence encourages such a prediction. Experiments in Figure 1(g) show that the consistency loss is effective even when annotations miss many instances. And as shown in Figure 1(c), novel objects that are unannotated in the training set are segmented successfully by our method.
So far, like most existing methods, we have focused on fully-supervised OWIS. In this paper, we further extend OWIS to the semi-supervised setting, where some training images do not have any annotations at all. This is of great interest because annotating segmentation maps is very costly. Notably, our proposed regularization module can also benefit semi-supervised OWIS – an unlabeled image can be considered an extreme case of incomplete annotation in which all of the instance annotations are missing. Specifically, we perform semi-supervised OWIS by first warming up the network on the labeled set and then continuing to train it with the cross-task consistency loss on the mixture of labeled and unlabeled images.
Contributions. In a nutshell, our main contributions could be summarized as:
1. Building upon the recently proposed closed-world segmentation method Mask2Former, we propose a Transformer-based Open-world Instance Segmentation (TOIS) method.
2. We propose a novel cross-task consistency loss that mitigates the issue of incomplete mask annotations, which is a critical issue for open-world segmentation in particular.
3. We further extend the proposed method into a semi-supervised OWIS model, which effectively makes use of unlabeled images to help train the OWIS model.
4. Our extensive experiments demonstrate that the proposed method achieves leading OWIS performance in the fully-supervised setting (Figure 1(d-f)), and that our semi-supervised extension achieves remarkable performance with a much smaller amount of labeled data.
2 RELATED WORK
Closed-world instance segmentation (CWIS) He et al. (2017); Chen et al. (2020); Dai et al. (2016); Bolya et al. (2019); Wang et al. (2020a) requires approaches to assign a class label and instance ID to every pixel. Two-stage CWIS approaches, such as Mask R-CNN, typically include a bounding box estimation branch and an FCN-based mask segmentation branch, working in a 'detect-then-segment' manner. To improve efficiency, one-stage methods such as CenterMask Dai et al. (2016), YOLACT Bolya et al. (2019) and BlendMask Chen et al. (2020) have been proposed, which remove the proposal generation and feature grouping process. To further free CWIS from local box detection, Wang et al. Wang et al. (2020a) proposed SOLO and obtained results on par with the above methods. In recent years, methods Fang et al. (2021); Dong et al. (2021) following DETR Carion et al. (2020) treat instance segmentation as a set prediction problem. In addition, Cheng et al. proposed a universal segmentation framework, MaskFormer Cheng et al. (2021), and its upgraded version Mask2Former Cheng et al. (2022), which even outperforms state-of-the-art architectures specifically designed for the CWIS task.
Notably, the two-stage method CenterMask preserves pixel alignment and separates objects simultaneously by integrating a local and a global branch. Although introducing global information in this way helps improve mask quality in CWIS, it cannot handle the open-world task very well, because CenterMask multiplies the local shape and the cropped saliency map to form the final mask for each instance. There is no separate loss for the local shape and the global saliency. When such a method faces the incomplete annotations of OWIS tasks, the generated mask predictions corresponding to unlabeled instances are still penalized during training, making it difficult to discover novel objects at inference. An efficient way to jointly take advantage of global and local information in OWIS tasks deserves to be explored.
Open-world instance segmentation The OWIS task Wang et al. (2021) focuses on the following aspects: (1) all instances (without stuff) have to be segmented; (2) class-agnostic pixel-level results should be predicted with only an instance ID, and incremental learning ability is unnecessary. Several OWIS works have recently been developed. Du et al. (2021) proposed a two-stage segmentation algorithm, which decouples the segmentation and detection modules during training and testing. This algorithm achieves competitive results on the UVO dataset thanks to abundant training data and the introduction of effective modules such as cascade RPN Vu et al. (2019), SimOTA Ge et al. (2021), etc. Another work, LDET Saito et al. (2021), attempts to solve the instance-level incomplete annotation problem. Specifically, LDET first generates the background of a synthesized image by taking a small piece of background from the original image and enlarging it to the same size as the original image. The instances are then matted onto the foreground of the synthesized image. The synthesized data is used only to train the mask prediction branch, while the remaining branches are still trained with the original data. Meanwhile, Wang et al. proposed GGN Wang et al. (2022), an algorithm that combines top-down and bottom-up segmentation ideas to improve prediction accuracy by generating high-quality pseudo-labels. Specifically, a Pairwise Affinity (PA) predictor is trained first, and a grouping module is used to extract and rank segments from the predicted PA to generate pseudo-labels, which are fused with the ground truth to train the segmentation model.
3 METHODOLOGY
In this section, we first define the OWIS problem under both fully- and semi-supervised learning. Then the architecture of our TOIS and the proposed cross-task consistency loss are introduced in Sections 3.2 and 3.3, respectively. Finally, Sections 3.4 and 3.5 show how to optimize TOIS in the fully- and semi-supervised settings, respectively.
3.1 PROBLEM DEFINITION OF OWIS
Open-world instance segmentation (OWIS) aims to segment all object instances (things) of any class, including those that did not appear in the training phase. Technically, OWIS is the task of producing a set of binary masks, where each mask corresponds to a class-agnostic instance. A pixel value of 1 in a mask indicates that the pixel belongs to the object instance, while 0 indicates that it does not.
3.2 MODEL ARCHITECTURE
Our proposed TOIS framework consists of three branches to alleviate incomplete annotation, as shown in Figure 2. Basically, we follow the design of the one-stage Mask2Former Cheng et al. (2022). The objectness prediction branch estimates a weighting score for each mask by applying a sequential Transformer decoder and MLP. The mask prediction branch predicts a binary mask for each instance. It first generates N binary masks, with N ideally larger than the actual instance number Ki, which is the number of annotated object instances in a given image i. Each mask is multiplied by a weighting score with a value between 0 and 1, indicating whether the mask should be selected as an instance mask. This process generates masks in an end-to-end way, which avoids missing instances because of poor detection bounding boxes and meanwhile reduces the redundant per-proposal segmentation cost. We refer to Cheng et al. (2021; 2022) for more details.
The foreground prediction branch is a light-weight fully convolutional network that estimates the foreground regions belonging to any object instance. The more detailed design of the foreground prediction branch is given in the Appendix. This branch guides the training of the mask branch through our cross-task consistency loss proposed in Sec. 3.3. Once training is done, we discard this branch and only use the objectness and mask prediction branches at inference time. Therefore, we do not introduce any additional parameters or computational redundancy at inference, which benefits running efficiency.
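As a concrete illustration of how the objectness scores could gate the N query masks at inference, the following PyTorch-style sketch keeps only the masks whose score passes a threshold; the threshold value, tensor shapes, and function name are illustrative assumptions rather than the paper's exact post-processing.

```python
import torch

def select_instances(mask_logits, objectness, score_thresh=0.5):
    """Keep predicted masks whose objectness score passes a threshold (illustrative).

    mask_logits: (N, H, W) per-query mask logits from the mask prediction branch.
    objectness:  (N,) scores in [0, 1] from the objectness prediction branch.
    """
    keep = objectness > score_thresh              # queries that look like real instances
    masks = mask_logits[keep].sigmoid() > 0.5     # binarize the surviving masks
    return masks, objectness[keep]
```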
3.3 LEARNING WITH THE CROSS-TASK CONSISTENCY REGULARIZATION
A critical limitation of OWIS is that the annotations are never perfect, due to the difficulty of annotating class-agnostic object instances. To alleviate this issue, we propose a regularization that provides extra supervision to guide OWIS model training under incomplete annotations.
We construct a branch to predict the foreground regions that belong to any of the object instances. Formally, we create the foreground annotation G(x, y) as

G(x, y) = \begin{cases} 0, & \text{if } \sum_{i=1}^{K} g^{i}(x, y) = 0 \\ 1, & \text{otherwise}, \end{cases} \qquad (1)

where g^{i}(x, y) is one of the K annotated object instances for the current image, and the union of the g^{i} defines the foreground object regions. Here (x, y) denotes the coordinate of a pixel in the image. We use G(x, y) as the label to train the foreground prediction branch.
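A minimal sketch of Eq. 1 is given below, assuming the K annotated instance masks are stacked into a single binary tensor; the function name and shapes are illustrative.

```python
import torch

def foreground_label(instance_masks: torch.Tensor) -> torch.Tensor:
    """Eq. 1: G(x, y) is 1 wherever any annotated instance covers the pixel, else 0.

    instance_masks: (K, H, W) binary ground-truth masks g^i.
    """
    return (instance_masks.sum(dim=0) > 0).float()
```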
Our consistency loss encourages the model outputs to satisfy the relationship indicated in Eq. 1, which states that the foreground prediction should be the union of the instance predictions. To do so, we use the following equation as an estimate of the foreground from the instance predictions:

\hat{G}(x, y) = \Phi\left( \sum_{j=1}^{K} m_{j}(x, y) \right), \qquad (2)

where m_{j} denotes the confidence of pixels in the j-th predicted mask and \Phi represents the sigmoid function. Then, letting F be the foreground prediction from the foreground prediction branch, our cross-task consistency loss makes F and \hat{G}(x, y) consistent, which leads to the following loss function:
L_{c} = \mathrm{DICE}(\hat{G}, F) + \mathrm{BCE}(\hat{G}, F), \qquad (3)
where DICE and BCE denote the dice-coefficient loss Milletari et al. (2016) and binary cross-entropy loss, respectively.
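The consistency term can be sketched as follows, assuming the mask branch outputs per-pixel confidences m_j and the foreground branch outputs raw logits; the dice implementation, the epsilon values, and the choice to sum confidences before the sigmoid follow Eq. 2 but the exact reduction details are assumptions.

```python
import torch

def dice_loss(pred, target, eps=1.0):
    """Soft dice loss between two probability maps."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def soft_bce(pred, target, eps=1e-6):
    """Binary cross-entropy between two probability maps; both inputs may carry gradients."""
    return -(target * torch.log(pred + eps) + (1.0 - target) * torch.log(1.0 - pred + eps)).mean()

def cross_task_consistency_loss(mask_confidences, fg_logits):
    """Eqs. 2-3: make sigmoid(sum of per-mask confidences) agree with the foreground prediction.

    mask_confidences: (K, H, W) per-mask pixel confidences m_j.
    fg_logits:        (H, W) raw output of the foreground prediction branch (F before sigmoid).
    """
    g_hat = torch.sigmoid(mask_confidences.sum(dim=0))  # Eq. 2
    fg = torch.sigmoid(fg_logits)
    return dice_loss(g_hat, fg) + soft_bce(g_hat, fg)   # Eq. 3
```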
The consistency loss enjoys the following appealing properties. It is self-calibrated and independent of the incompleteness level of the labels. As shown in Figure 3, when an instance is mistakenly annotated as background but both the foreground prediction branch and the mask prediction branch correctly find it, the model is penalized by the mask loss and the foreground loss. The consistency loss, however, treats this prediction as correct. In this way, the consistency loss down-weights the adverse effects caused by the other, unreliable segmentation losses. The mitigation and the compensation factor synergize to relieve the overwhelming penalties on unlabeled instances.
3.4 FULLY-SUPERVISED LEARNING
The overall fully-supervised optimization of the proposed TOIS is carried out by minimizing the following joint loss L_{f}:

L_{f} = \alpha L_{m} + \beta L_{p} + \gamma L_{c} + \omega L_{o}, \qquad (4)
L_{m} = \mathrm{BCE}(m, g) + \mathrm{DICE}(m, g), \qquad (5)
L_{p} = \mathrm{BCE}(F, G) + \mathrm{DICE}(F, G), \qquad (6)
L_{o} = \mathrm{BCE}(s, v), \qquad (7)
where L_{m}, L_{p} and L_{o} denote the loss terms for mask prediction, foreground prediction, and objectness scoring, respectively. α, β, γ and ω are the weights of the corresponding losses. m and g represent the predicted masks and the corresponding ground truth, respectively. F and G are the foreground prediction result and the generated foreground ground truth, while the estimated objectness score is denoted by s. v is a set of binary values that indicate whether each mask is an instance. Before computing L_{m}, matching between the set of predicted masks and the ground truth is performed via the bipartite matching algorithm defined in Cheng et al. (2022).
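Putting the terms together, the joint objective of Eq. 4 is a plain weighted sum; the weight values below are placeholders, since the paper does not state them here.

```python
def fully_supervised_loss(L_m, L_p, L_c, L_o,
                          alpha=1.0, beta=1.0, gamma=1.0, omega=1.0):
    """Eq. 4: weighted sum of mask, foreground, consistency, and objectness losses."""
    return alpha * L_m + beta * L_p + gamma * L_c + omega * L_o
```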
3.5 EXTENSION TO SEMI-SUPERVISED LEARNING
Due to the ambiguity of the instance definition in OWIS, it is much harder for annotators to follow the annotation instructions, and this makes OWIS annotations expensive. It is therefore desirable to use unlabeled data to help train OWIS models. In this regard, our proposed cross-task consistency loss only requires the outputs of the two predictors to satisfy the relationship indicated in Eq. 1 and does not always need ground truth annotations. Thus, we can apply this loss to unlabeled data, which yields semi-supervised learning. Specifically, the easier-to-learn foreground prediction branch can be learned well from a few labeled images in the warm-up stage. The resulting foreground map then serves as a constraint to optimize the open-world mask predictions through our cross-task consistency loss when labels do not exist. In this way, our Semi-TOIS achieves a good trade-off between annotation cost and model accuracy.
Semi-supervised learning process. Given a labeled set D_l = \{(x_i, y_i)\}_{i=1}^{N_l} and an unlabeled set D_u = \{x_i\}_{i=1}^{N_u}, our goal is to train an OWIS model by leveraging both a large amount of unlabeled data and a smaller set of labeled data. Specifically, we initially use D_l to train TOIS as a warm-up stage, giving the model a good initialization. We then jointly train the OWIS model on both the labeled and the unlabeled data. For the labeled data, we employ the loss function defined in Eq. 4. For the unlabeled data, we apply only the cross-task consistency loss L_c.
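One possible shape of a post-warm-up training step is sketched below; the batch and output key names and the supervised_loss helper are assumptions standing in for the Eq. 4 terms, not the paper's actual code.

```python
def semi_supervised_step(model, labeled_batch, unlabeled_batch, optimizer):
    """Hypothetical joint step: Eq. 4 on labeled images, only L_c on unlabeled images."""
    out_l = model(labeled_batch["images"])
    loss = supervised_loss(out_l, labeled_batch["targets"])      # full Eq. 4 (assumed helper)

    out_u = model(unlabeled_batch["images"])
    loss = loss + cross_task_consistency_loss(out_u["mask_confidences"],
                                              out_u["fg_logits"])  # L_c only, no labels needed

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```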
4 EXPERIMENTS
For demonstrating the effectiveness of our proposed TOIS, we compared it with other fully-supervised methods through intra-dataset and cross-dataset evaluations. We also performed ablation studies in these two settings to show the effect of each component. Moreover, we apply the proposed cross-task consistency loss for semi-supervised learning and test our method on the UVO validation set.
4.1 IMPLEMENTATION DETAILS AND EVALUATION METRICS
Implementation details Detectron2 Wu et al. (2019) is used to implement the proposed TOIS framework. Multi-scale feature maps are extracted from a ResNet-50 He et al. (2016) or Swin Transformer Liu et al. (2021) model pre-trained on ImageNet Deng et al. (2009). Our transformer encoder-decoder design follows the same architecture as Mask2Former Cheng et al. (2022). The number of object queries is set to 100. Both the ResNet and Swin backbones use an initial learning rate of 0.0001 and a weight decay of 0.05. A simple data augmentation method, Cutout DeVries & Taylor (2017), is applied to the training data. All experiments were run on 8 NVIDIA V100 GPUs with 32GB of memory.
Pseudo-labeling for COCO train set Pseudo-labeling is a common way to handle incomplete annotations. To explore the compatibility of our method with pseudo-labeling, we employ a simple strategy to generate pseudo-labels for unannotated instances in the COCO train set Lin et al. (2014). Specifically, we follow a typical self-training framework, introducing a teacher model and a student model to generate pseudo-labels. The two models have the same architecture, as shown in Figure 2, but differ in their weights. The weights of the student model are optimized by standard back-propagation, while the weights of the teacher model are updated as an exponential moving average (EMA) of the student model. During training, image i is first fed into the teacher model to generate mask predictions. Predictions whose confidence is higher than a certain value are taken as pseudo-proposals. The state S_{ij} of a pseudo-proposal p_{ij} is determined according to Equation (8).
S_{ij} = \begin{cases} \text{True}, & \text{if } \max_{i} \varphi(p_{ij}, g_{i}) \leq \varepsilon, \\ \text{False}, & \text{otherwise}, \end{cases} \qquad (8)

in which g_{i} denotes any ground truth instance in image i, \varphi denotes the IOU computing function,
and ε is a threshold to further filter out unreliable pseudo-proposals. Finally, pseudo-proposals with state True are considered reliable pseudo-labels. Here, the confidence and IOU thresholds for selecting pseudo-labels are set to 0.8 and 0.2, respectively. Then, we jointly use the ground truth and the pseudo-labels to form the training annotations. If a region is identified as belonging to an instance in the pseudo-labels, it is considered a positive sample during training.
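The teacher-student pseudo-labeling described above might look roughly like the sketch below; the EMA momentum, the mask_iou helper, and the data layout are assumptions, while the 0.8 confidence and 0.2 IOU thresholds come from the text.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Update teacher weights as an exponential moving average of the student (illustrative momentum)."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(momentum).add_(s.data, alpha=1.0 - momentum)

def select_pseudo_labels(proposals, scores, gt_masks, conf_thresh=0.8, iou_thresh=0.2):
    """Eq. 8: keep confident teacher proposals that overlap no ground-truth instance."""
    kept = []
    for mask, score in zip(proposals, scores):
        if score < conf_thresh:
            continue                                   # not confident enough to be a pseudo-proposal
        ious = [mask_iou(mask, g) for g in gt_masks]   # mask_iou: assumed helper returning a float
        if not ious or max(ious) <= iou_thresh:
            kept.append(mask)                          # likely an unlabeled object, not a duplicate GT
    return kept
```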
Evaluation metrics The Mean Average Recall (AR) and Mean Average Precision (AP) Lin et al. (2014) are utilized to measure the performance of approaches in a class-agnostic way.
4.2 FULLY-SUPERVISED EXPERIMENTAL SETTING
Intra-dataset evaluation UVO is the largest open-world instance segmentation dataset. Its training and test images come from the same domain but do not overlap. Here, we train TOIS on the UVO-train subset and test it on the UVO-val subset. In addition, we split the COCO dataset into 20 seen (VOC) classes and 60 unseen (non-VOC) classes. We train a model only on the annotations of the 20 VOC classes and test it on the 60 non-VOC classes, evaluating its ability to discover novel objects.
Cross-dataset evaluation The open-world setting assumes that instances in the target domain can belong to novel classes. Therefore, it is essential for an OWIS method to handle the potential domain gap with good generalization ability, and cross-dataset evaluation, in which training and test data come from different domains, is necessary. Here, we first train the proposed TOIS model and the compared methods on the COCO-train subset and test them on the UVO-val dataset to evaluate their generalizability. We then extend the experiments to an autonomous driving scenario, training the models on the Cityscapes Cordts et al. (2016) dataset and evaluating them on Mapillary Neuhold et al. (2017). Cityscapes has 8 foreground classes, while Mapillary contains 35 foreground classes including vehicles, animals, trash cans, mailboxes, etc.
4.3 FULLY-SUPERVISED EXPERIMENTAL RESULTS
Intra-dataset evaluation The results are shown in Table 1. The single-stage approaches based on the mask classification framework perform better than the two-stage methods. Among them, our proposed TOIS achieves a significant performance improvement over the Mask2Former baseline: 4.75% in AP100 and 3.93% in AR100 when using the Swin-B backbone. For the VOC → non-VOC setting, the experimental results are shown in Table 2, which verifies that our proposed method improves performance for all instances, especially large ones.
Cross-dataset evaluation For the COCO→UVO task, according to Table 3, the proposed TOIS outperforms all previous methods, achieving a new state-of-the-art AR100 of 54.86%, which is 11.56% higher than the previous state-of-the-art method GGN Wang et al. (2022). We also applied the proposed techniques to another classic one-stage method, SOLO V2 Wang et al. (2020b). The experimental results in Table 4 show that this improves AR100 and AP100 by 3.11% and 2.79% compared to SOLO V2. For the Cityscapes → Mapillary task, the overall AP and AR of TOIS still surpass those of other state-of-the-art methods (Table 5), which demonstrates the
Table 4: Results of TOIS with SOLOV2 structure ( UVO-train→ UVO-val).
Table 5: Cross-dataset evaluation on autonomous driving scenes. Results of Cityscapes → Mapillary.
effectiveness of our proposed techniques. We show some of the COCO→UVO visualization results in Figure 4 to qualitatively demonstrate the superiority of our method. Please refer to the supplementary material for more qualitative examples.
4.4 ABLATION STUDY
We perform cross-dataset and intra-dataset ablation studies to analyze the effectiveness of each component of the proposed TOIS, including the foreground prediction branch and the cross-task consistency loss. We also try combinations of the pseudo-label generation strategy and our cross-task consistency loss to investigate their individual and synergistic effects. Using the Swin-B backbone, these models are trained on the COCO-train subset and the UVO-train subset, respectively. The metrics reported in Table 6 are computed on the UVO-val dataset.
Table 6: Ablation results of the proposed components by cross-dataset and intra-dataset evaluations. Foreground prediction (FP), Cross-task consistency (CTC) loss, Pseudo label (PL).
Component                      Train on COCO            Train on UVO
FP    CTC loss    PL          AP100(%)  AR100(%)       AP100(%)  AR100(%)
 -       -         -            28.65     51.54          35.12     51.39
 ✓       -         -            29.02     51.60          35.55     51.73
 -       -         ✓            30.09     52.97          32.94     51.64
 ✓       ✓         -            31.39     53.83          38.02     54.74
 ✓       -         ✓            30.17     52.98          33.35     50.90
 ✓       ✓         ✓            32.21     54.86          37.71     52.27
Effectiveness of the foreground prediction branch Table 6 shows that although a separate foreground prediction branch can guide the method to optimize towards discovering foreground pixels, by itself it only slightly boosts the performance.
Effectiveness of cross-task consistency loss The cross-task consistency loss has a positive effect on both the sparsely annotated (COCO) and the densely annotated (UVO) training datasets. The values of AP100 and AR100 increase significantly (2.74% ↑ and 2.49% ↑ on COCO, 2.90% ↑ and 3.35% ↑ on UVO) after applying the cross-task consistency loss together with the foreground prediction branch. This result outperforms the TOIS counterpart with only pseudo-labeling, showing our effectiveness. In addition, jointly utilizing our cross-task consistency loss and the pseudo-labeling strategy leads to performance improvements in both settings, which demonstrates the synergistic effect of the two approaches.
Effectiveness of pseudo-labeling Pseudo-labeling is not always necessary or helpful for every type of dataset. As shown in Table 6, the AP100 and AR100 of the COCO-trained model increase by 1.35% and 0.78%, respectively, after applying pseudo-label generation. However, pseudo-labeling causes a performance degradation (e.g., 2.18% ↓ in AP100) for a model trained on the UVO dataset. Compared with COCO, the UVO dataset is annotated more densely. We conjecture that the background annotations of UVO are more reliable than those of COCO, where carefully selected pseudo-labels are more likely to represent unlabeled objects. The generated pseudo-labels of UVO therefore contain more noise than those of COCO, and these noisy labels mislead the model training.
4.5 SEMI-SUPERVISED LEARNING EXPERIMENT
Experimental setting We divide the UVO-train dataset into a labeled subset D_l and an unlabeled subset D_u. The semi-supervised model Semi-TOIS is optimized as described in Section 3.5 on D_l ∪ D_u, while the fully-supervised counterpart Labelled-Only-TOIS is trained only on D_l. To ensure the comprehensiveness of the experiments, two data division settings are included: {D_l = 30%, D_u = 70%} and {D_l = 50%, D_u = 50%}. The backbone used here is Swin-B. We also implemented the classic Mean Teacher model and a simple pseudo-label method based on Mask2Former for comparison. Note that our semi-supervised setting is different from that of some previous work, such as MaskX R-CNN Hu et al. (2018) and ShapeMask Kuo et al. (2019), which assume that the training set C = A ∪ B, where examples from the categories in A have masks, while those in B have only bounding boxes.
Results and analysis As presented in Figure 5, the Semi-TOIS50 model trained on UVO with 50% annotated data outperforms the Semi-TOIS30 model trained with 30% labeled images, although the gap between them is small. In addition, Semi-TOIS30 improves over Labelled-Only-TOIS30 by 3.36% and 5.33% in AP100 and AR100, respectively. Compared to Labelled-Only-TOIS50, Semi-TOIS50 still achieves significant advantages (2.36% in AP100 and 6.12% in AR100). These results show that the cross-task consistency loss is able to extract information from unlabeled data and facilitates model optimization in the semi-supervised setting. It is notable that the results of Semi-TOIS30 are even better than those of Labelled-Only-TOIS50. This illustrates that the information mined by the cross-task consistency loss from the remaining 70% unlabeled data is richer than that contained in the additional 20% of fully-labeled data. Therefore, our algorithm can achieve better performance with fewer
annotations. This characteristic is promising for solving the OWIS problem. In addition, we also compared Semi-TOIS with a classic semi-supervised method and a recent end-to-end segmentation method. The results in Tables 7 and 8 show our advantages over the compared methods.
5 CONCLUSION
This paper proposes the first transformer-based framework (TOIS) for the open-world instance segmentation task. Apart from predicting the instance masks and objectness scores, our framework introduces a foreground prediction branch to segment the regions belonging to any instance. Utilizing the outputs of this branch, we propose a novel cross-task consistency loss that enforces the foreground prediction to be consistent with the prediction of the instance masks. We experimentally demonstrate that this mechanism alleviates the problem of incomplete annotation, which is a critical issue for open-world segmentation. Our extensive experiments demonstrate that TOIS outperforms state-of-the-art methods by a large margin on typical datasets. We further demonstrate that our cross-task consistency loss can utilize unlabeled images to obtain performance gains for semi-supervised instance segmentation. This is an important step toward reducing laborious and expensive human annotation.
A APPENDIX
Please refer to the additional supplementary material. | 1. What is the main contribution of the paper regarding instance segmentation?
2. What are the strengths of the proposed approach, particularly in addressing the incomplete annotation problem?
3. What are the weaknesses of the paper, especially regarding its reliance on existing methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work leverages Mask2Former for single-stage instance segmentation and proposes a cross-task consistency regularization to achieve class-agnostic segmentation of all objects in an image. Specifically, the proposed method uses a foreground prediction branch, similar in spirit to saliency prediction, to help the model learn the objects in an image, and then encourages the union of the predicted object masks to resemble the foreground map. The proposed approach effectively alleviates the incomplete annotation problem in common datasets and is shown to outperform SOTA models.
Strengths And Weaknesses
Strength:
This paper is easy to follow since the core idea is presented clearly.
The method can help address the missing annotation problems in the public benchmark datasets and further boost the accuracy of SOTA.
The method can enhance the model performance without introducing further model parameters.
The method is compatible with utilizing unlabeled images for semi-supervised learning.
Weakness:
Since the single-stage instance segmentation comes from Mask2Former, the core contribution seems to be only the cross-task consistency regularization loss term.
There are a few typos, replicated citations, and instances of redundant spacing in the text.
Clarity, Quality, Novelty And Reproducibility
The paper is well written, and along with the figures, it is easy to follow overall.
The code will be released if the paper is accepted, as stated in the paper, and reproducing the proposed consistency loss without it seems doable.
ICLR | Title
Single-Stage Open-world Instance Segmentation with Cross-task Consistency Regularization
Abstract
Open-World Instance Segmentation (OWIS) is an emerging research topic that aims to segment class-agnostic object instances from images. The mainstream approaches use a two-stage segmentation framework, which first locates the candidate object bounding boxes and then performs instance segmentation. In this work, we instead promote a single-stage transformer-based framework for OWIS. We argue that the end-to-end training process in the single-stage framework can be more convenient for directly regularizing the localization of class-agnostic object pixels. Based on the transformer-based instance segmentation framework, we propose a regularization model to predict foreground pixels and use its relation to instance segmentation to construct a cross-task consistency loss. We show that such a consistency loss could alleviate the problem of incomplete instance annotation – a common problem in the existing OWIS datasets. We also show that the proposed loss lends itself to an effective solution to semi-supervised OWIS that could be considered an extreme case that all object annotations are absent for some images. Our extensive experiments demonstrate that the proposed method achieves impressive results in both fully-supervised and semi-supervised settings. Compared to SOTA methods, the proposed method significantly improves the AP100 score by 4.75% in UVO dataset →UVO dataset setting and 4.05% in COCO dataset →UVO dataset setting. In the case of semi-supervised learning, our model learned with only 30% labeled data, even outperforms its fully-supervised counterpart with 50% labeled data. The code will be released soon.
1 INTRODUCTION
Traditional instance segmentation Lin et al. (2014); Cordts et al. (2016) methods often assume that objects in images can be categorized into a finite set of predefined classes (i.e., closed-world). Such an assumption, however, can be easily violated in many real-world applications, where models will encounter many new object classes that never appeared in the training data. Therefore, researchers recently attempted to tackle the problem of Open-World Instance Segmentation (OWIS) Wang et al. (2021), which targets class-agnostic segmentation of all objects in the image.
Prior to this paper, most existing methods for OWIS are of two-stage Wang et al. (2022); Saito et al. (2021), which detect bounding boxes of objects and then segment them. Despite their promising performances, such a paradigm cannot handle and recover if object bounding boxes are not detected. In contrast, a transformer-based approach called Mask2Former Cheng et al. (2022) has recently been introduced, yet only for closed-world instance segmentation. Based on the Mask2Former, we propose a Transformer-based Open-world Instance Segmentation method named TOIS.
Note that our work is not just a straightforward adaptation of Mask2Former from close-world to open-world. This is because unlike closed-world segmentation, where the object categories can be clearly defined before annotation, the open-world scenario makes it challenging for annotators to label all instances completely or ensure annotation consistency across different images because they cannot have a well-defined finite set of object categories. As shown in Figure 1(a), annotators miss some instances. It still remains challenging that how to handle such incomplete annotations (i.e. some instances missed).
Recent work LDET Saito et al. (2021) addresses this problem by generating synthetic data with a plain background, but based on a decoupled training strategy that can only be used in the two-stage method while our method is of single-stage. Another work called GGN Wang et al. (2022) handles such incomplete instance-level annotation issue by training a pairwise affinity predictor for generating pseudo labels. But training such an additional predictor is complicated and time-consuming.
In contrast, our proposed TOIS method is end-to-end and simpler. We address this incomplete annotation issue via a novel regularization module, which is simple yet effective. Specifically, it is convenient to concurrently predict not only (1) instance masks but also a (2) foreground map. Ideally, as shown in Figure 1(b), the foreground region should be consistent with the union of all instance masks. To penalize their inconsistency, we devise a cross-task consistency loss, which can down-weight the adverse effects caused by incomplete annotation. This is because when an instance is missed in annotation, as long as it is captured by both our predictions of instance masks and foreground map, the consistency loss would be low and hence encourage such prediction. Experiments in Figure 1(g) show that such consistency loss is effective even when annotations miss many instances. And as in Figure 1(c), novel objects which are unannotated in training set have been segmented successfully by our method.
So far, like most existing methods, we focus on the fully-supervised OWIS. In this paper, we further extend OWIS to the semi-supervised setting, where some training images do not have any annotations at all. This is of great interest because annotating segmentation map is very costly. Notably, our proposed regularization module can also benefit semi-supervised OWIS – consider an unlabeled image as an extreme case of incomplete annotation where all of the instance annotations are missed. Specifically, we perform semi-supervised OWIS by first warming up the network on the labeled set and then continuing training it with the cross-task consistency loss on the mixture of labeled and unlabeled images.
Contributions. In a nutshell, our main contributions could be summarized as:
1. Building upon a recently-proposed close-world segmentation method of Mask2Former, we propose a Transformer-based Open-world Instance Segmentation (TOIS) method.
2. We propose a novel cross-task consistency loss that mitigates the issue of incomplete mask annotations, which is a critical issue for open-world segmentation in particular.
3. We further extend the proposed method into a semi-supervised OWIS model, which effectively makes use of the unlabeled images to help the OWIS model training .
4. Our extensive experiments demonstrate that the proposed method reaches the leading OWIS performance in the fully-supervised learning. (Figure 1(d-f)), and that our semi-supervised extension can achieve remarkable performance with a much smaller amount of labeled data.
2 RELATED WORK
Closed-world instance segmentation (CWIS) He et al. (2017); Chen et al. (2020); Dai et al. (2016); Bolya et al. (2019); Wang et al. (2020a) requires the approaches to assign a class label and instance ID to every pixel. Two-stage CWIS approaches, such as MaskRCNN, always include a bounding box estimation branch and a FCN-based mask segmentation branch, working in a ’detectthen-segment’ way. To improve efficiency, one-stage methods such as CenterMask Dai et al. (2016), YOLACT Bolya et al. (2019) and BlendMask Chen et al. (2020) have been proposed, which remove the proposal generation and feature grouping process. To further free the CWIS from the local box detection, Wang et.al Wang et al. (2020a) proposed SOLO and obtained on par results to the above methods. In recent years, the methods Fang et al. (2021); Dong et al. (2021), following DETR Carion et al. (2020), consider the instance segmentation task as an ensemble prediction problem. In addition, Cheng et al. proposed an universal segmentation framework MaskFormer Cheng et al. (2021) and its upgrade version Mask2Former Cheng et al. (2022), which even outperforms the state-of-the-art architectures specifically designed for the CWIS task.
Notably, two-stage method CenterMask preserves pixel alignment and separates the object simultaneously by integrating the local and global branch. Although introducing the global information in this way helps improve the mask quality in CWIS, it cannot handle the open-world task very well, because CenterMask multiplies the local shape and the cropped saliency map to form the final mask for each instance. There is no separate loss for the local shape and global saliency. When such a method faces the incomplete annotations in OWIS tasks, the generated mask predictions corresponding to the unlabeled instances would still be punished during training, making it difficult to discover novel objects at inference. The efficient way to jointly take advantages of global and local information in OWIS tasks deserves to be explored.
Open-world instance segmentation OWIS task Wang et al. (2021) here focuses on the following aspects: (1) All instances (without stuff) have to be segmented; (2) Class-agnostic pixel-level results should be predicted with only instance ID and incremental learning ability is unnecessary. Several OWIS works have recently been developed. Yu et al. Du et al. (2021) proposed a twostage segmentation algorithm, which decoupled the segmentation and detection modules during training and testing. This algorithm achieves competitive results on the UVO dataset thanks to the abundant training data and the introduction of effective modules such as cascade RPN Vu et al. (2019), SimOTA Ge et al. (2021), etc. Another work named LDET Saito et al. (2021) attempts to solve the instance-level incomplete annotation problem. Specifically, LDET first generates the background of the synthesized image by taking a small piece of background in the original image and enlarging it to the same size as the original image. The instance is then matted to the foreground of the synthesized image. The synthesized data is used only to train the mask prediction branch, and the rest of the branches are still trained with the original data. Meanwhile, Wang et al. proposed GGN Wang et al. (2022), an algorithm that combines top-down and bottom-up segmentation ideas to improve prediction accuracy by generating high-quality pseudo-labels. Specifically, a Pairwise Affinity (PA) predictor is trained first, and a grouping module is used to extract and rank segments from predicted PA to generate pseudo-labels, which would be fused with groundtruth to train the segmentation model.
3 METHODOLOGY
In this section, we first define the OWIS problem with both fully and semi-supervised learning. Then the architecture of our TOIS and the proposed cross-task consistency loss are introduced in Section
3.3 and 3.4, respectively. Finally, Section 3.4 and 3.5 show how to optimize the TOIS in fully and semi-supervised way, respectively.
3.1 PROBLEM DEFINITION OF OWIS
The open-world instance segmentation (OWIS) aims to segment all the object instances (things) of any class including those that did not appear in the training phase. Technically, OWIS is a task to produce a set of binary masks, where each mask corresponds to a class-agnostic instance. The pixel value of 1 in the mask indicates a part of an object instance while 0 indicates not.
3.2 MODEL ARCHITECTURE
Our proposed TOIS framework consists of three branches to alleviate the incomplete annotating, as shown in Figure 2. Basically, we follow the design of one-stage Mask2Former Cheng et al. (2022). The objectness prediction branch estimates the weighting score for each mask by applying a sequential Transformer decoder and MLP. The mask prediction branch predicts the binary mask for each instance. It first generates N binary masks with N ideally larger than the actual instance number Ki, which is the number of annotated object instances in a given image i. Each mask is multiplied by a weighting score with a value between 0 and 1, indicating if a mask should be selected as an instance mask. This process generates the mask in an end to end way, which avoids to miss the instance because of poor detection bounding boxes and meanwhile reduces the redundant segmentation cost for each proposal. We refer to Cheng et al. (2021; 2022) for more details.
The foreground prediction branch is a light-weight fully convolutional network to estimate the foreground regions that belong to any object instance.The more detailed design of the foreground prediction branch is in the Appendix. This guides the training of the mask branch through our cross-task consistency loss proposed in the following Sec. 3.3. Once training is done, we discard this branch and only use the objectness and mask prediction branch at inference time. Therefore, we would not introduce any additional parameter or computational redundancy, which benefits the running efficiency.
3.3 LEARNING WITH THE CROSS-TASK CONSISTENCY REGULARIZATION
A critical limitation of the OWIS is the never-perfect annotations due to the difficulties in annotating class-agnostic object instances. Towards alleviating this issue, we propose a regularization to provide extra supervision to guide the OWIS model training under incomplete annotations.
We construct a branch to predict the foreground regions that belong to any of the object instances. Formally we create the the foreground annotation G(x, y) calculated by
G(x, y) =
{ 0, if ∑K i=1 g i(x, y) == 0
1, otherwise, (1)
where gi(x, y) is one of the K annotated object instances for the current image and the union of gi defines the foreground object regions. Here (x, y) denotes a coordinate of a pixel in the an image. We use G(x, y) as labels to train the foreground prediction branch.
Our consistency loss encourages the model outputs to have the relationship indicated in Eq 1, which states that the the foreground prediction should be the union of instance predictions. To do so, we use the following equation as an estimate of the foreground from the instance prediction:
Ĝ(x, y) = Φ K∑ j=1 mj(x, y) , (2) where mj means the confidence of pixels in j-th predicted mask, and Φ represents the Sigmoid function. Then, let the foreground prediction from the foreground prediction branch be F , our cross-task consistency loss is to make F and Ĝ(x, y) consistent, which finally leads to the following loss function.
Lc = DICE(Ĝ,F ) + BCE(Ĝ,F ), (3)
where DICE and BCE denote the dice-coefficient loss Milletari et al. (2016) and binary cross-entropy loss, respectively.
Consistency loss enjoys the following appealing properties. It is self-calibrated and independent with the incompleteness level of labels. As shown in Figure 3, for a instance mistakenly annotated as background, but the foreground prediction branch and mask prediction branch both correctly find it, the model would be punished through mask loss and foreground loss. However, the consistency loss thinks this prediction is correct. In this way, consistency loss down-weights the adverse effects caused by other unreliable segmentation losses. The mitigation and the compensation factor synergize to relieve the overwhelming punishments on unlabeled instances.
3.4 FULLY-SUPERVISED LEARNING
The overall fully-supervised optimization of the proposed TOIS is carried out by minimizing the following joint loss formulation Lf ,
Lf = αLm + βLp + γLc + ωLo, (4) where Lm = BCE(m,g) + DICE(m,g), (5)
Lp = BCE(F,G) + DICE(F,G), (6) Lo = BCE(s,v), (7)
where Lm, Lp and Lo denote the loss terms for mask prediction, foreground prediction, and objectness scoring, respectively. α, β, γ and ω are the weights of the corresponding losses. m and g represent the predicted masks and corresponding groundtruth, respectively. F and G is the foreground prediction result and the generated foreground groundtruth, while the estimated objectness score is denoted with s. v is a set of binary values that indicate whether each mask is an instance. Before computing the Lm, matching between the set of predicted masks and groundtruth has been done via the bipartite matching algorithm defined in Cheng et al. (2022).
3.5 EXTENSION TO SEMI-SUPERVISED LEARNING
Due to the ambiguity of the instance definition in OWIS, it is much harder for the annotators to follow the annotation instruction, and this could make the annotations for OWIS expensive. It is desirable if we can use unlabeled data to help train OWIS models. In this regard, our proposed cross-task consistency loss only requires the outputs of both predictors to have a consistent relationship indicated in Eq1, and does not always need ground truth annotations. Thus, we apply this loss to unlabeled data, which becomes semi-supervised learning. Specifically, the easier-to-learn foreground prediction branch is able to learn well through a few labeled images in the warm-up stage. Then the resulted foreground map can serve as a constraint to optimize the open-world mask predictions with the help of our cross-task consistency loss, when the labels do not exist. In this way, our Semi-TOIS achieves a good trade-off between the annotation cost and model accuracy.
Semi-supervised learning process. Given a labeled set Dl = {(xi, yi)}Nli=1 and an unlabeled set Du = {xi}Nui=1, our goal is to train an OWIS model by leveraging both a large amount of unlabeled data and a smaller set of labeled data. Specifically, we initially use Dl to train the TOIS as a warm-up stage, giving a good initialization for the model. We then jointly train the OWIS model on the both labeled and unlabeled data. For the labeled data, we employ the loss function defined in Eq 4. For the unlabeled data, we apply only the cross-task consistency loss Lc.
4 EXPERIMENTS
For demonstrating the effectiveness of our proposed TOIS, we compared it with other fully-supervised methods through intra-dataset and cross-dataset evaluations. We also performed ablation studies in these two settings to show the effect of each component. Moreover, we apply the proposed cross-task consistency loss for semi-supervised learning and test our method on the UVO validation set.
4.1 IMPLEMENTATION DETAILS AND EVALUATION METRICS
Implementation details Detectron2 Wu et al. (2019) is used to implement the proposed TOIS framework, multi-scale feature maps are extracted from the ResNet-50 He et al. (2016) or Swin Transformer Liu et al. (2021) model pre-trained on ImageNet Deng et al. (2009). Our transformer encoder-decoder design follows the same architecture as in Mask2Former Cheng et al. (2022). The number of object queries M is set to 100. Both the ResNet and Swin backbones use an initial learning rate of 0.0001 and a weight decay of 0.05. A simple data augmentation method, Cutout DeVries & Taylor (2017), is applied to the training data. All the experiments have been done on 8 NVIDIA V100 GPU cards with 32G memory.
Pseudo-labeling for COCO train set Pseudo-labeling is a common way to handle incomplete annotations. To explore the compatibility of our method and the pseudo-labeling operation, we employ a simple strategy to generate pseudo-labels for unannotated instances in the COCO train set Lin et al. (2014) in our experiments. Specifically, we follow a typical self-training framework, introducing the teacher model and student model framework to generate pseudo-labels. These two models have the same architecture, as shown in Figure 2, but are different in model weights. The weights of the student model are optimized by the common back-propagation, while the weight of the teacher model is updated by computing the exponential moving averages (EMA) of the student model. During training, the image i is first fed into the teacher model to generate some mask predictions. The prediction whose confidence is higher than a certain value would be taken as a pseudo-proposal. The state Sij of the pseudo-proposal pij is determined according to Equation (8).
Sij = { True, if argmax(φ(pij , gi)) ⩽ ε, False, otherwise,
(8)
in which gi means any ground truth instance in the image i. φ denotes the IOU calculating function,
and ε is a threshold to further filter the unreliable pseudo-proposals. Finally, pseudo-proposals with states True would be considered as reliable pseudo-labels. Here, the confidence and IOU threshold ε for selecting pseudo-labels are set to 0.8 and 0.2, respectively. Then, we jointly use the ground truth and the pseudo-labels to form the training data annotations. If a region is identified as belonging to an instance in the pseudo-label, it will be considered as a positive sample during training.
Evaluation metrics The Mean Average Recall (AR) and Mean Average Precision (AP) Lin et al. (2014) are utilized to measure the performance of approaches in a class-agnostic way.
4.2 FULLY-SUPERVISED EXPERIMENTAL SETTING
Intra-dataset evaluation UVO is the largest open-world instance segmentation dataset. Its training and test images are from the same domain, while they do not have any overlap. Here, we perform the learning process of TOIS on the UVO-train subset and conduct the test experiments on the UVO-val subset. Besides, we split the COCO dataset into 20 seen (VOC) classes and 60 unseen (none-VOC) classes. We train a model only on the annotation of 20 VOC classes
and test it on the 60 none-VOC class, evaluating its ability of discovering novel objects.
Cross-dataset evaluation Open-world setting assumes that the instance can be novel classes in the target domain. Therefore, it is essential for the OWIS method to handle the potential domain gap with excellent generalization ability. Cross-dataset evaluation, in which training and test data come from different domains, is necessary to be conducted. Here, we first train the proposed TOIS model and compare methods on the COCO-train subset, while testing them on the UVO-val dataset to evaluate their generalizability. Then we extend the experiments to an autonomous driving scenario, training the models on the Cityscapes Cordts et al. (2016) dataset and evaluating them on the Mapillary Neuhold et al. (2017). Cityscapes have 8 foreground classes, while Mapillary contains 35 foreground classes including vehicles, animals, trash can, mailbox, etc.
4.3 FULLY-SUPERVISED EXPERIMENTAL RESULTS
Intra-dataset evaluation The results are illustrated in Table 1. The single-stage approaches based on the mask classification framework perform better than other two-stage methods. Among them, our proposed TOIS achieves a significant performance improvement over the Mask2Former baseline, which is 4.75% in AP100 and 3.93% in AR100 when using the Swin-B backbone. For VOC→none-VOC setting, the ex-
perimental results are shown in Table 2, which verified that our proposed method can improve the performance for all instances, especially large ones.
Cross-dataset evaluation For the COCO→UVO task, according to Table 3, it is clear that the proposed TOIS outperforms all previous methods, achieving a new state-of-the-art AR100 at 54.86% which is 11.56% higher than previous state-of-the-art method GGN Wang et al. (2022). We also applied the proposed techniques to another classic one-stage method SOLO V2 Wang et al. (2020b). The experimental results in Table 4 show that it improves AR100 and AP100 by 3.11% and 2.79% compared to SOLO V2. For the Cityscapes→ Mapillary task, the overall AP and AR of TOIS still surpass the performance of other state-of-the-art methods ( in Table 5 ), which demonstrates the
Table 4: Results of TOIS with SOLOV2 structure ( UVO-train→ UVO-val).
Table 5: Cross-dataset evaluation on autonomous driving scenes. Results of Cityscapes → Mapillary.
effectiveness of our proposed techniques. We show some of the COCO→UVO visualization results in Figure 4 to qualitatively demonstrate the superiority of our method. Please refer to the supplementary material for more qualitative examples.
4.4 ABLATION STUDY
We perform cross-dataset and intra-dataset ablation studies to analyze the effectiveness of each component in the proposed TOIS, including the foreground prediction branch and the cross-task consistency loss. We also try combinations of the pseudo-label generation strategy and our cross-task consistency loss to investigate the individual and synergetic effects of them. Using the SwinB backbone, these models are trained on the COCO-train subset and the UVO-train subset, respectively. The metrics reported in Table 6 are tested on the UVO-val dataset.
Table 6: Ablation results of the proposed components by cross-dataset and intra-dataset evaluations. Foreground prediction (FP), Cross-task consistency (CTC) loss, Pseudo label (PL).
Component Train on COCO Train on UVO FP CTC loss PL AP100(%) AR100(%) AP100(%) AR100(%) 28.65 51.54 35.12 51.39 ✓ 29.02 51.60 35.55 51.73 ✓ 30.09 52.97 32.94 51.64 ✓ ✓ 31.39 53.83 38.02 54.74 ✓ ✓ 30.17 52.98 33.35 50.90 ✓ ✓ ✓ 32.21 54.86 37.71 52.27
Effectiveness foreground prediction branch Table 6 shows that although a separate foreground prediction branch can guide the method to optimize towards the direction of discovering foreground pixels, it only slightly boosts the performance.
Effectiveness of cross-task consistency loss Cross-task consistency loss has a positive effect on both sparse annotated (COCO) and dense annotated (UVO) training dataset. The values of AP100 and AR100 increase significantly ( 2.74% ↑ and 2.49% ↑ on COCO while 2.90% ↑ and 3.35%↑ on UVO) after applying the crosstask consistency loss as well as the foreground prediction branches together. This result outperforms the TOIS counterpart with only pseudo-labeling, showing our effectiveness. In addition, jointly utilizing our cross-task consistency loss as well as the pseudo-labeling strategy leads to performance improvements on two settings, which demonstrates the synergistic effect of both approaches.
Effectiveness of pseudo-labeling Pseudo-labeling is not always necessary and powerful for any types of datasets. As shown in Table 6, the AP100 and AR100 of the COCO trained model increase by 1.35% and 0.78%, respectively, after applying the pseudo-label generation. However, pseudolabeling causes a performance degradation (e.g. 2.18%↓ in AP100) to a model trained in the UVO dataset. Compared with COCO, the UVO dataset is annotated more densely. We conjecture that the
background annotations of UVO are more reliable than those of COCO, where carefully selected pseudo-labels are more likely to represent unlabeled objects. The generated pseudo-labels of UVO contain higher noises than those of COCO. These additional noisy labels mislead the model training.
4.5 SEMI-SUPERVISED LEARNING EXPERIMENT
Experimental setting We have divided the UVO-train dataset into the labeled subset Dl and the unlabeled subset Du. Semi-supervised model Semi-TOIS is optimized as described in Section 3.5 on Dl ∪DU , while the fully-supervised method Labelled-Only-TOIS is trained merely on the DL. To ensure the comprehensiveness of the experiments, two different data division settings are included in our experiments:{DL=30%, Du=70%} and {DL=50%, Du=50%}. The backbone applied here is Swin-B. We also implemented the classic Mean teacher model and a simple pseudo-label method based on the Mask2Former to perform comparison. Note that our semi-supervised setting is different from that in some previous work, like in MaskXRCNNHu et al. (2018) and ShapemaskKuo et al. (2019). They assume that the training set C = A ∪B, where examples from the categories in A have masks, while those in B have only bounding boxes.
Results and analysis As presented in Figure 5, the Semi-TOIS50 model trained on UVO with 50% annotated data outperforms the Semi-TOIS30 model trained with 30% labeled images, although the gap between them is small. In addition, Semi-TOIS30 improves over Labelled-Only-TOIS30 by 3.36% and 5.33% in AP100 and AR100, respectively. Compared to Labelled-Only-TOIS50, Semi-TOIS50 still achieves significant advantages (2.36% in AP100 and 6.12% in AR100). These results show that the cross-task consistency loss is able to extract information from unlabeled data and facilitates model optimization in the semi-supervised setting. Notably, the results of Semi-TOIS30 are even better than those of Labelled-Only-TOIS50. This indicates that the information extracted by the cross-task consistency loss from the remaining 70% unlabeled data is richer than that contained in the additional 20% of fully-labeled data. Therefore, our algorithm can achieve better performance with fewer
annotations. This characteristic is promising for solving the OWIS problem. In addition, we also compare Semi-TOIS with a classic semi-supervised method and a recent end-to-end segmentation method. The results in Tables 7 and 8 show our advantages over the compared methods.
5 CONCLUSION
This paper proposes the first transformer-based framework (TOIS) for the open-world instance segmentation task. Apart from predicting the instance mask and objectness score, our framework introduces a foreground prediction branch to segment the regions belonging to any instance. Utilizing the outputs of this branch, we propose a novel cross-task consistency loss to enforce the foreground prediction to be consistent with the prediction of the instance masks. We experimentally demonstrate that this mechanism alleviates the problem of incomplete annotation, which is a critical issue for open-world segmentation. Our extensive experiments demonstrate that TOIS outperforms state-of-the-art methods by a large margin on typical datasets. We further demonstrate that our cross-task consistency loss can utilize unlabeled images to obtain performance gains for semi-supervised instance segmentation. This is an important step toward reducing laborious and expensive human annotation.
A APPENDIX
Please refer to the additional supplementary material. | 1. What is the main contribution of the paper in open-world instance segmentation?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and comparisons with other works?
3. Do you have any concerns about the effectiveness of the cross-task consistency loss in handling noisy annotations and separating neighboring occlusion objects?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any missing comparisons or discussions with existing semi-supervised instance segmentation methods and two-stage works on closed-world instance segmentation? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an open-world instance segmentation framework with three main components: 1) it proposes an auxiliary task of predicting the total foreground on top of the existing query-based Mask2Former model; 2) it includes a cross-task consistency loss and improves results by correcting outputs between the two tasks; 3) the paper also designs a way of adding pseudo-labels on COCO, which gains some improvement in open-world settings, including the semi-supervised setting.
Strengths And Weaknesses
Pros:
The paper proposes a single-stage open-world Instance segmentation framework based on Mask2Former.
Adding an auxiliary task and a consistency loss for self-supervised learning, which improves a lot on both UVO-to-UVO and COCO-to-UVO settings.
Experiments demonstrate the semi-supervised framework can achieve remarkable performance with much smaller amount of labeled data.
Cons:
Although the paper proposes a new constraint called the "cross-task consistency loss" for the OWIS problem, it lacks sufficient validation, either theoretical or experimental, of how the cross-task consistency loss provides an error-correcting mechanism for handling noisy annotations. How does the "cross-task consistency loss" help to handle and separate neighboring occluded objects?
The technical novelty of the paper is weak and incremental. Being the first to adapt Mask2Former to OWIS is an incremental idea. Also, it should be called query-based or transformer-based, rather than one-stage. Can the authors explain why the auxiliary task of predicting the total foreground mask helps novel object detection?
Missing comparison/discussion with existing semi-supervised instance segmentation methods, such as [a, b].
[a] Learning to segment every thing. CVPR, 2018.
[b] Shapemask: Learning to segment novel objects by refining shape priors. ICCV, 2019.
Missing two-stage works on Closed-world instance segmentation, such as [c, d, e].
[c] PointRend: Image Segmentation as Rendering. CVPR, 2019.
[d] Deep occlusion-aware instance segmentation with overlapping bilayers. CVPR, 2021.
[e] Boundary-preserving Mask R-CNN. ECCV, 2020.
Clarity, Quality, Novelty And Reproducibility
Good paper structure. Figure 5 is blurry. |
ICLR | Title
Single-Stage Open-world Instance Segmentation with Cross-task Consistency Regularization
Abstract
Open-World Instance Segmentation (OWIS) is an emerging research topic that aims to segment class-agnostic object instances from images. The mainstream approaches use a two-stage segmentation framework, which first locates the candidate object bounding boxes and then performs instance segmentation. In this work, we instead promote a single-stage transformer-based framework for OWIS. We argue that the end-to-end training process in the single-stage framework can be more convenient for directly regularizing the localization of class-agnostic object pixels. Based on the transformer-based instance segmentation framework, we propose a regularization model to predict foreground pixels and use its relation to instance segmentation to construct a cross-task consistency loss. We show that such a consistency loss could alleviate the problem of incomplete instance annotation – a common problem in the existing OWIS datasets. We also show that the proposed loss lends itself to an effective solution to semi-supervised OWIS that could be considered an extreme case that all object annotations are absent for some images. Our extensive experiments demonstrate that the proposed method achieves impressive results in both fully-supervised and semi-supervised settings. Compared to SOTA methods, the proposed method significantly improves the AP100 score by 4.75% in UVO dataset →UVO dataset setting and 4.05% in COCO dataset →UVO dataset setting. In the case of semi-supervised learning, our model learned with only 30% labeled data, even outperforms its fully-supervised counterpart with 50% labeled data. The code will be released soon.
1 INTRODUCTION
Traditional instance segmentation Lin et al. (2014); Cordts et al. (2016) methods often assume that objects in images can be categorized into a finite set of predefined classes (i.e., closed-world). Such an assumption, however, can be easily violated in many real-world applications, where models will encounter many new object classes that never appeared in the training data. Therefore, researchers recently attempted to tackle the problem of Open-World Instance Segmentation (OWIS) Wang et al. (2021), which targets class-agnostic segmentation of all objects in the image.
Prior to this paper, most existing methods for OWIS are of two-stage Wang et al. (2022); Saito et al. (2021), which detect bounding boxes of objects and then segment them. Despite their promising performances, such a paradigm cannot handle and recover if object bounding boxes are not detected. In contrast, a transformer-based approach called Mask2Former Cheng et al. (2022) has recently been introduced, yet only for closed-world instance segmentation. Based on the Mask2Former, we propose a Transformer-based Open-world Instance Segmentation method named TOIS.
Note that our work is not just a straightforward adaptation of Mask2Former from the closed-world to the open-world setting. Unlike closed-world segmentation, where the object categories can be clearly defined before annotation, the open-world scenario makes it challenging for annotators to label all instances completely or to ensure annotation consistency across images, because there is no well-defined finite set of object categories. As shown in Figure 1(a), annotators miss some instances. How to handle such incomplete annotations (i.e., some instances missed) remains challenging.
Recent work LDET Saito et al. (2021) addresses this problem by generating synthetic data with a plain background, but based on a decoupled training strategy that can only be used in the two-stage method while our method is of single-stage. Another work called GGN Wang et al. (2022) handles such incomplete instance-level annotation issue by training a pairwise affinity predictor for generating pseudo labels. But training such an additional predictor is complicated and time-consuming.
In contrast, our proposed TOIS method is end-to-end and simpler. We address this incomplete annotation issue via a novel regularization module, which is simple yet effective. Specifically, it is convenient to concurrently predict not only (1) instance masks but also a (2) foreground map. Ideally, as shown in Figure 1(b), the foreground region should be consistent with the union of all instance masks. To penalize their inconsistency, we devise a cross-task consistency loss, which can down-weight the adverse effects caused by incomplete annotation. This is because when an instance is missed in annotation, as long as it is captured by both our predictions of instance masks and foreground map, the consistency loss would be low and hence encourage such prediction. Experiments in Figure 1(g) show that such consistency loss is effective even when annotations miss many instances. And as in Figure 1(c), novel objects which are unannotated in training set have been segmented successfully by our method.
So far, like most existing methods, we focus on the fully-supervised OWIS. In this paper, we further extend OWIS to the semi-supervised setting, where some training images do not have any annotations at all. This is of great interest because annotating segmentation map is very costly. Notably, our proposed regularization module can also benefit semi-supervised OWIS – consider an unlabeled image as an extreme case of incomplete annotation where all of the instance annotations are missed. Specifically, we perform semi-supervised OWIS by first warming up the network on the labeled set and then continuing training it with the cross-task consistency loss on the mixture of labeled and unlabeled images.
Contributions. In a nutshell, our main contributions could be summarized as:
1. Building upon the recently-proposed closed-world segmentation method Mask2Former, we propose a Transformer-based Open-world Instance Segmentation (TOIS) method.
2. We propose a novel cross-task consistency loss that mitigates the issue of incomplete mask annotations, which is a critical issue for open-world segmentation in particular.
3. We further extend the proposed method into a semi-supervised OWIS model, which effectively makes use of the unlabeled images to help the OWIS model training .
4. Our extensive experiments demonstrate that the proposed method achieves leading OWIS performance in the fully-supervised setting (Figure 1(d-f)), and that our semi-supervised extension can achieve remarkable performance with a much smaller amount of labeled data.
2 RELATED WORK
Closed-world instance segmentation (CWIS) He et al. (2017); Chen et al. (2020); Dai et al. (2016); Bolya et al. (2019); Wang et al. (2020a) requires approaches to assign a class label and instance ID to every pixel. Two-stage CWIS approaches, such as Mask R-CNN, typically include a bounding-box estimation branch and an FCN-based mask segmentation branch, working in a 'detect-then-segment' manner. To improve efficiency, one-stage methods such as CenterMask Dai et al. (2016), YOLACT Bolya et al. (2019) and BlendMask Chen et al. (2020) have been proposed, which remove the proposal generation and feature grouping process. To further free CWIS from local box detection, Wang et al. (2020a) proposed SOLO and obtained results on par with the above methods. In recent years, methods Fang et al. (2021); Dong et al. (2021) following DETR Carion et al. (2020) consider the instance segmentation task as a set prediction problem. In addition, Cheng et al. proposed a universal segmentation framework MaskFormer Cheng et al. (2021) and its upgraded version Mask2Former Cheng et al. (2022), which even outperforms state-of-the-art architectures specifically designed for the CWIS task.
Notably, two-stage method CenterMask preserves pixel alignment and separates the object simultaneously by integrating the local and global branch. Although introducing the global information in this way helps improve the mask quality in CWIS, it cannot handle the open-world task very well, because CenterMask multiplies the local shape and the cropped saliency map to form the final mask for each instance. There is no separate loss for the local shape and global saliency. When such a method faces the incomplete annotations in OWIS tasks, the generated mask predictions corresponding to the unlabeled instances would still be punished during training, making it difficult to discover novel objects at inference. The efficient way to jointly take advantages of global and local information in OWIS tasks deserves to be explored.
Open-world instance segmentation OWIS task Wang et al. (2021) here focuses on the following aspects: (1) All instances (without stuff) have to be segmented; (2) Class-agnostic pixel-level results should be predicted with only instance ID and incremental learning ability is unnecessary. Several OWIS works have recently been developed. Yu et al. Du et al. (2021) proposed a twostage segmentation algorithm, which decoupled the segmentation and detection modules during training and testing. This algorithm achieves competitive results on the UVO dataset thanks to the abundant training data and the introduction of effective modules such as cascade RPN Vu et al. (2019), SimOTA Ge et al. (2021), etc. Another work named LDET Saito et al. (2021) attempts to solve the instance-level incomplete annotation problem. Specifically, LDET first generates the background of the synthesized image by taking a small piece of background in the original image and enlarging it to the same size as the original image. The instance is then matted to the foreground of the synthesized image. The synthesized data is used only to train the mask prediction branch, and the rest of the branches are still trained with the original data. Meanwhile, Wang et al. proposed GGN Wang et al. (2022), an algorithm that combines top-down and bottom-up segmentation ideas to improve prediction accuracy by generating high-quality pseudo-labels. Specifically, a Pairwise Affinity (PA) predictor is trained first, and a grouping module is used to extract and rank segments from predicted PA to generate pseudo-labels, which would be fused with groundtruth to train the segmentation model.
3 METHODOLOGY
In this section, we first define the OWIS problem for both fully- and semi-supervised learning. Then the architecture of our TOIS and the proposed cross-task consistency loss are introduced in Sections 3.2 and 3.3, respectively. Finally, Sections 3.4 and 3.5 show how to optimize TOIS in the fully-supervised and semi-supervised settings, respectively.
3.1 PROBLEM DEFINITION OF OWIS
The open-world instance segmentation (OWIS) aims to segment all the object instances (things) of any class including those that did not appear in the training phase. Technically, OWIS is a task to produce a set of binary masks, where each mask corresponds to a class-agnostic instance. The pixel value of 1 in the mask indicates a part of an object instance while 0 indicates not.
3.2 MODEL ARCHITECTURE
Our proposed TOIS framework consists of three branches to alleviate the incomplete annotating, as shown in Figure 2. Basically, we follow the design of one-stage Mask2Former Cheng et al. (2022). The objectness prediction branch estimates the weighting score for each mask by applying a sequential Transformer decoder and MLP. The mask prediction branch predicts the binary mask for each instance. It first generates N binary masks with N ideally larger than the actual instance number Ki, which is the number of annotated object instances in a given image i. Each mask is multiplied by a weighting score with a value between 0 and 1, indicating if a mask should be selected as an instance mask. This process generates the mask in an end to end way, which avoids to miss the instance because of poor detection bounding boxes and meanwhile reduces the redundant segmentation cost for each proposal. We refer to Cheng et al. (2021; 2022) for more details.
The foreground prediction branch is a light-weight fully convolutional network that estimates the foreground regions belonging to any object instance. The detailed design of the foreground prediction branch is given in the Appendix. This branch guides the training of the mask branch through our cross-task consistency loss proposed in Section 3.3. Once training is done, we discard this branch and only use the objectness and mask prediction branches at inference time. Therefore, we do not introduce any additional parameters or computational overhead at inference, which benefits running efficiency.
3.3 LEARNING WITH THE CROSS-TASK CONSISTENCY REGULARIZATION
A critical limitation of the OWIS is the never-perfect annotations due to the difficulties in annotating class-agnostic object instances. Towards alleviating this issue, we propose a regularization to provide extra supervision to guide the OWIS model training under incomplete annotations.
We construct a branch to predict the foreground regions that belong to any of the object instances. Formally, we create the foreground annotation G(x, y), calculated by
G(x, y) = \begin{cases} 0, & \text{if } \sum_{i=1}^{K} g^i(x, y) = 0, \\ 1, & \text{otherwise}, \end{cases} \qquad (1)
where g^i(x, y) is the mask of one of the K annotated object instances in the current image, and the union of the g^i defines the foreground object regions. Here (x, y) denotes the coordinate of a pixel in the image. We use G(x, y) as the label to train the foreground prediction branch.
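To make Eq. (1) concrete, the following is a minimal NumPy sketch (array names and shapes are our own illustration, not from the released code) of deriving the foreground label G from the per-instance annotations g^i:

```python
import numpy as np

def foreground_label(instance_masks: np.ndarray) -> np.ndarray:
    """Eq. (1): G(x, y) = 1 if any annotated instance covers pixel (x, y), else 0.

    instance_masks: binary array of shape (K, H, W), one channel per annotated instance.
    """
    return (instance_masks.sum(axis=0) > 0).astype(np.uint8)

# Toy example with K = 2 instances on a 4x4 image.
g = np.zeros((2, 4, 4), dtype=np.uint8)
g[0, :2, :2] = 1   # first instance in the top-left corner
g[1, 2:, 2:] = 1   # second instance in the bottom-right corner
G = foreground_label(g)
print(G)
```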
Our consistency loss encourages the model outputs to satisfy the relationship indicated in Eq. 1, which states that the foreground prediction should be the union of the instance predictions. To do so, we use the following equation as an estimate of the foreground from the instance predictions:
\hat{G}(x, y) = \Phi\left( \sum_{j=1}^{K} m_j(x, y) \right), \qquad (2)

where m_j(x, y) denotes the confidence of pixel (x, y) in the j-th predicted mask, and \Phi represents the sigmoid function. Then, letting F denote the foreground prediction from the foreground prediction branch, our cross-task consistency loss makes F and \hat{G}(x, y) consistent, which leads to the following loss function:
L_c = \mathrm{DICE}(\hat{G}, F) + \mathrm{BCE}(\hat{G}, F), \qquad (3)
where DICE and BCE denote the dice-coefficient loss Milletari et al. (2016) and binary cross-entropy loss, respectively.
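For illustration, a PyTorch-style sketch of Eqs. (2)-(3) is given below; tensor shapes and names are our assumptions, and the released implementation (e.g., Mask2Former's own loss utilities) may differ, for example in whether \hat{G} is treated as a fixed target.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    # Soft dice loss computed over all pixels.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def cross_task_consistency_loss(mask_logits, fg_logits, eps=1e-6):
    """Sketch of the cross-task consistency loss.

    mask_logits: (K, H, W) logits of the K predicted instance masks m_j.
    fg_logits:   (H, W)    logits of the foreground prediction branch.
    """
    m = torch.sigmoid(mask_logits)         # per-mask confidences in [0, 1]
    g_hat = torch.sigmoid(m.sum(dim=0))    # Eq. (2): Phi(sum_j m_j(x, y))
    fg = torch.sigmoid(fg_logits)          # foreground prediction F
    # Eq. (3): L_c = DICE(G_hat, F) + BCE(G_hat, F); BCE is written out so that
    # gradients can flow to both predictions.
    bce = -(g_hat * torch.log(fg + eps) + (1 - g_hat) * torch.log(1 - fg + eps)).mean()
    return dice_loss(fg, g_hat) + bce

# Usage on random tensors, just to check shapes.
lc = cross_task_consistency_loss(torch.randn(5, 64, 64), torch.randn(64, 64))
print(float(lc))
```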
The consistency loss enjoys the following appealing properties. It is self-calibrated and independent of the incompleteness level of the labels. As shown in Figure 3, consider an instance mistakenly annotated as background: if both the foreground prediction branch and the mask prediction branch correctly find it, the model is penalized by the mask loss and the foreground loss, yet the consistency loss treats this prediction as correct. In this way, the consistency loss down-weights the adverse effects caused by the other, unreliable segmentation losses. This mitigation and compensation synergize to relieve the excessive penalties on unlabeled instances.
3.4 FULLY-SUPERVISED LEARNING
The overall fully-supervised optimization of the proposed TOIS is carried out by minimizing the following joint loss formulation Lf ,
L_f = \alpha L_m + \beta L_p + \gamma L_c + \omega L_o, \qquad (4)
where
L_m = \mathrm{BCE}(m, g) + \mathrm{DICE}(m, g), \qquad (5)
L_p = \mathrm{BCE}(F, G) + \mathrm{DICE}(F, G), \qquad (6)
L_o = \mathrm{BCE}(s, v), \qquad (7)
where Lm, Lp and Lo denote the loss terms for mask prediction, foreground prediction, and objectness scoring, respectively. α, β, γ and ω are the weights of the corresponding losses. m and g represent the predicted masks and corresponding groundtruth, respectively. F and G is the foreground prediction result and the generated foreground groundtruth, while the estimated objectness score is denoted with s. v is a set of binary values that indicate whether each mask is an instance. Before computing the Lm, matching between the set of predicted masks and groundtruth has been done via the bipartite matching algorithm defined in Cheng et al. (2022).
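A hedged sketch of the joint objective in Eqs. (4)-(7) follows; the loss weights below are placeholders, since their values are not reported in this section, and the bipartite matching of predictions to ground truth is assumed to have been done already.

```python
import torch

def bce(pred, target, eps=1e-6):
    # Binary cross-entropy written out for soft inputs/targets.
    return -(target * torch.log(pred + eps) + (1 - target) * torch.log(1 - pred + eps)).mean()

def dice(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def fully_supervised_loss(m, g, fg, G, s, v, L_c,
                          alpha=1.0, beta=1.0, gamma=1.0, omega=1.0):
    """Eq. (4): L_f = alpha*L_m + beta*L_p + gamma*L_c + omega*L_o.

    m, g : matched mask predictions / ground-truth masks (probabilities in [0, 1]).
    fg, G: foreground prediction / generated foreground label.
    s, v : objectness scores / binary indicators of whether each mask is an instance.
    L_c  : cross-task consistency loss from Eq. (3).
    The weights alpha, beta, gamma, omega are placeholder values.
    """
    L_m = bce(m, g) + dice(m, g)      # Eq. (5)
    L_p = bce(fg, G) + dice(fg, G)    # Eq. (6)
    L_o = bce(s, v)                   # Eq. (7)
    return alpha * L_m + beta * L_p + gamma * L_c + omega * L_o
```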
3.5 EXTENSION TO SEMI-SUPERVISED LEARNING
Due to the ambiguity of the instance definition in OWIS, it is much harder for annotators to follow the annotation instructions, which makes annotations for OWIS expensive. It is therefore desirable to use unlabeled data to help train OWIS models. In this regard, our proposed cross-task consistency loss only requires the outputs of the two predictors to satisfy the relationship indicated in Eq. 1, and it does not need ground-truth annotations. Thus, we can apply this loss to unlabeled data, which yields a semi-supervised setting. Specifically, the easier-to-learn foreground prediction branch can be learned well from a few labeled images in the warm-up stage. The resulting foreground map then serves as a constraint to optimize the open-world mask predictions through our cross-task consistency loss when labels do not exist. In this way, our Semi-TOIS achieves a good trade-off between annotation cost and model accuracy.
Semi-supervised learning process. Given a labeled set D_l = \{(x_i, y_i)\}_{i=1}^{N_l} and an unlabeled set D_u = \{x_i\}_{i=1}^{N_u}, our goal is to train an OWIS model by leveraging both a large amount of unlabeled data and a smaller set of labeled data. Specifically, we initially use D_l to train TOIS as a warm-up stage, giving a good initialization for the model. We then jointly train the OWIS model on both the labeled and unlabeled data. For the labeled data, we employ the loss function defined in Eq. 4. For the unlabeled data, we apply only the cross-task consistency loss L_c.
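The two-phase procedure above can be summarized by the following schematic loop (a sketch only: `full_loss`, `consistency_loss`, and `step_fn` are assumed callables standing in for Eq. 4, Eq. 3 applied to unlabeled images, and one optimizer update, respectively).

```python
def train_semi_supervised(full_loss, consistency_loss, step_fn,
                          labeled_batches, unlabeled_batches, warmup_steps):
    """Warm up on labeled data with L_f (Eq. 4), then train jointly, applying
    only the cross-task consistency loss L_c to unlabeled batches."""
    # Warm-up stage: labeled data only.
    for step, (x, y) in enumerate(labeled_batches):
        step_fn(full_loss(x, y))
        if step + 1 >= warmup_steps:
            break
    # Joint stage: mix labeled and unlabeled batches.
    for (x_l, y_l), x_u in zip(labeled_batches, unlabeled_batches):
        step_fn(full_loss(x_l, y_l) + consistency_loss(x_u))
```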
4 EXPERIMENTS
For demonstrating the effectiveness of our proposed TOIS, we compared it with other fully-supervised methods through intra-dataset and cross-dataset evaluations. We also performed ablation studies in these two settings to show the effect of each component. Moreover, we apply the proposed cross-task consistency loss for semi-supervised learning and test our method on the UVO validation set.
4.1 IMPLEMENTATION DETAILS AND EVALUATION METRICS
Implementation details Detectron2 Wu et al. (2019) is used to implement the proposed TOIS framework, multi-scale feature maps are extracted from the ResNet-50 He et al. (2016) or Swin Transformer Liu et al. (2021) model pre-trained on ImageNet Deng et al. (2009). Our transformer encoder-decoder design follows the same architecture as in Mask2Former Cheng et al. (2022). The number of object queries M is set to 100. Both the ResNet and Swin backbones use an initial learning rate of 0.0001 and a weight decay of 0.05. A simple data augmentation method, Cutout DeVries & Taylor (2017), is applied to the training data. All the experiments have been done on 8 NVIDIA V100 GPU cards with 32G memory.
Pseudo-labeling for COCO train set Pseudo-labeling is a common way to handle incomplete annotations. To explore the compatibility of our method and the pseudo-labeling operation, we employ a simple strategy to generate pseudo-labels for unannotated instances in the COCO train set Lin et al. (2014) in our experiments. Specifically, we follow a typical self-training framework, introducing the teacher model and student model framework to generate pseudo-labels. These two models have the same architecture, as shown in Figure 2, but are different in model weights. The weights of the student model are optimized by the common back-propagation, while the weight of the teacher model is updated by computing the exponential moving averages (EMA) of the student model. During training, the image i is first fed into the teacher model to generate some mask predictions. The prediction whose confidence is higher than a certain value would be taken as a pseudo-proposal. The state Sij of the pseudo-proposal pij is determined according to Equation (8).
S_{ij} = \begin{cases} \text{True}, & \text{if } \max_{g^i}\, \varphi(p_{ij}, g^i) \leq \varepsilon, \\ \text{False}, & \text{otherwise}, \end{cases} \qquad (8)
in which gi means any ground truth instance in the image i. φ denotes the IOU calculating function,
and ε is a threshold to further filter the unreliable pseudo-proposals. Finally, pseudo-proposals with states True would be considered as reliable pseudo-labels. Here, the confidence and IOU threshold ε for selecting pseudo-labels are set to 0.8 and 0.2, respectively. Then, we jointly use the ground truth and the pseudo-labels to form the training data annotations. If a region is identified as belonging to an instance in the pseudo-label, it will be considered as a positive sample during training.
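As an illustration of the teacher-student pseudo-labeling pipeline, here is a small sketch of the EMA weight update and the filtering rule of Eq. (8); the EMA momentum value is our assumption, not a value reported by the paper.

```python
import numpy as np

def mask_iou(a, b):
    # IoU between two binary masks of the same shape.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def select_pseudo_labels(proposals, scores, gt_masks, conf_thresh=0.8, iou_thresh=0.2):
    """Eq. (8): keep teacher proposals that are confident and overlap every
    ground-truth instance by at most iou_thresh, so they are likely to cover
    unlabeled objects rather than duplicate annotated ones."""
    kept = []
    for p, s in zip(proposals, scores):
        if s < conf_thresh:
            continue
        max_iou = max((mask_iou(p, g) for g in gt_masks), default=0.0)
        if max_iou <= iou_thresh:
            kept.append(p)
    return kept

def ema_update(teacher, student, momentum=0.999):
    # Teacher weights as an exponential moving average of the student weights.
    for k in teacher:
        teacher[k] = momentum * teacher[k] + (1.0 - momentum) * student[k]
    return teacher
```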
Evaluation metrics The Mean Average Recall (AR) and Mean Average Precision (AP) Lin et al. (2014) are utilized to measure the performance of approaches in a class-agnostic way.
4.2 FULLY-SUPERVISED EXPERIMENTAL SETTING
Intra-dataset evaluation UVO is the largest open-world instance segmentation dataset. Its training and test images are from the same domain but do not overlap. Here, we train TOIS on the UVO-train subset and test it on the UVO-val subset. Besides, we split the COCO dataset into 20 seen (VOC) classes and 60 unseen (non-VOC) classes. We train a model only on the annotations of the 20 VOC classes and test it on the 60 non-VOC classes, evaluating its ability to discover novel objects.
Cross-dataset evaluation The open-world setting assumes that instances may belong to novel classes in the target domain. Therefore, it is essential for an OWIS method to handle the potential domain gap with good generalization ability, and cross-dataset evaluation, in which training and test data come from different domains, is necessary. Here, we first train the proposed TOIS model and the compared methods on the COCO-train subset, while testing them on the UVO-val dataset to evaluate their generalizability. We then extend the experiments to an autonomous driving scenario, training the models on the Cityscapes Cordts et al. (2016) dataset and evaluating them on Mapillary Neuhold et al. (2017). Cityscapes has 8 foreground classes, while Mapillary contains 35 foreground classes including vehicles, animals, trash cans, mailboxes, etc.
4.3 FULLY-SUPERVISED EXPERIMENTAL RESULTS
Intra-dataset evaluation The results are illustrated in Table 1. The single-stage approaches based on the mask classification framework perform better than the two-stage methods. Among them, our proposed TOIS achieves a significant performance improvement over the Mask2Former baseline, namely 4.75% in AP100 and 3.93% in AR100 when using the Swin-B backbone. For the VOC→non-VOC setting, the experimental results are shown in Table 2, which verifies that our proposed method improves performance for all instances, especially large ones.
Cross-dataset evaluation For the COCO→UVO task, according to Table 3, it is clear that the proposed TOIS outperforms all previous methods, achieving a new state-of-the-art AR100 at 54.86% which is 11.56% higher than previous state-of-the-art method GGN Wang et al. (2022). We also applied the proposed techniques to another classic one-stage method SOLO V2 Wang et al. (2020b). The experimental results in Table 4 show that it improves AR100 and AP100 by 3.11% and 2.79% compared to SOLO V2. For the Cityscapes→ Mapillary task, the overall AP and AR of TOIS still surpass the performance of other state-of-the-art methods ( in Table 5 ), which demonstrates the
Table 4: Results of TOIS with the SOLOv2 structure (UVO-train → UVO-val).
Table 5: Cross-dataset evaluation on autonomous driving scenes. Results of Cityscapes → Mapillary.
effectiveness of our proposed techniques. We show some of the COCO→UVO visualization results in Figure 4 to qualitatively demonstrate the superiority of our method. Please refer to the supplementary material for more qualitative examples.
4.4 ABLATION STUDY
We perform cross-dataset and intra-dataset ablation studies to analyze the effectiveness of each component in the proposed TOIS, including the foreground prediction branch and the cross-task consistency loss. We also try combinations of the pseudo-label generation strategy and our cross-task consistency loss to investigate their individual and synergistic effects. Using the Swin-B backbone, these models are trained on the COCO-train subset and the UVO-train subset, respectively. The metrics reported in Table 6 are computed on the UVO-val dataset.
Table 6: Ablation results of the proposed components by cross-dataset and intra-dataset evaluations. Foreground prediction (FP), Cross-task consistency (CTC) loss, Pseudo label (PL).
FP    CTC loss    PL    |    Train on COCO: AP100(%)  AR100(%)    |    Train on UVO: AP100(%)  AR100(%)
-     -           -     |    28.65  51.54    |    35.12  51.39
✓     -           -     |    29.02  51.60    |    35.55  51.73
-     -           ✓     |    30.09  52.97    |    32.94  51.64
✓     ✓           -     |    31.39  53.83    |    38.02  54.74
✓     -           ✓     |    30.17  52.98    |    33.35  50.90
✓     ✓           ✓     |    32.21  54.86    |    37.71  52.27
Effectiveness of the foreground prediction branch Table 6 shows that although a separate foreground prediction branch can guide the method to optimize towards discovering foreground pixels, it only slightly boosts the performance.
Effectiveness of cross-task consistency loss The cross-task consistency loss has a positive effect on both the sparsely annotated (COCO) and densely annotated (UVO) training datasets. The values of AP100 and AR100 increase significantly (2.74%↑ and 2.49%↑ on COCO; 2.90%↑ and 3.35%↑ on UVO) after applying the cross-task consistency loss together with the foreground prediction branch. This result outperforms the TOIS counterpart with only pseudo-labeling, demonstrating the effectiveness of our loss. In addition, jointly utilizing our cross-task consistency loss and the pseudo-labeling strategy leads to performance improvements in both settings, which demonstrates the synergistic effect of the two approaches.
Effectiveness of pseudo-labeling Pseudo-labeling is not always necessary or beneficial for every type of dataset. As shown in Table 6, the AP100 and AR100 of the COCO-trained model increase by 1.35% and 0.78%, respectively, after applying the pseudo-label generation. However, pseudo-labeling causes a performance degradation (e.g., 2.18%↓ in AP100) for a model trained on the UVO dataset. Compared with COCO, the UVO dataset is annotated more densely. We conjecture that the background annotations of UVO are more reliable than those of COCO, where carefully selected pseudo-labels are more likely to represent unlabeled objects. The pseudo-labels generated on UVO therefore contain more noise than those generated on COCO, and these noisy labels mislead the model training.
4.5 SEMI-SUPERVISED LEARNING EXPERIMENT
Experimental setting We divide the UVO-train dataset into a labeled subset D_l and an unlabeled subset D_u. The semi-supervised model Semi-TOIS is optimized as described in Section 3.5 on D_l ∪ D_u, while the fully-supervised method Labelled-Only-TOIS is trained only on D_l. To ensure the comprehensiveness of the experiments, two data division settings are included: {D_l = 30%, D_u = 70%} and {D_l = 50%, D_u = 50%}. The backbone applied here is Swin-B. We also implement the classic Mean Teacher model and a simple pseudo-label method based on Mask2Former for comparison. Note that our semi-supervised setting is different from that in some previous work, such as MaskX R-CNN Hu et al. (2018) and ShapeMask Kuo et al. (2019), which assume the training set C = A ∪ B, where examples from the categories in A have masks while those in B have only bounding boxes.
Results and analysis As presented in Figure 5, the Semi-TOIS50 model trained on UVO with 50% annotated data outperforms the Semi-TOIS30 model trained with 30% labeled images, although the gap between them is small. In addition, Semi-TOIS30 improves over Labelled-Only-TOIS30 by 3.36% and 5.33% in AP100 and AR100, respectively. Compared to Labelled-Only-TOIS50, Semi-TOIS50 still achieves significant advantages (2.36% in AP100 and 6.12% in AR100). These results show that the cross-task consistency loss is able to extract information from unlabeled data and facilitates model optimization in the semi-supervised setting. Notably, the results of Semi-TOIS30 are even better than those of Labelled-Only-TOIS50. This indicates that the information extracted by the cross-task consistency loss from the remaining 70% unlabeled data is richer than that contained in the additional 20% of fully-labeled data. Therefore, our algorithm can achieve better performance with fewer
annotations. This characteristic is promising for solving the OWIS problem. In addition, we also compare Semi-TOIS with a classic semi-supervised method and a recent end-to-end segmentation method. The results in Tables 7 and 8 show our advantages over the compared methods.
5 CONCLUSION
This paper proposes the first transformer-based framework (TOIS) for the open-world instance segmentation task. Apart from predicting the instance mask and objectness score, our framework introduces a foreground prediction branch to segment the regions belonging to any instance. Utilizing the outputs of this branch, we propose a novel cross-task consistency loss to enforce the foreground prediction to be consistent with the prediction of the instance masks. We experimentally demonstrate that this mechanism alleviates the problem of incomplete annotation, which is a critical issue for open-world segmentation. Our extensive experiments demonstrate that TOIS outperforms state-of-the-art methods by a large margin on typical datasets. We further demonstrate that our cross-task consistency loss can utilize unlabeled images to obtain performance gains for semi-supervised instance segmentation. This is an important step toward reducing laborious and expensive human annotation.
A APPENDIX
Please refer to the additional supplementary material. | 1. What is the main contribution of the paper regarding open-world instance segmentation?
2. What are the strengths and weaknesses of the proposed method, particularly in its application of the Mask2Former architecture and the design of the foreground prediction branch and cross-task loss?
3. How does the reviewer assess the significance of the results and the novelty of the proposed approach compared to prior works?
4. Are there any concerns or questions regarding the experimental setup, such as the sensitivity of the method's success to choosing the appropriate weight for the Lc loss term or the intuition behind the proposed foreground prediction loss helping in the intra-dataset setting?
5. Are there any minor issues with the paper, such as typos or unclear descriptions, that could be improved upon? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a method for single-stage open-world instance segmentation. It is the first to use the SOTA Mask2Former architecture for this task. Additionally, to tackle the problem of "missing annotations" while training for open-world segmentation with the limited category labels in the training dataset, the authors propose a foreground prediction branch along with a cross-task consistency loss. The authors further show that since the latter loss does not require any labels, it can also be effectively employed to train networks in a semi-supervised manner. The authors show quantitatively the superior accuracy of their approach in comparison to the existing state-of-the-art approaches on open-world instance segmentation, on several baseline datasets. Additionally, they also show the contribution of each component of their design via ablation studies and the usefulness of their proposed approach in semi-supervised learning.
Strengths And Weaknesses
Strengths: Novelty: This paper tackles the problem of open-world instance segmentation, which is a largely open, albeit an important problem in computer vision research. If solved it has huge implications in being able to go from successfully segmenting a few hundred categories of objects today to potentially infinite ones in the future. Hence, it is important to further develop this line of research.
In terms of the proposed method itself, there are several novel aspects to it. The authors are the first to apply the SOTA transformer-based Mask2Former architecture to the task of open-world instance segmentation. Given the known success of Mask2Former in the closed-world setting, it is not surprising that it also helps to significantly boost accuracies for the proposed method in the open-world setting. Hence, I consider this a somewhat incremental contribution. Nevertheless, it is to the authors' credit that they are the first to numerically show this to be true.
The more interesting aspects of the authors' method, however, are the proposed foreground prediction branch and the self-supervised cross-task loss associated with it, which are designed to deal with the problem of "missing annotations" in the COCO dataset. I found this design to be clever, task-specific, conceptually correct and numerically shown to help in boosting the accuracy for open-world instance segmentation.
Significance of results: In absolute terms the proposed method helps to significantly improve the accuracy of the open-world instance segmentation task. That said, the major boost comes from using Mask2Former and less from the other components. Nevertheless, this work is significant in advancing accuracy of this task. Additionally the proposed background prediction module is shown to be multi-purpose. It can be used both for open-world segmentation and for semi-supervised training.
Weaknesses Claims:
The authors claim in the introduction "We propose a Singe-stage Open-world Instance Segmentation (SOIS) for the first time". This is technically not true. The previous GGN work is also a single-stage method based on SOLOv2. In relation to the existing work, the novel contribution of this work is to be the first to apply a transformer-based Mask2Former architecture to the task of open-world instance segmentation. The authors should correct their claims appropriately.
Experiments: 2. How sensitive is the proposed method's success to choosing the appropriate weight for the L_c loss term to negate the effect of the network incorrectly penalizing missing detections? Does it need to be set differently for the different datasets?
In Table 1, what is the intuition behind the proposed foreground prediction loss helping in the intra-dataset setting? If it is mostly designed to catch un-annotated detections, how does it help to segment the labeled annotations?
Section 4.5. It would be better to rename Fully-SOIS to "Labelled-Only-SOIS". The term "fully" seems to imply that the full training set of UVO was used, versus just the labeled 30%/50% subset. Related to this point, in Figure 5, for completeness the authors should also show the results of training with the full UVO training set, i.e., the numbers from Table 6.
Minor points:
Introduction, Para 2: "which detects" --> "which detect"
In Fig.2, the masks don't correspond to the input image at all. Although they are meant to show the concept, nevertheless it would be better to at least show masks that are somewhat representative of the input scene and not those of faces!
In several places, "leaning" --> "learning"
Table 2, clarify what s, m and l mean.
Clarity, Quality, Novelty And Reproducibility
Clarity: Mostly clear, but could be improved further by addressing the points noted in the weaknesses section above.
Quality: Meets the quality standards of an ICLR submission in terms of the technical contribution and presentation.
Novelty: Sufficiently novel in both the proposed method and the fairly open problem addressed.
Reproducibility: The method is well described. The supplementary provides the implementation details. The authors provide the code and have promised to release it. |
ICLR | Title
ONLINE RESTLESS BANDITS WITH UNOBSERVED STATES
Abstract
We study the online restless bandit problem, where each arm evolves according to a Markov chain independently, and the reward of pulling an arm depends on both the current state of the corresponding Markov chain and the action. The agent (decision maker) does not know the transition kernels and reward functions, and cannot observe the states of arms even after pulling. The goal is to sequentially choose which arms to pull so as to maximize the expected cumulative rewards collected. In this paper, we propose TSEETC, a learning algorithm based on Thompson Sampling with Episodic Explore-Then-Commit. The algorithm proceeds in episodes of increasing length and each episode is divided into exploration and exploitation phases. In the exploration phase in each episode, action-reward samples are collected in a round-robin way and then used to update the posterior as a mixture of Dirichlet distributions. At the beginning of the exploitation phase, TSEETC generates a sample from the posterior distribution as true parameters. It then follows the optimal policy for the sampled model for the rest of the episode. We establish the Bayesian regret bound Õ( √ T ) for TSEETC, where T is the time horizon. This is the first bound that is close to the lower bound of restless bandits, especially in an unobserved state setting. We show through simulations that TSEETC outperforms existing algorithms in regret.
1 INTRODUCTION
The restless multi-armed bandit problem (RMAB) is a general setup to model many sequential decision-making problems, ranging from wireless communication (Tekin & Liu, 2011; Sheng et al., 2014) and sensor/machine maintenance (Ahmad et al., 2009; Akbarzadeh & Mahajan, 2021) to healthcare (Mate et al., 2020; 2021). This problem considers one agent and N arms. Each arm i is modulated by a Markov chain M^i with state transition function P^i and reward function R^i. At each time, the agent decides which arm to pull. After the pull, all arms undergo an action-dependent Markovian state transition. The goal is to decide which arm to pull so as to maximize the expected reward, i.e., E[\sum_{t=1}^{T} r_t], where r_t is the reward at time t and T is the time horizon.
In this paper, we consider the online restless bandit problem with unknown parameters (transition functions and reward functions) and unobserved states. Many works concentrate on learning unknown parameters (Liu et al., 2010; 2011; Ortner et al., 2012; Wang et al., 2020; Xiong et al., 2022a;b) while ignoring the possibility that the states are also unknown. The unobserved states assumption is common in real-world applications, such as cache access (Paria & Sinha, 2021) and recommendation system (Peng et al., 2020). In the cache access problem, the user can only get the perceived delay but cannot know whether the requested content is stored in the cache before or after the access. Moreover, in the recommender system, we do not know the user’s preference for the items. There are also some studies that consider the unobserved states. However, they often assume the parameters are known (Mate et al., 2020; Meshram et al., 2018; Akbarzadeh & Mahajan, 2021) and there is a lack of theoretical result (Peng et al., 2020; Hu et al., 2020). And the existing algorithms (Zhou et al., 2021; Jahromi et al., 2022) with theoretical guarantee do not match the lower regret bound of RMAB (Ortner et al., 2012).
One common way to handle the unknown parameters but with observed states is to use the optimism in the face of uncertainty (OFU) principle (Liu et al., 2010; Ortner et al., 2012; Wang et al., 2020). The regret bound in these works is too weak sometimes, because the baseline they consider, such
as pulling fixed arms (Liu et al., 2010), is not optimal in the RMAB problem. Ortner et al. (2012) derives the lower bound Õ( √ T ) for the RMAB problem. However, it is not clear whether there is an efficient computational method to search for the optimistic model in the confidence region (Lakshmanan et al., 2015). Another way to estimate the unknown parameters is the Thompson Sampling (TS) method (Jung & Tewari, 2019; Jung et al., 2019; Jahromi et al., 2022; Hong et al., 2022). A TS algorithm does not need to solve all instances that lie within the confidence sets, unlike OFU-based algorithms (Ouyang et al., 2017). What's more, empirical studies suggest that TS algorithms outperform OFU-based algorithms in bandit and Markov decision process (MDP) problems (Scott, 2010; Chapelle & Li, 2011; Osband & Van Roy, 2017).
Some studies assume that only the states of pulled arms are observable (Mate et al., 2020; Liu & Zhao, 2010; Wang et al., 2020; Jung & Tewari, 2019). They translate the partially observable Markov decision process (POMDP) problem into a fully observable MDP by regarding the state last observed and the time elapsed as a meta-state (Mate et al., 2020; Jung & Tewari, 2019), which is much simpler due to more observations about pulled arms. Mate et al. (2020), and Liu & Zhao (2010) derive the optimal index policy but they assume the known parameters. Restless-UCB in Wang et al. (2020) achieves the regret bound of Õ(T 2/3), which does not match the lower bound Õ( √ T ) regret, and also restricted to a specific Markov model. There are also some works that consider that the arm’s state is not visible even after pulling (Meshram et al., 2018; Akbarzadeh & Mahajan, 2021; Peng et al., 2020; Hu et al., 2020; Zhou et al., 2021; Yemini et al., 2021) and the classic POMDP setting (Jahromi et al., 2022). However, there are still some challenges unresolved. Firstly, Meshram et al. (2018) and Akbarzadeh & Mahajan (2021) study the RMAB problem with unobserved states but with known parameters. However, the true value of the parameters are often unavailable in practice. Secondly, the works study RMAB from a learning perspective, e.g., Peng et al. (2020); Hu et al. (2020) but there are no regret analysis. Thirdly, existing policies with regret bound Õ(T 2/3) (Zhou et al., 2021; Jahromi et al., 2022) often do not have a regret guarantee that scales as Õ( √ T ), which is the lower bound in RMAB problem (Ortner et al., 2012). Yemini et al. (2021) considers the arms are modulated by two unobserved states and with linear reward. This linear structure is quite a bit of side information that the decision maker can take advantage of for decision making and problem-dependent log(T ) is given.
To the best of our knowledge, there are no provably optimal policies that perform close to the offline optimum and match the lower bound in restless bandit, especially in unobserved states setting. The unobserved states bring much challenges to us. Firstly, we need to control estimation error about states, which itself is not directly observed. Secondly, the error depends on the model parameters in a complex way via Bayesian updating and the parameters are still unknown. Thirdly, since the state is not fully observable, the decision-maker cannot keep track of the number of visits to state-action pairs, a quantity that is crucial in the theoretical analysis. We design a learning algorithm TSEETC to estimate these unknown parameters, and benchmarked on a stronger oracle, we show that our algorithm achieves a tighter regret bound. In summary, we make the following contributions:
Problem formulation. We consider the online restless bandit problem with unobserved states and unknown parameters. Compared with Jahromi et al. (2022), our reward functions are unknown.
Algorithmic design. We propose TSEETC, a learning algorithm based on Thompson Sampling with Episodic Explore-Then-Commit. The whole learning horizon is divided into episodes of increasing length, and each episode is split into exploration and exploitation phases. In the exploration phase, to estimate the unknown parameters, we update the posterior distributions over the unknown parameters as a mixture of Dirichlet distributions. For the unobserved states, we use the belief state to encode the historical information. In the exploitation phase, we sample parameters from the posterior distribution and derive an optimal policy based on the sampled parameters. What's more, we let the deterministic episode lengths increase over time to control the total number of episodes, which is crucial for bounding the regret caused by exploration.
Regret analysis. We consider a stronger oracle which solves POMDP based on our belief state. And we define the pseudo-count to store the state-action pairs. Under a Bayesian framework, we show that the expected regret of TSEETC accumulated up to time T is bounded by Õ( √ T ) , where
Õ hides logarithmic factors. This bound improves the existing results (Zhou et al., 2021; Jahromi et al., 2022).
Experiment results. We conduct proof-of-concept experiments and compare our policy with existing baseline algorithms. Our results show that TSEETC outperforms existing algorithms and achieves a near-optimal regret bound.
2 RELATED WORK
We review the related works in two main domains: learning algorithm for unknown parameters, and methods to identify unknown states.
Unknown parameters. Since the system parameters are unknown in advance, it is essential to study RMAB problems from a learning perspective. Generally speaking, these works can be divided into two categories: OFU (Ortner et al., 2012; Wang et al., 2020; Xiong et al., 2022a; Zhou et al., 2021; Xiong et al., 2022b) or TS based (Jung et al., 2019; Jung & Tewari, 2019; Jahromi et al., 2022; Hong et al., 2022). The algorithms based on OFU often construct confidence sets for the system parameters at each time, find the optimistic estimator that is associated with the maximum reward, and then select an action based on the optimistic estimator. However, these methods may not perform close to the offline optimum because the baseline policy they consider, such as pulling only one arm, is often a heuristic policy and not optimal. In this case, the regret bound O(log T ) (Liu et al., 2010) is less meaningful. Apart from these works, posterior sampling (Jung & Tewari, 2019; Jung et al., 2019) were used to solve this problem. A TS algorithm generally samples a set of MDP parameters randomly from the posterior distribution, then actions are selected based on the sampled model. Jung & Tewari (2019) and Jung et al. (2019) provide theoretical guarantee Õ( √ T ) in the Bayesian setting. TS algorithms are confirmed to outperform optimistic algorithms in bandit and MDP problems (Scott, 2010; Chapelle & Li, 2011; Osband & Van Roy, 2017).
Unknown states. Some works consider that the states of the pulled arm are observed (Mate et al., 2020; Liu & Zhao, 2010; Wang et al., 2020; Jung & Tewari, 2019). Mate et al. (2020) and Liu & Zhao (2010) assume unobserved states but known parameters. Wang et al. (2020) constructs an offline instance and gives the regret bound Õ(T 2/3). Jung & Tewari (2019) considers episodic RMAB problems, and a regret bound of Õ( √ T ) is guaranteed in the Bayesian setting. Some studies assume that the states are unobserved even after pulling. Akbarzadeh & Mahajan (2021) and Meshram et al. (2018) consider the RMAB problem with unknown states but known system parameters, without any regret guarantee. Peng et al. (2020) and Hu et al. (2020) consider unknown parameters but also provide no theoretical results. The works most similar to ours are Zhou et al. (2021) and Jahromi et al. (2022). Zhou et al. (2021) considers that all arms are modulated by a common unobserved Markov chain. They propose an estimation method based on the spectral method (Anandkumar et al., 2012) and a learning algorithm based on the upper confidence bound (UCB) strategy (Auer et al., 2002). They also give the regret bound Õ(T 2/3), which leaves a gap to the lower bound Õ( √ T ) (Ortner et al., 2012). Jahromi et al. (2022) considers the POMDP setting and proposes pseudo counts to store the state-action pairs. Their learning algorithm is based on Ouyang et al. (2017) and the regret bound is also Õ(T 2/3). Moreover, their algorithm is not implementable because the pseudo counts are conditioned on the true counts, which are not observable.
3 PROBLEM SETTING
Consider a restless bandit problem with one agent and N arms. Each arm i ∈ [N] := {1, 2, . . . , N} is associated with an independent discrete-time Markov chain M^i = (S_i, P^i), where S_i is the state space and P^i ∈ R^{S_i × S_i} is the transition function. Let s_t^i denote the state of arm i at time t and s_t = (s_t^1, s_t^2, . . . , s_t^N) the state of all arms. Each arm i is also associated with a reward function R^i ∈ R^{S_i × R}, where R^i(r | s) is the probability that the agent receives a reward r ∈ R when he pulls arm i in state s. We assume the state spaces S_i and the reward set R are finite and known to the agent. The parameters P^i and R^i, i ∈ [N], are unknown, and the state s_t is also unobserved by the agent. For the sake of notational simplicity, we assume that all arms have the same state space S with size S. Our results can be generalized in a straightforward way to allow different state spaces. The whole game is divided into T time steps. The initial state s_1^i of each arm i ∈ [N] is drawn independently from a distribution h_i, which we assume to be known to the agent. At each time t, the agent chooses one arm a_t ∈ [N] to pull and receives a reward r_t ∈ R with probability R^{a_t}(r_t | s_t^{a_t}). Note
that only the pulled arm provides reward feedback. The agent's decision on which arm a_t to pull is based on the observed history H_t = [a_1, r_1, a_2, r_2, · · · , a_{t−1}, r_{t−1}]. Note that the states of the arms are never observable, even after pulling. Each arm i makes a state transition independently according to the associated P^i, whether it is pulled or not. This process continues until the end of the game. The goal of the agent is to maximize the total expected reward.
We use θ to denote the unknown P^i and R^i for i ∈ [N] collectively. Since the true states are unobservable, the agent maintains a belief state b_t^i = [b_t^i(s, θ), s ∈ S_i] ∈ ∆_{S_i} for each arm i, where

b_t^i(s, θ) := P(s_t^i = s | H_t, θ),

and ∆_{S_i} := \{ b ∈ R_+^{S_i} : \sum_{s ∈ S_i} b(s) = 1 \} is the probability simplex in R^{S_i}. Note that b_t^i(s, θ) depends on the unknown model parameter θ, which itself has to be learned by the agent. We aggregate all arms as a whole Markov chain M and denote its transition matrix and reward function as P and R, respectively. For a given θ, the overall belief state b_t = (b_t^1, b_t^2, · · · , b_t^N) is a sufficient statistic for H_{t−1} (Smallwood & Sondik, 1973), so the agent can base his decision at time t on b_t only. Let ∆_b := ∆_{S_1} × · · · × ∆_{S_N}. A deterministic stationary policy π : ∆_b → [N] maps a belief state to an action. The long-term average reward of a policy π is defined as
J^π(h, θ) := \limsup_{T → ∞} \frac{1}{T} E\left[ \sum_{t=1}^{T} r_t \,\Big|\, h, θ \right]. \qquad (1)

We use J(h, θ) = \sup_π J^π(h, θ) to denote the optimal long-term average reward. We assume J(h, θ) is independent of the initial distribution h, as in Jahromi et al. (2022), and denote it by J(θ). We make the following assumptions.

Assumption 1. The smallest element ϵ_1 of the transition functions P^i, i ∈ [N], is strictly positive.

Assumption 2. The smallest element ϵ_2 of the reward functions R^i, i ∈ [N], is strictly positive.
Assumption 1 and Assumption 2 are strong in general, but they help us bound the error of belief estimation (De Castro et al., 2017). Assumption 1 also makes the MDP weakly communicating (Bertsekas et al., 2011). For weakly communicating MDP, it is known that there exists a bounded function v(·, θ) : ∆b → R such that for all b ∈ ∆b (Bertsekas et al., 2011),
J(θ) + v(b, θ) = max a
{ r(b, a) +
∑ r P (r | b, a, θ)v (b′, θ)
} , (2)
where v is the relative value function, r(b, a) = Σ_s Σ_r b^a(s, θ) R^a(r | s) r is the expected reward, b′ is the updated belief after obtaining the reward r, and P(r | b, a, θ) is the probability of observing r in the next step, conditioned on the current belief b and action a. The corresponding optimal policy is the maximizer of the right-hand side of equation 2. Since the value function v(·, θ) is finite, we can bound the span function sp(θ) := max_b v(b, θ) − min_b v(b, θ) as in Zhou et al. (2021). We give the details of this bound in Proposition 1 and denote the bound by H.
We consider the Bayesian regret. The parameters θ∗ is randomly generated from a known prior distribution Q at the beginning and then fixed but unknown to the agent. We measure the efficiency of a policy π by its regret, defined as the expected gap between the cumulative reward of an offline oracle and that of π, where the oracle is the optimal policy with the full knowledge of θ∗, but unknown states. The offline oracle is similar to Zhou et al. (2021), which is stronger than those considered in Azizzadenesheli et al. (2016) and Fiez et al. (2018). We focus on the Bayesian regret of policy π (Ouyang et al., 2017; Jung & Tewari, 2019) as follows,
RT := Eθ∗∼Q [ T∑ t=1 (J(θ∗)− rt) ] . (3)
The above expectation is with respect to the prior distribution about θ∗, the randomness in state transitions and the random reward.
4 THE TSEETC ALGORITHM
In section 4.1, we define the belief state and show how to update it with new observations. In section 4.2, we show how to update the posterior distributions under unknown states. In section 4.3, we give the details of our learning algorithm TSEETC.
4.1 BELIEF ENCODER FOR UNOBSERVED STATE
Here we focus on the belief update for arm i with the true parameter θ∗. At time t, the belief that arm i is in state s is b^i_t(s, θ∗). After pulling arm i, we obtain the observation r_t. The belief b^i_{t+1}(s′, θ∗) can be updated as follows:
b^i_{t+1}(s′, θ∗) = [ Σ_s b^i_t(s, θ∗) R^i_∗(r_t | s) P^i_∗(s′ | s) ] / [ Σ_s b^i_t(s, θ∗) R^i_∗(r_t | s) ], (4)
where P^i_∗(s′ | s) is the probability of transitioning from state s at time t to state s′ and R^i_∗(r_t | s) is the probability of obtaining reward r_t in state s.
If arm i is not pulled, we update its belief as follows:
b^i_{t+1}(s′, θ∗) = Σ_s b^i_t(s, θ∗) P^i_∗(s′ | s). (5)
Then at each time, we can aggregate the beliefs of all arms as bt. Based on equation 2, we can derive the optimal action at for the current belief bt.
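To make the belief recursion concrete, here is a minimal NumPy sketch of equations 4 and 5 for a single arm; the array layout and the toy parameter values are our own assumptions, not taken from any released implementation.

import numpy as np

def update_belief_pulled(belief, P, R, r_idx):
    """Bayes update of equation 4 for the pulled arm.

    belief : shape (S,)      current belief b_t^i(., theta*)
    P      : shape (S, S)    P[s, s2] = P_*^i(s2 | s)
    R      : shape (S, n_r)  R[s, r_idx] = R_*^i(r | s)
    r_idx  : index of the observed reward r_t in the reward set
    """
    weighted = belief * R[:, r_idx]        # b_t(s) * R(r_t | s)
    new_belief = weighted @ P              # sum_s b_t(s) R(r_t | s) P(s' | s)
    return new_belief / weighted.sum()     # normalize by sum_s b_t(s) R(r_t | s)

def update_belief_unpulled(belief, P):
    """Prediction step of equation 5 for arms that were not pulled."""
    return belief @ P

# toy check on a 2-state arm with made-up (hypothetical) parameters
P = np.array([[0.9, 0.1], [0.2, 0.8]])
R = np.array([[0.7, 0.3], [0.4, 0.6]])
b = np.array([0.5, 0.5])
print(update_belief_pulled(b, P, R, r_idx=0))   # belief after observing reward index 0
print(update_belief_unpulled(b, P))             # belief of an arm that was not pulled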
4.2 MIXTURE OF DIRICHLET DISTRIBUTION
In this section, we estimate the unknown P^i and R^i based on the Dirichlet distribution. The Dirichlet distribution is parameterized by a count vector ϕ = (ϕ_1, . . . , ϕ_k), where ϕ_i ≥ 0, such that its probability density is f(p | ϕ) ∝ Π_{i=1}^{k} p_i^{ϕ_i − 1} (Ghavamzadeh et al., 2015).
Since the true states are unobserved, all state sequences should be considered, each weighted in proportion to its likelihood (Ross et al., 2011). Denote the reward history collected from time t1 to t2 for arm i as r^i_{t1:t2}, and similarly denote the state history as s^i_{t1:t2} and the belief-state history as b^i_{t1:t2}. With this history information, the posterior distributions g_t(P^i) and g_t(R^i) at time t can be updated as in Lemma 1.
Lemma 1. Under the unobserved-state setting, assume the transition function P^i has prior g_0(P^i) = f((P^i − ϵ1 1)/(1 − ϵ1) | ϕ^i) and the reward function R^i has prior g_0(R^i) = f((R^i − ϵ2 1)/(1 − ϵ2) | ψ^i). Then, given the information r^i_{0:t} and b^i_{0:t}, the posterior distributions are
g_t(P^i) ∝ Σ_{s̄^i_t ∈ S^t_i} g_0(P^i) w(s^i_{0:t}) Π_{s,s′} ((P^i(s′ | s) − ϵ1)/(1 − ϵ1))^{N^i_{s,s′}(s̄^i_t) + ϕ^i_{s,s′} − 1}, (6)
g_t(R^i) ∝ Σ_{s̄^i_t ∈ S^t_i} g_0(R^i) w(s^i_{0:t}) Π_{s,r} ((R^i(r | s) − ϵ2)/(1 − ϵ2))^{N^i_{s,r}(s̄^i_t) + ψ^i_{s,r} − 1}, (7)
where w(s^i_{0:t}) is the likelihood of the state sequence s^i_{0:t} and 1 is the all-ones vector, whose length matches the dimension of P or R accordingly. This procedure is summarized in Algorithm 1.
Algorithm 1 Posterior Update for R^i(s, ·) and P^i(s, ·)
1: Input: the history length τ1, the state space S_i, the belief history b^i_{0:τ1}, the reward history r^i_{0:τ1}, the initial parameters ϕ^i_{s,s′}, ψ^i_{s,r} for s, s′ ∈ S_i, r ∈ R
2: generate the S_i^{τ1} possible state sequences
3: calculate the weight w(j) = Π_{t=1}^{τ1} b^i_t(s_t, θ) for each sequence j ∈ S_i^{τ1}
4: for j in 1, . . . , S_i^{τ1} do
5: count the occurrences of the events (s, s′) and (s, r) in sequence j as N^i_{s,s′}, N^i_{s,r}
6: update ϕ^i_{s,s′} ← ϕ^i_{s,s′} + N^i_{s,s′}, ψ^i_{s,r} ← ψ^i_{s,r} + N^i_{s,r}
7: aggregate the ϕ^i_{s,s′} as ϕ(j) and the ψ^i_{s,r} as ψ(j) for all s, s′ ∈ S_i, r ∈ R
8: end for
9: update the mixture Dirichlet distributions g_{τ1}(P^i) ∝ Σ_{j=1}^{S_i^{τ1}} w(j) f((P^i − ϵ1 1)/(1 − ϵ1) | ϕ(j)), g_{τ1}(R^i) ∝ Σ_{j=1}^{S_i^{τ1}} w(j) f((R^i − ϵ2 1)/(1 − ϵ2) | ψ(j))
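A direct (exponential-time) rendering of Algorithm 1 for one arm is sketched below: it enumerates every length-(τ1+1) hidden state sequence, weights it by the product of belief probabilities, accumulates transition and state-reward counts, and returns the weighted Dirichlet components of the mixture. All function and variable names are ours, and the code is only a sketch of the update, not an official implementation.

import itertools
import numpy as np

def mixture_dirichlet_update(beliefs, rewards, phi0, psi0, reward_set):
    """Algorithm 1 (sketch) for one arm.

    beliefs    : list of length tau1 + 1 of arrays of shape (S,)  -- b_0, ..., b_tau1
    rewards    : list of length tau1 of observed rewards          -- r_1, ..., r_tau1
    phi0, psi0 : prior Dirichlet counts, shapes (S, S) and (S, len(reward_set))
    Returns a list of (weight, phi, psi) components of the posterior mixture.
    """
    S = phi0.shape[0]
    tau1 = len(rewards)
    components = []
    # enumerate all S^(tau1 + 1) candidate hidden-state sequences
    for seq in itertools.product(range(S), repeat=tau1 + 1):
        w = np.prod([beliefs[t][seq[t]] for t in range(tau1 + 1)])  # likelihood weight
        if w == 0.0:
            continue
        phi, psi = phi0.copy(), psi0.copy()
        for t in range(tau1):
            phi[seq[t], seq[t + 1]] += 1                       # transition count N_{s, s'}
            psi[seq[t], reward_set.index(rewards[t])] += 1     # reward count N_{s, r}
        components.append((w, phi, psi))
    total = sum(w for w, _, _ in components)
    return [(w / total, phi, psi) for (w, phi, psi) in components]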
With Algorithm 1, we can update the posterior distribution over the unknown parameters and sample from it to obtain parameter estimates. The belief estimation error can be bounded by the distance between the sampled parameters and the true values (Proposition 2). The theoretical guarantee on the estimation errors of the unknown parameters is provided in Lemma 2.
4.3 OUR ALGORITHM
Our algorithm, TSEETC, operates in episodes of different lengths. Each episode is split into an exploration phase and an exploitation phase. Denote the total number of episodes by KT and the first time step of episode k by tk. We use Tk to denote the length of episode k, determined as Tk = T1 + k − 1, where T1 = ⌈(√T + 1)/2⌉. The length of the exploration phase in each episode is fixed as τ1, which satisfies τ1 KT = O(√T) and τ1 ≤ (T1 + KT − 1)/2. With these notations, our whole algorithm is shown below.
Algorithm 2 Thompson Sampling with Episodic Explore-Then-Commit
1: Input: priors g0(P), g0(R), initial belief b0, exploration length τ1, first episode length T1
2: for episode k = 1, 2, . . . do
3: start the first time of episode k, tk := t
4: generate R(tk) ∼ g_{t_{k−1}+τ1}(R) and P(tk) ∼ g_{t_{k−1}+τ1}(P)
5: for t = tk, tk + 1, . . . , tk + τ1 do
6: pull each arm i for τ1/N times in a round-robin way
7: receive the reward rt
8: update the belief b^i_t using R(tk), P(tk) based on equation 4
9: update the beliefs b^j_t, j ∈ [N] \ {i}, using P(tk) based on equation 5
10: end for
11: for i = 1, 2, . . . , N do
12: input the obtained r_{t1:t1+τ1}, . . . , r_{tk:tk+τ1} and b_{t1:t1+τ1}, . . . , b_{tk:tk+τ1} to Algorithm 1 to update the posterior distributions g_{tk+τ1}(P), g_{tk+τ1}(R)
13: end for
14: generate R(tk + τ1) ∼ g_{tk+τ1}(R) and P(tk + τ1) ∼ g_{tk+τ1}(P)
15: for i = 1, 2, . . . , N do
16: re-update the belief b^i_t from time 0 to tk + τ1 based on R(tk + τ1) and P(tk + τ1)
17: end for
18: compute π∗_k(·) = Oracle(·, R(tk + τ1), P(tk + τ1))
19: for t = tk + τ1 + 1, . . . , t_{k+1} − 1 do
20: apply action at = π∗_k(bt)
21: observe the new reward r_{t+1}
22: update the beliefs bt of all arms based on equations 4 and 5
23: end for
24: end for
In episode k, for the exploration phase, we first sample θ_{tk} from the distributions g_{t_{k−1}+τ1}(P) and g_{t_{k−1}+τ1}(R). We pull each arm τ1/N times in a round-robin way. For the pulled arm, we update its belief based on equation 4 using θ_{tk}. For the arms that are not pulled, we update their beliefs based on equation 5 using θ_{tk}. After the exploration phase, the reward and belief histories of each arm are input into Algorithm 1 to update the posterior distribution. Then we sample a new θ_{tk+τ1} from the posterior distribution and re-calibrate the belief bt based on this most recent estimate. Next we enter the exploitation phase. We first derive the optimal policy πk for the sampled parameter θ_{tk+τ1}, and then use πk for the rest of episode k.
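The overall control flow of TSEETC, with the environment interaction and the planning oracle passed in as callables, might look like the sketch below. The helper names (pull, update_beliefs, posterior_update, sample_model, oracle) are placeholders we introduce for illustration; they stand for the reward feedback, equations 4-5, Algorithm 1, posterior sampling, and the Oracle of Remark 1, respectively.

import math

def tseetc(T, N, tau1, pull, update_beliefs, posterior_update, sample_model, oracle):
    """High-level sketch of Algorithm 2; all problem-specific steps are injected as callables."""
    T1 = math.ceil((math.sqrt(T) + 1) / 2)     # first episode length
    t, k, history = 0, 0, []
    while t < T:
        k += 1
        Tk = T1 + k - 1                        # deterministic episode length T_k = T_1 + k - 1
        theta = sample_model()                 # sample (P, R) from the current posterior
        # exploration phase: round-robin pulls, beliefs tracked under the sampled model
        for step in range(tau1):
            arm = step % N
            r = pull(arm)
            update_beliefs(arm, r, theta)      # equation 4 for the pulled arm, equation 5 for the rest
            history.append((arm, r))
            t += 1
        posterior_update(history)              # Algorithm 1, run once per arm
        theta = sample_model()                 # re-sample from the refreshed posterior
        policy = oracle(theta)                 # optimal policy for the sampled model (reads beliefs internally)
        # exploitation phase: commit to the sampled model's policy
        for _ in range(Tk - tau1 - 1):
            if t >= T:
                break
            arm = policy()
            r = pull(arm)
            update_beliefs(arm, r, theta)
            t += 1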
We control the growth of the episode length in a deterministic manner. Specifically, the length of episode k is one more than that of episode k − 1. With this deterministic increase, the episode number KT is bounded by O(√T) as in Lemma 10. Then the regret caused by the exploration phases can be bounded by O(√T), which is a crucial part of Theorem 1.
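As a sanity check on this claim (not a substitute for Lemma 10), the deterministic schedule T_k = T_1 + k − 1 immediately gives the O(√T) episode count, since the first K_T − 1 episodes are completed within the horizon:

\sum_{k=1}^{K_T-1} T_k \;=\; (K_T-1)\,T_1 + \frac{(K_T-1)(K_T-2)}{2} \;\le\; T
\quad\Longrightarrow\quad
\frac{(K_T-1)(K_T-2)}{2} \;\le\; T
\quad\Longrightarrow\quad
K_T \;\le\; 2 + \sqrt{2T} \;=\; O(\sqrt{T}),

so the exploration regret τ1 ∆R K_T in equation 9 is O(√T) as well.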
In TSEETC, we use the belief state to estimate the unknown true states. What's more, under the unobserved-state setting, we consider all possible state sequences and update the posterior distribution of the unknown parameters as a mixture of Dirichlet distributions, in which each sequence is weighted by its likelihood.
Remark 1. We use an Oracle to derive the optimal policy for the sampled parameters in Algorithm 2. The Oracle can be the Bellman equation for the POMDP as introduced in equation 2, or approximation methods (Pineau et al., 2003; Silver & Veness, 2010), etc. The approximation error is discussed in Remark 3.
5 PERFORMANCE ANALYSIS
In section 5.1, we show our theoretical results and some discussions. In section 5.2, we provide a proof sketch and the detailed proof is in Appendix B.
5.1 REGRET BOUND AND DISCUSSIONS
Theorem 1. Suppose Assumptions 1 and 2 hold and the Oracle returns the optimal policy in each episode. The Bayesian regret of our algorithm satisfies
RT ≤ 48 C1 C2 S √(NT log(NT)) + (τ1 ∆R + H + 4 C1 C2 S N) √T + C1 C2,
where C1 = L1 + L2 N + N² + S² and C2 = rmax + H are constants independent of the time horizon T, L1 = 4(1 − ϵ1)² / (N ϵ1² ϵ2), L2 = 4(1 − ϵ1)² / ϵ1³, ϵ1 and ϵ2 are the minimum elements of the functions P∗ and R∗,
respectively, τ1 is the fixed exploration length in each episode, ∆R is the largest gap between rewards obtained at any two times, H is the bound on the span, rmax is the maximum reward obtainable at each time, N is the number of arms, and S is the state size for each arm.
Remark 2. Theorem 1 shows that the regret of TSEETC is upper bounded by Õ(√T). This is the first bound that matches the lower bound for restless bandit problems (Ortner et al., 2012) in such an unobserved-state setting. Although TSEETC looks similar to explore-then-commit (Lattimore & Szepesvári, 2020), a key novelty of TSEETC lies in using posterior sampling to update the posterior distribution of the unknown parameters as a mixture of Dirichlet distributions. Our algorithm balances exploration and exploitation in a deterministic-episode manner and ensures the episode length grows at a linear rate, which guarantees that the total number of episodes is bounded by O(√T). Therefore the total regret caused by exploration is well controlled at O(√T), which improves on the bound O(T^{2/3}) in Zhou et al. (2021). What's more, in the exploitation phase, our regret bound Õ(√T) is also better than Õ(T^{2/3}) (Zhou et al., 2021). This shows our posterior-sampling-based method is superior to the UCB-based solution (Osband & Van Roy, 2017). In Jahromi et al. (2022), the pseudo count of a state-action pair is always smaller than the true count with some probability at any time. In our algorithm, by contrast, the sampled parameter concentrates on the true values through the posterior update. Therefore, our pseudo count (defined in equation 13), based on beliefs, approximates the true counts more closely, which helps us obtain a tighter bound.
5.2 PROOF SKETCH
In our algorithm, the total regret can be decomposed as follows:
RT = Eθ∗ [ Σ_{k=1}^{kT} Σ_{t=tk}^{tk+τ1} (J(θ∗) − rt) ] + Eθ∗ [ Σ_{k=1}^{kT} Σ_{t=tk+τ1+1}^{t_{k+1}−1} (J(θ∗) − rt) ], (8)
where the first term is Regret (A) and the second term is Regret (B).
Bounding Regret (A). The Regret (A) is the regret caused in the exploration phase of each episode. This term can be simply bounded as follows:
Regret (A) ≤ Eθ∗ [ kT∑ k=1 τ1∆R ] ≤ τ1∆RkT (9)
where ∆R = rmax − rmin is the largest gap between rewards received at any two times. The regret in equation 9 scales with the episode number kT, which is bounded by O(√T) in Lemma 10.
Bounding Regret (B). Next we bound Regret (B), incurred in the exploitation phases. Let b̂t denote the belief updated with parameter θk and b∗t the belief updated with θ∗. During episode k, based on equation 2 for the sampled parameter θk and the fact that at = π∗(b̂t), we can write:
J (θk) + v(b̂t, θk) = r(b̂t, at) + ∑ r P (r | b̂t, at, θk)v(b′, θk). (10)
With this equation, we proceed by decomposing the regret as:
Regret(B) = R1 +R2 +R3 +R4 (11)
where each term is defined as follows:
R1 = Eθ∗ kT∑ k=1 [(Tk − τ1 − 1) (J(θ∗)− J(θk))] ,
R2 = Eθ∗ kT∑ k=1 [ tk+1−1∑ tk+τ1+1 ( v(b̂t+1, θk)− v(b̂t, θk) )] ,
R3 = Eθ∗ kT∑ k=1 [ tk+1−1∑ tk+τ1+1 (∑ r P [ r | b̂t, at, θk ] v(b′, θk)− v(b̂t+1, θk) )] ,
R4 = Eθ∗ kT∑ k=1 [ tk+1−1∑ tk+τ1+1 ( r(b̂t, at)− r(b∗t , at) )] .
Bounding R1. One key property of posterior sampling algorithms is that, given the history Htk, the true parameter θ∗ and the sampled θk are identically distributed at time tk, as stated in Lemma 13. Since the length Tk is deterministic and independent of θk, R1 is zero thanks to this key property.
Bounding R2. The regret R2 is a telescoping sum of value functions and can be bounded as R2 ≤ H KT. It depends only on the number of episodes and the upper bound H of the span function. As a result, R2 reduces to a bound in terms of the number of episodes kT, which is bounded in Lemma 10.
Bounding R3 and R4. The regret terms R3 and R4 are related to the estimation error of θ. Thus we need to bound the parameter errors, especially in our unobserved-state setting. Recalling the definitions of ϕ and ψ, we define the posterior means P̂^i(s′ | s) and R̂^i(r | s) for arm i at time t as
P̂^i(s′ | s)(t) = (ϵ1 + (1 − ϵ1) ϕ^i_{s,s′}(t)) / (S ϵ1 + (1 − ϵ1) ‖ϕ^i_{s,·}(t)‖_1), R̂^i(r | s)(t) = (ϵ2 + (1 − ϵ2) ψ^i_{s,r}(t)) / (S ϵ2 + (1 − ϵ2) ‖ψ^i_{s,·}(t)‖_1). (12)
We also define the pseudo count of the state-action pair (s, a) before episode k as
N^i_{tk}(s, a) = ‖ψ^i_{s,·}(tk)‖_1 − ‖ψ^i_{s,·}(0)‖_1, (13)
where ψ^i_{s,·}(tk) represents the count of the state-action pair z = (s, a) before episode k. Let M^i_k be the set of plausible MDPs in episode k with reward function R(r | z) and transition function P(s′ | z) satisfying
Σ_{s′∈S} |P(s′ | z) − P̂^i_k(s′ | z)| ≤ β^i_k(z), Σ_{r∈R} |R(r | z) − R̂^i_k(r | z)| ≤ β^i_k(z), (14)
where β^i_k(s, a) := √(14 S log(2 N tk T) / max{1, N^i_{tk}(s, a)}) is chosen conservatively (Auer et al., 2008) so that M^i_k contains both P^i_∗ and P^i_k, and both R^i_∗ and R^i_k, with high probability. P^i_∗ and R^i_∗ are the true parameters as defined in section 4.1. Specifically, for the unobserved-state setting, the belief error under different parameters is upper bounded by the gap between the estimators, as in Proposition 2. The core of the proof lies in deriving a high-probability confidence set with our pseudo counts and showing that the estimation error accumulated over T steps for each arm is bounded by √T. With the error bound for each arm, we can then derive the final error bound for the MDP aggregated over all arms, as stated in Lemma 2.
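For concreteness, equations 12 and 13 translate directly into the following small NumPy helper; the array shapes are our own assumption.

import numpy as np

def posterior_means(phi, psi, eps1, eps2):
    """Equation 12: posterior means of P^i and R^i from the Dirichlet counts.

    phi : shape (S, S)    transition counts phi^i_{s, s'}
    psi : shape (S, n_r)  reward counts psi^i_{s, r}
    """
    S = phi.shape[0]
    P_hat = (eps1 + (1.0 - eps1) * phi) / (S * eps1 + (1.0 - eps1) * phi.sum(axis=1, keepdims=True))
    R_hat = (eps2 + (1.0 - eps2) * psi) / (S * eps2 + (1.0 - eps2) * psi.sum(axis=1, keepdims=True))
    return P_hat, R_hat

def pseudo_count(psi_tk, psi_0):
    """Equation 13: N^i_{t_k}(s, a) = ||psi^i_{s,.}(t_k)||_1 - ||psi^i_{s,.}(0)||_1."""
    return psi_tk.sum(axis=1) - psi_0.sum(axis=1)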
Lemma 2. (estimation errors) The total estimation error about transition functions accumulated by all exploitation phases satisfies the following bound
Eθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥P ∗ − Pk∥1 ] ≤ 48SN √ NT log(NT ) + 4SN2 √ T +N. (15)
Lemma 2 shows that the accumulated error is bounded by O(√T), which is crucial for obtaining the final bound, as in the observed-states setting (Ortner et al., 2012; Jung & Tewari, 2019). With C1 = L1 + L2 N + N² + S², we state the final bounds on R3 and R4 below; the detailed proofs are in Appendix B.3 and B.4.
Lemma 3. R3 satisfies the following bound
R3 ≤ 48C1SH √ NT logNT + 4C1SNH √ T + C1H.
Lemma 4. R4 satisfies the following bound R4 ≤ 48C1Srmax √ NT log(NT ) + 4C1SNrmax √ T + C1rmax.
6 NUMERICAL EXPERIMENTS
In this section, we present proof-of-concept experiments with an approximate implementation of TSEETC. We consider two arms, each with two hidden states, and pull one arm at each time. The learning horizon is T = 50000, and each algorithm is run for 100 iterations. The transition functions and reward functions are the same for all arms. We initialize the algorithm with uninformative Dirichlet priors on the unknown parameters. We compare our algorithm with the simple heuristic ϵ-greedy (Lattimore & Szepesvári, 2020) (ϵ = 0.01), Sliding-Window UCB (Garivier & Moulines, 2011) with a specified window size, RUCB (Liu et al., 2010), Q-learning (Hu et al., 2020) and SEEU (Zhou et al., 2021). The results are shown in Figure 1. We find that TSEETC has the smallest regret among these algorithms.
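To illustrate the experimental setup, the snippet below simulates a two-arm, two-hidden-state instance and runs the ϵ-greedy baseline (ϵ = 0.01) used in the comparison; since the paper does not report the exact transition and reward matrices, the numerical values here are placeholders of our own choosing.

import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-arm model (identical across arms, as in the experiments)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])                 # P[s, s']: hidden-state transition matrix
R = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # R[s, r_idx]: P(reward = reward_set[r_idx] | state s)
reward_set = np.array([0.0, 1.0])
T, N, eps = 50_000, 2, 0.01

states = rng.integers(0, 2, size=N)        # hidden states, never revealed to the learner
estimates, counts = np.zeros(N), np.zeros(N)
total_reward = 0.0
for t in range(T):
    arm = rng.integers(N) if rng.random() < eps else int(np.argmax(estimates))
    r_idx = rng.choice(len(reward_set), p=R[states[arm]])
    r = reward_set[r_idx]
    total_reward += r
    counts[arm] += 1
    estimates[arm] += (r - estimates[arm]) / counts[arm]
    for i in range(N):                     # every arm transitions, whether pulled or not
        states[i] = rng.choice(2, p=P[states[i]])
print("epsilon-greedy average reward:", total_reward / T)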
In Figure 2, we plot the cumulative regret versus T of the six algorithms in log-log scale. We observe that the slopes of all algorithms except for our TSEETC and SEEU are close to one, suggesting that they incur linear regrets. What is more, the slope of TSEETC is close to 0.5, which is better than SEEU. This is consistent with our theoretical result.
7 CONCLUSION
In this paper, we consider restless bandits with unknown states and unknown dynamics. We propose the TSEETC algorithm to estimate the unknown parameters and derive the optimal policy. We also establish a Bayesian regret bound of Õ(√T) for our algorithm, which is the first bound that matches the lower bound in restless bandit problems with unobserved states. Numerical results validate that TSEETC outperforms other learning algorithms in regret. A related open question is whether our method can be applied to the setting where the transition functions are action-dependent. We leave this for future research.
A TABLE OF NOTATIONS
Notation | Description
T | The length of the horizon
KT | The number of episodes up to time T
Tk | The length of episode k
τ1 | The fixed exploration length in each episode
P^i | The transition function for arm i
R^i | The reward function for arm i
Pk | The sampled transition function for the aggregated MDP
Rk | The sampled reward function for the aggregated MDP
rt | The reward obtained at time t
b^i_t(s, θ) | The belief of being in state s at time t for arm i with parameter θ
b̂t | The belief of all arms at time t with parameter θk
b∗t | The belief of all arms at time t with parameter θ∗
at | The action at time t
J(θk) | The optimal long-term average reward with parameter θk
rmax | The maximum reward obtainable at each time
rmin | The minimum reward obtainable at each time
∆R | The biggest gap of the obtained reward
B PROOF OF THEOREM 1
Recall that our goal is to minimize the regret : RT := Eθ∗ [ T∑ t=1 (J(θ∗)− rt) ] . (16)
The reward rt depends on the state st and the action at, so it can be written as r(st, at). Since Eθ∗ [r(st, at) | Ht−1] = r(b∗t , at) for any t, we have
RT := Eθ∗ [ T∑ t=1 (J(θ∗)− r(b∗t , at)) ] . (17)
In our algorithm, each episode is split into the exploration and exploitation phase then we can rewrite the regret as:
RT = Eθ∗ [ kT∑ k=1 tk+τ1∑ tk (J(θ∗)− r (b∗t , at)) + kT∑ k=1 tk+1−1∑ tk+τ1+1 (J(θ∗)− r (b∗t , at)) ] , (18)
where τ1 is the exploration length of each episode, which is a constant, and tk is the start time of episode k. Define the first part as Regret (A), which is caused by the exploration operations; the other part is Regret (B). The two parts are as follows.
Regret (A) = Eθ∗ [ kT∑ k=1 tk+τ1∑ tk (J(θ∗)− r (b∗t , at)) ] ,
Regret (B) = Eθ∗ [ kT∑ k=1 tk+1−1∑ tk+τ1+1 (J(θ∗)− r (b∗t , at)) ] .
Recall that the reward set isR and we define the maximum reward gap inR as ∆R = rmax−rmin. Then we get: J(θ∗)− r (b∗t , at) ≤ ∆R. Then Regret (A) can be simply upper bounded as follows:
Regret (A) ≤ Eθ∗ [ kT∑ k=1 τ1∆R ] ≤ τ1∆RkT .
Regret (A) is related with the episode number kT obviously, which is bounded in Lemma 10. Next we should bound the term Regret (B).
During the episode k, based on equation 2, we get: J (θk) + v(b̂t, θk) = r(b̂t, at) + ∑ r P (r | b̂t, at, θk)v (b′, θk) , (19)
where J (θk) is the optimal long-term average reward when the system parameter is θk, b̂t is the belief at time t updated with parameter θk, r(b̂t, at) is the expected reward we can get when the action at is taken for the current belief b̂t, b′ is the updated belief based on equation 4 with parameter θk when the reward r is received.
Using this equation, we proceed by decomposing the regret as:
Regret(B) = R1 +R2 +R3 +R4, (20)
where
R1 = Eθ∗ kT∑ k=1 [(Tk − τ1 − 1) (J(θ∗)− J(θk))] ,
R2 = Eθ∗ kT∑ k=1 [ tk+1−1∑ tk+τ1+1 ( v(b̂t+1, θk)− v(b̂t, θk) )] ,
R3 = Eθ∗ kT∑ k=1 [ tk+1−1∑ tk+τ1+1 (∑ r P (r | b̂t, at, θk)v(b′, θk)− v(b̂t+1, θk) )] ,
R4 = Eθ∗ kT∑ k=1 [ tk+1−1∑ tk+τ1+1 ( r(b̂t, at)− r(b∗t , at) )] .
Next we bound the four parts one by one.
B.1 BOUND R1
Lemma 5. R1 satisfies that R1 = 0.
Proof. Recall that:
R1 = Eθ∗ kT∑ k=1 [(Tk − τ1 − 1) (J(θ∗)− J(θk)] .
For each episode, Tk is deterministic and independent of θk. Based on Lemma 13, we know that
Eθ∗ [J(θ∗)] = Eθ∗ [J(θk)].
Therefore, R1 = 0.
B.2 BOUND R2
Lemma 6. R2 satisfies the following bound
R2 ≤ HKT , where KT is the total number of episodes until time T .
Proof. Recall that R2 is the telescoping sum of value function at time t+ 1 and t.
R2 = Eθ∗ kT∑ k=1
[ tk+1−1∑
t=tk+τ1+1
[ v(b̂t+1, θk)− v(b̂t, θk) ]] . (21)
We consider the whole sum in episode k, then the R2 can be rewrite as:
R2 = Eθ∗ kT∑ k=1 [ v(b̂tk+1 , θk)− v(b̂tk+τ1+1, θk) ] .
Due to the span of v(b, θ) is bounded by H as in proposition 1 , then we can obtain the final bound,
R2 ≤ HKT .
B.3 BOUND R3
In this section, we first rewrite the R3 in section B.3.1. In section B.3.2, we show the details about how to bound R3.
B.3.1 REWRITE R3
Lemma 7. (Rewrite R3 ) The regret R3 can be bounded as follows: R3 ≤ HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||P ∗ − Pk||1 ] +HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||b∗t − b̂t||1 ]
+ S2HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥R∗ −Rk∥1 ] ,
where Pk is the sampled transition functions in episode k, Rk is the sampled reward functions in episode k, b∗t is the belief at time t updated with true P
∗ and R∗, b̂t is the belief at time t updated with sampled Pk, Rk.
Proof. Most of the proof is similar to Jahromi et al. (2022), except that we also need to handle the unknown reward functions.
Recall that R3 = Eθ∗ ∑kT k=1 [∑tk+1−1 t=tk+τ1+1 (∑ r P (r | b̂t, at, θk)v (b′, θk)− v(b̂t+1, θk) )] .
Recall that Ht is the history of actions and observations prior to action at. Conditioned on Ht, θ∗ and θk, the only random variable in b̂t+1 is rt+1, then we can get,
Eθ∗ [ v(b̂t+1, θk) | Ht, θk ] = ∑ r∈R v (b′, θk)P (r | b∗t , at, θ∗), (22)
where P (r | b∗t , at, θ∗) is the probability of getting reward r given b∗t , at, θ∗. By the law of probability, P (r | b∗t , at, θ∗) can be written as follows,
P (r | b∗t , at, θ∗) = ∑ s′ R∗ (r | s′)P (st+1 = s′ | Ht, θ∗)
= ∑ s′ R∗ (r | s′) ∑ s P ∗ (st+1 = s ′ | st = s,Ht, at, θ∗)P (st = s | Ht, θ∗)
= ∑ s ∑ s′ b∗t (s)P ∗ (s′ | s)R∗ (r | s′) ,
(23) where P ∗ is the transition functions for the MDP aggregated by all arms, R∗ is the reward function for the aggregated MDP. Therefore, we can rewrite the R3 as follows,
R3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r∈R (P (r | b̂t, at, θk)− P (r | b∗t , at, θ∗)v (b′, θk) )] .
Based on equation 23, we get R3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)R ∗ (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )]
= Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] + Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] − Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)R ∗ (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] .
(24)
where Rk is the sampled reward function for aggregated MDP, Pk is the sampled transition function for aggregated MDP.
Define R′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) [∑ s b̂t(s)Pk(s ′ | s)− ∑ s b∗t (s)P ∗(s′ | s) ])] ,
R′′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk) [Rk (r | s′)−R∗ (r | s′)] ∑ s b∗t (s)P ∗(s′ | s) )] .
Bounding R′3. The part R′3 can be bounded as Jahromi et al. (2022). R′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) [∑ s b̂t(s)Pk(s ′ | s)− ∑ s b∗t (s)P ∗(s′ | s)
])] = R′3(0) +R ′ 3(1)
where R′3(0) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
R′3(1) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )]
For R′3(0), because ∑ r Rk (r | s′) = 1, ∑ s′ Pk(s ′ | s) = 1,v (b′, θk) ≤ H , we have
R′3(0) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
= Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s (b̂t(s)− b∗t (s)Pk(s′ | s) )]
≤ Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s |b̂t(s)− b∗t (s)|Pk(s′ | s) )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s |b̂t(s)− b∗t (s)| )]
= HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∥∥∥b̂t(s)− b∗t (s)∥∥∥ 1 )] ,
where the first inequality is due to b̂t(s) − b∗t (s) ≤ |b̂t(s) − b∗t (s)| and the second inequality is because ∑ r Rk (r | s′) = 1, ∑ s′ Pk(s
′ | s) = 1, v (b′, θk) ≤ H . For the first term inR′3(1) , note that conditioned onHt, θ∗, the distribution of st is b∗t . Furthermore, at is measurable with respect to the sigma algebra generated byHt, θk since at = π∗(b̂t, θk). Thus, we have
Eθ∗ [ v (b′, θk)
∑ s P ∗ (s′ | s) b∗(s) | Ht, θk
] = v (b′, θk)Eθ∗ [P ∗ (s′ | s) | Ht, θk] . (25)
Eθ∗ [ v (b′, θk)
∑ s Pk (s ′ | s) b∗(s) | Ht, θk
] = v (b′, θk)Eθ∗ [Pk (s′ | s) | Ht, θk] . (26)
Substitute equation 25, equation 26 into R′3(1), we have
R′3(1) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) (Pk(s′ | s)− P ∗(s′ | s)) )]
≤ Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) |Pk(s′ | s)− P ∗(s′ | s)| )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s′ |Pk(s′ | s)− P ∗(s′ | s)| )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∥Pk − P ∗∥1) ] ,
where the first inequality is because Pk(s′ | s)− P ∗(s′ | s) ≤ |Pk(s′ | s)− P ∗(s′ | s)|, the second inequality is due to v (b′, θk) ≤ H and ∑ r Rk (r | s′) = 1.
Therefore we obtain the final results,
R′3 ≤ HE [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||P ∗ − Pk||1 ] +HE [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||b∗t − b̂t||1 ] .
Bounding R′′3 . For part R′′3 , note that for any fixed s′, ∑ s b ∗ t (s)P
∗(s′ | s) ≤ S, therefore we can bound R′′3 as follows,
R′′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk) [Rk (r | s′)−R∗ (r | s′)] ∑ s b∗t (s)P ∗(s′ | s) )]
≤ SHEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s′ ∑ r [Rk (r | s′)−R∗ (r | s′)] )]
≤ SHEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 S ∥Rk −R∗∥1 ]
≤ S2HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥Rk −R∗∥1 ] ,
(27) where the first inequality is due to v (b′, θk) ≤ H and ∑ s b ∗ t (s)P
∗(s′ | s) ≤ S , the second inequality is due to for any fixed s′, ∑ r [Rk (r | s′)−R∗ (r | s′)] ≤ ∥Rk −R∗∥1.
B.3.2 BOUND R3
Lemma 8. R3 satisfies the following bound R3 ≤ 48(L1 + L2N +N + S2)SH √ NT log(NT ) + (L1 + L2N +N + S 2)H
+ 4(L1 + L2N +N 2 + S2)SNH(T1 +KT − τ1 − 1).
Proof. Recall that the R3 is as follows:
R3 = Eθ∗ kT∑ k=1
[ tk+1−1∑
t=tk+τ1+1
(∑ r P [r | b̂t, at, θk]v (b′, θk)− v(b̂t+1, θk) )] .
These regret terms deal with the model estimation errors. That is to say, they depend on the on-policy error between the sampled and true transition functions, and between the sampled and true reward functions. Thus we need to bound the parameter errors, especially in our unobserved-state setting. Based on the parameters of our Dirichlet distributions, we define the empirical estimates of the reward and transition functions for arm i as follows:
P̂ i(s′ | s)(t) = ϵ1 + (1− ϵ1)ϕis,s′(t) Sϵ1 + (1− ϵ1) ∥∥ϕis,·(t)∥∥1 , R̂i(r | s)(t) = ϵ2 + (1− ϵ2)ψis,r(t) Sϵ2 + (1− ϵ2) ∥∥ψis,·(t)∥∥1 . (28) where ϕis,s′(t) is the parameters in the posterior distribution of P
i at time t, ψis,r(t) is the parameters in the posterior distribution of Ri at time t. We also define the pseudo count N itk(s, a) of the stateaction pair (s, a) before the episode k for arm i as
N itk(s, a) = ∥∥ψis,·(tk)∥∥1 − ∥∥ψis,·(0)∥∥1 .
For notational simplicity, we use z = (s, a) ∈ S ×A and zt = (st, at) to denote the corresponding state-action pair. Then based on Lemma 7 we can decompose the R3 as follows,
R3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r P [r | b̂t, at, θk]v (b′, θk)− v(b̂t+1, θk) )]
= Eθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 [∑ r ( P (r | b̂t, at, θk)− P (r | b∗t , at, θ∗) ) v(b′, θk) ]] ≤ R03 +R13 +R23
where
R03 = HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥P ∗ − Pk∥1 ] ,
R13 = HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||b∗t − b̂t||1 ] ,
R23 = S 2HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥R∗ −Rk∥1 ] .
Note that the following results are all focused on one arm. Define P i∗ is the true transition function for arm i, P ik is the sampled transition function for arm i. We can extend the results on a arm to the aggregated large MDP based on Lemma 11.
Bounding R03. Since 0 ≤ v (b′, θk) ≤ H from our assumption , each term in the inner summation is bounded by ∑
s′∈S | ( P i∗ (s ′ | zt)− P ik (s′ | zt) ) |v (s′, θk)
≤H ∑ s′∈S ∣∣P i∗ (s′ | zt)− P ik (s′ | zt)∣∣ ≤H
∑ s′∈S ∣∣∣P i∗ (s′ | zt)− P̂ ik (s′ | zt)∣∣∣+H ∑ s′∈S ∣∣∣P ik (s′ | zt)− P̂ ik (s′ | zt)∣∣∣ . where P i∗ (s
′ | zt) is the true transition function, P ik (s′ | zt) is the sampled reward function and P̂ ik (s
′ | zt) is the posterior mean. The second inequality above in due to triangle inequality. Let Mik be the set of plausible MDPs in episode k with reward functionR (r | z) and transition function P (s′ | z) satisfying,∑
s′∈S ∣∣∣P (s′ | z)− P̂ ik (s′ | z)∣∣∣ ≤ βik(z), ∑ r∈R ∣∣∣R (r | z)− R̂ik (r | z)∣∣∣ ≤ βik(z), where βik(s, a) := √ 14S log(2NtkT )
max{1,Nitk (s,a)} is chosen conservatively (Auer et al., 2008) so thatMik con-
tains both P i∗ and P i k, R i ∗ and R i k with high probability. P i ∗ and R i ∗ are the true parameters as we defined in section 4.1. Note that βik(z) is the confidence set with δ = 1/tk. Recall the definition ofψ, we can define the pseudo count of state-action pair (s, a) as N itk(s, a) = ∥∥ψis,·(tk)∥∥1 − ∥∥ψis,·(0)∥∥1. Then we can obtain,∑
s′∈S ∣∣∣P i∗ (s′ | zt)− P̂ ik (s′ | zt)∣∣∣+ ∑ s′∈S ∣∣∣P ik (s′ | zt)− P̂ ik (s′ | zt)∣∣∣ ≤ 2βik (zt) + 2 ( I{P i∗ /∈Bk} + I{P ik /∈Bk} ) .
(29)
We assume the last episode is the longest. Note that even if this assumption does not hold, we can enlarge the summands to TKT−1 − τ1, which does not affect the order of our regret bound. Under this assumption, since no episode is longer than the last one, that is, tk+1 − 1 − (tk + τ1) ≤ TKT − τ1, we obtain,
KT∑ k=1 tk+1−1∑ t=tk+τ1 βik (zt) ≤ KT∑ k=1 TkT −τ1∑ t=1 βik (zt) . (30)
Note that ∑ s′∈S ∣∣∣P i∗ (s′ | zt)− P̂ ik (s′ | zt)∣∣∣ ≤ 2 is always true. And with our assumption τ1 ≤ T1+KT−1
2 , it is easy to show that when N i tk ≥ TkT − τ1, βik (zt) ≤ 2 holds. Then we can obtain,
KT∑ k=1 TkT −τ1∑ t=1 min{2, βik (zt)} ≤ KT∑ k=1 TkT −τ1∑ t=1 2I(N itk < TkT − τ1)
+ KT∑ k=1 TkT −τ1∑ t=1 I(N itk ≥ TkT − τ1)
√ 14S log (2NtkT )
max ( 1, N itk (zt)
) . (31)
Consider the first part in equation 31. Obviously, the maximum ofN itk is TkT −τ1. Because there are totally SA state-action pairs, therefore, the first part in equation equation 31 can be bounded as,∑KT k=1 ∑TkT −τ1 t=1 2I(N itk < TkT − τ1) ≤ 2(TkT − τ1)SA. Due to TkT = T1 +KT − 1 and Lemma 10, we get , 2(TkT − τ1)SA = 2(T1 +KT − τ1 − 1)SA = O( √ T ).
Consider the second part in 31. Denote the N it (s, a) is the count of (s, a) before time t(not including t). Due to we just consider the exploration phase in each episode, then N it (s, a) can be calculated as follows,
N it (s, a) = ∣∣{τ < t, τ ∈ [tk, tk + τ1], k ≤ k(t) : (siτ , aiτ) = (s, a)}∣∣ ,
where k(t) is the episode number where the time t is in.
In the second part in equation 31, when N itk ≥ TkT − τ1, based on our assumption τ1 ≤ T1+KT−1
2 , we can get,
τ1 ≤ T1 +KT − 1
2 ,
2τ1 ≤ T1 +KT − 1 = TkT . therefore, TkT − τ1 ≥ τ1. Because N itk ≥ TkT − τ1, then N i tk (s, a) ≥ τ1. For any t ∈ [tk, tk + τ1],we have N it (s, a) ≤ N itk(s, a) + τ1 ≤ 2N i tk (s, a).
Therefore N it (s, a) ≤ 2N itk(s, a). Next we can bound the confidence set when Nt(s, a) ≤ 2Ntk(s, a) as follows,
KT∑ k=1 TkT −τ1∑ t=1 βik (zt) ≤ KT∑ k=1 tk+1−1∑ t=tk
√ 14S log (2NtkT )
max ( 1, N itk (zt) ) ≤
KT∑ k=1 tk+1−1∑ t=tk
√ 14S log (2NT 2)
max ( 1, N itk (zt) ) =
T∑ t=1
√ 28S log (2NT 2)
max ( 1, N it (zt) ) ≤ √ 56S log(2NT )
T∑ t=1 1√ max ( 1, N it (zt) ) .
(32)
where the second inequality in equation 32 is due to tk ≤ T for all episodes and the first equality is due to N it (s, a) ≤ 2N itk(s, a).
Then similar to Ouyang et al. (2017), since N it (zt) is the count of visits to zt, we have T∑ t=1 1√ max ( 1, N it (zt) ) =∑ z T∑ t=1 I{zt=z}√ max ( 1, N it (z)
) = ∑ z I{NiT+1(z)>0} + NiT+1(z)−1∑ j=1 1√ j
≤ ∑ z ( I{NiT+1(z)>0} + 2 √ N iT+1(z) ) ≤ 3 ∑ z √ N iT+1(z).
Since ∑ z N i T+1(z) ≤ T , we have
3 ∑ z √ N iT+1(z) ≤ 3 √ SN ∑ z N iT+1(z) = 3 √ SNT . (33)
With equation 32 and equation 33 we get
2H KT∑ k=1 tk+1−1∑ t=tk βik (zt) ≤ 6 √ 56HS √ NT log(NT ) ≤ 48HS √ NT log(NT ).
Then we can bound the equation 30 as follows,
KT∑ k=1 tk+1−1∑ t=tk βik (zt) ≤ 24S √ NT log(NT ) + 2SA(T1 +KT − τ1 − 1). (34)
Choose the δ = 1/T in Lemma 12, and based by Lemma 13, we obtain that
P ( P ik /∈ Bk ) = P ( P i∗ /∈ Bk ) ≤ 1
15Tt6k .
Then we can obtain, 2Eθ∗ [ KT∑ k=1 Tk ( I{θ∗ /∈Bk} + I{θk /∈Bk} )] ≤ 4 15 ∞∑ k=1 t−6k ≤ 4 15 ∞∑ k=1 k−6 ≤ 1. (35)
Therefore we obtain
2HEθ∗ [ KT∑ k=1 Tk ( I{θ∗ /∈Bk} + I{θk /∈Bk} )] ≤ H. (36)
Therefore, we can obtain the bound for one arm as follows, Eθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s′∈S ( P i∗ (s ′ | zt)− P ik (s′ | zt) ) v (s′, θk) )] ≤ H + 4SNH(T1 +KT − τ1 − 1) + 48HS √ NT log(NT ).
(37)
Next we consider the state transition of all arms. Recall that the states of all arms at time t is st. Because every arm evolves independently, then the transition probability from state st to state st+1 is as follows,
P (st+1 | st, θ∗) = N∏ i=1 P i∗ ( sit+1 | sit ) ,
where P i∗ is the true transition functions of arm i. Based by the Lemma 11 and our assumption that all arms have the same state space S, we can obtain∑
st+1 |P (st+1 | st, θ∗)− P (st+1 | st, θk)| ≤ N∑ i ∥∥P i∗ (sit+1 | sit)− P ik (sit+1 | sit)∥∥1 ≤ N ∥∥P i∗ (sit+1 | sit)− P ik (sit+1 | sit)∥∥1 . (38)
Therefore, we can bound the R03 as follows: R03 ≤ NH + 4SN2H(T1 +KT − τ1 − 1) + 48SNH √ NT log(NT ). (39)
Bounding R13. Based on the Proposition 2, we know that∥∥∥b∗t − b̂t∥∥∥ 1 ≤ L1∥R∗ −Rk∥1 + L2 max s ∥P ∗(s, :)− Pk(s, :)∥2 .
Note that the elements in the true transition matrix P ∗ and the sampled matrix Pk is between the interval (0, 1). Then based on the facts about norm, we know that
max s ∥P ∗(s, :)− Pk(s, :)∥2 ≤ ∥P ∗ − Pk∥1 .
Therefore , we can bound the belief error at any time as follows:∥∥∥b∗t − b̂t∥∥∥ 1 ≤ L1∥R∗ −Rk∥1 + L2∥P ∗ − Pk∥1. (40)
Recall in the confidence for Mk, the error bound is the same for ∥R∗ −Rk∥1 and ∥P ∗ − Pk∥1, and based by the bound in equation 34 and equation 35, we can bound the R13 as follows:
R13 ≤ HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk (L1∥R∗ −Rk∥1 + L2∥P ∗ − Pk∥1) ]
≤ (L1 + L2N)HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk ( 2βik (zt) + 2 ( I{P∗ /∈Bk} + I{Pk /∈Bk} ))] ≤ 48(L1 + L2N)SH √ NT log(NT ) + (L1 + L2N)H
4(L1 + L2N)SNH(T1 +KT − τ1 − 1).
(41)
Bounding R23. Based on equation 34 and equation 35, we can bound R23 as follows,
R23 = S 2HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥R∗(· | s)−Rk(· | s)∥1 ]
≤ S2HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ( 2βik (zt) + 2 ( I{R∗ /∈Bk} + I{Rk /∈Bk} ))] ≤ HS2 + 4S3NH(T1 +KT − τ1 − 1) + 48HS3 √ NT log(NT ).
(42)
Combine the bound in equation 39, equation 41 and equation 42, we bound the term R3 as follows: R3 ≤ 48(L1 + L2N)SH √ NT log(NT ) + 4(L1 + L2N)SNH(T1 +KT − τ1 − 1)
+ (L1 + L2N)H +NH + 4SN 2H(T1 +KT − τ1 − 1) + 48SNH √ NT log(NT )
+HS2 + 4S3NH(T1 +KT − τ1 − 1) + 48HS3 √ NT log(NT )
= 48(L1 + L2N +N + S 2)SH √ NT log(NT ) + (L1 + L2N +N + S 2)H
+ 4(L1 + L2N +N 2 + S2)SNH(T1 +KT − τ1 − 1).
(43)
B.4 BOUND R4
Lemma 9. R4 satisfies the following bound R4 ≤ 48(L1 + L2N +N + S2)Srmax √ NT log(NT ) + (L1 + L2N +N + S 2)rmax
+ 4(L1 + L2N + N + S²)SA rmax (T1 + KT − τ1 − 1).
| 1. What is the focus of the paper regarding online restless bandits with unknown parameters and unobservable states?
2. What are the strengths and weaknesses of the proposed algorithm TSEETC, particularly in terms of its theoretical understanding and notations?
3. Do you have any concerns regarding the regret bound dependency on S and N, and the comparison with existing works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper focuses on solving online restless bandits with unknown parameters and unobservable states by the proposed algorithm TSEETC. A Bayesian regret bound with Õ(√T) dependency is established, which matches the lower bound dependency on T and improves the existing Õ(T^{2/3}) bound derived for a related problem setting by Zhou et al. Also, simulations show that TSEETC outperforms existing algorithms in regret as a proof of concept.
Strengths And Weaknesses
Strengths
This paper is a solid work providing theoretical understanding for a well defined problem, which has characteristics from both bandit and POMDP and is novel with no existing work addressing the exact same setting.
The Õ(√T) dependency, matching the lower bound dependency on T, is a significant improvement compared to the existing bounds of T^{2/3}. However, I'm not fully convinced by this improved dependency and have concerns about the regret's dependency on S and N, the number of states and arms respectively. See the more detailed discussion in weaknesses and concerns.
Weaknesses
Notations are somewhat sloppy. To name a few which caused me the most trouble while reading:
Even in the main theorem, Theorem 1: the notation A comes out of nowhere; I assume it should be the number of arms N.
In the main algorithm, Algorithm 2: (i) Line 4, g_{t_k}(P) and g_{t_k}(R) could be misleading. If following Lemma 1, g_{t_k}(P) should refer to the posterior of P conditioned on the history up to t_k; however, it could also mean g_{t_{k−1}+τ_1}(P), which is what I assume the author is actually referring to. These two interpretations differ drastically, since it depends on whether the data from the exploitation phase is used to update the posterior or not. (ii) Line 12, it's not clear what the obtained r̄_{τ_1} and b̄_{τ_1} are, though in this case I can guess them from the context.
Some others in the main text, like M_∗ and M_k on page 9. Also, I came across incomplete or repeated sentences in the appendix.
Though the paper is written in a well-organized way most of the time, notations coming out of the blue and not rigorous statements in the main algorithm make it confusing for ones who are trying to parse the algorithm and theorem to get some intuitions behind the math. Sloppy notations truly harm the clarity as well as the formality of this paper.
Exponential dependency on S: It feels the exponential dependency S^N appearing in constant C_1 is far from tight, given the Markov chain associated with each arm is independent. To compare with, the regret by Zhou et al scales linearly with M, which is the number of hidden states in the common MC shared by all arms. In the restless bandit setting, the complexity should be that of M independent MCs with S hidden states rather than one MC with S^N hidden states.
Other Concerns
Why √T? More intuition is needed. I'm not fully convinced by why TSEETC is able to improve the regret from the T^{2/3} by Zhou, whose algorithm mostly resembles TSEETC, except for using the UCB estimator constructed with the spectral method for HMM. Based on comparing both algorithms and going through the proof sketch, what directly improves the bound is that TSEETC has a longer exploitation phase in each episode and thus there are only √T episodes, fewer than the T^{2/3} by Zhou et al. Given both algorithms do not use the on-policy data in the exploitation phase (by the way, I assume this happens for TSEETC because the notation is not clear), it implies posterior sampling concentrates to the ground-truth model parameter better than UCB in terms of sample efficiency. It seems kind of counterintuitive based on the understanding of TS vs. UCB from the classic bandit literature, or is the bottleneck of Zhou et al the spectral estimator around which the UCB is constructed?
Bayesian regret. This concern relates to the previous one. A common understanding from the classic bandit literature is that UCB and TS based algorithms usually have regrets of the same order, and TS based algorithms have strictly worse regret bounds from a frequentist view. I’d like to know if it’s possible to have a UCB-based algorithm achieving \sqrt{T} regret.
Computational cost of posteriors. To compute the exact posterior, one has to exhaust all possible state transitions of length τ_1, which means a total number of passes exponential in τ_1, over √T episodes. Though τ_1 would be of a constant order in theory, does this impose a higher computational cost when realizing TSEETC than SEEU in practice?
Clarity, Quality, Novelty And Reproducibility
Clarity in writing could be improved. Quality and novelty have been evaluated in Strengths/Weaknesses/Concerns in detail. Overall, this paper has sufficient novelty for the problem it studies and the results it claims to get, of which I may need more evidence/intuition to be convinced. If the notation could be revised carefully throughout the paper, then the quality of presentation is good. I didn't check the reproducibility of the simulations but I'd like to believe the results are reproducible.
ICLR | Title
ONLINE RESTLESS BANDITS WITH UNOBSERVED STATES
Abstract
We study the online restless bandit problem, where each arm evolves according to a Markov chain independently, and the reward of pulling an arm depends on both the current state of the corresponding Markov chain and the action. The agent (decision maker) does not know the transition kernels and reward functions, and cannot observe the states of arms even after pulling. The goal is to sequentially choose which arms to pull so as to maximize the expected cumulative rewards collected. In this paper, we propose TSEETC, a learning algorithm based on Thompson Sampling with Episodic Explore-Then-Commit. The algorithm proceeds in episodes of increasing length and each episode is divided into exploration and exploitation phases. In the exploration phase in each episode, action-reward samples are collected in a round-robin way and then used to update the posterior as a mixture of Dirichlet distributions. At the beginning of the exploitation phase, TSEETC generates a sample from the posterior distribution as true parameters. It then follows the optimal policy for the sampled model for the rest of the episode. We establish the Bayesian regret bound Õ( √ T ) for TSEETC, where T is the time horizon. This is the first bound that is close to the lower bound of restless bandits, especially in an unobserved state setting. We show through simulations that TSEETC outperforms existing algorithms in regret.
N/A
1 INTRODUCTION
The restless multi-armed problem (RMAB) is a general setup to model many sequential decision making problems ranging from wireless communication (Tekin & Liu, 2011; Sheng et al., 2014), sensor/machine maintenance (Ahmad et al., 2009; Akbarzadeh & Mahajan, 2021) and healthcare (Mate et al., 2020; 2021). This problem considers one agent and N arms. Each arm i is modulated by a Markov chain M i with state transition function P i and reward function Ri. At each time, the agent decides which arm to pull. After the pulling, all arms undergo an action-dependent Markovian state transition. The goal is to decide which arm to pull to maximize the expected reward, i.e., E[ ∑T t=1 rt], where rt is the reward at time t and T is the time horizon.
In this paper, we consider the online restless bandit problem with unknown parameters (transition functions and reward functions) and unobserved states. Many works concentrate on learning unknown parameters (Liu et al., 2010; 2011; Ortner et al., 2012; Wang et al., 2020; Xiong et al., 2022a;b) while ignoring the possibility that the states are also unknown. The unobserved states assumption is common in real-world applications, such as cache access (Paria & Sinha, 2021) and recommendation system (Peng et al., 2020). In the cache access problem, the user can only get the perceived delay but cannot know whether the requested content is stored in the cache before or after the access. Moreover, in the recommender system, we do not know the user’s preference for the items. There are also some studies that consider the unobserved states. However, they often assume the parameters are known (Mate et al., 2020; Meshram et al., 2018; Akbarzadeh & Mahajan, 2021) and there is a lack of theoretical result (Peng et al., 2020; Hu et al., 2020). And the existing algorithms (Zhou et al., 2021; Jahromi et al., 2022) with theoretical guarantee do not match the lower regret bound of RMAB (Ortner et al., 2012).
One common way to handle the unknown parameters but with observed states is to use the optimism in the face of uncertainty (OFU) principle (Liu et al., 2010; Ortner et al., 2012; Wang et al., 2020). The regret bound in these works is too weak sometimes, because the baseline they consider, such
as pulling the fixed arms (Liu et al., 2010), is not optimal in RMAB problem. Ortner et al. (2012) derives the lower bound Õ( √ T ) for RMAB problem. However, it is not clear whether there is an efficient computational method to search out the optimistic model in the confidence region (Lakshmanan et al., 2015). Another way to estimate the unknown parameters is Thompson Sampling (TS) method (Jung & Tewari, 2019; Jung et al., 2019; Jahromi et al., 2022; Hong et al., 2022). TS algorithm does not need to solve all instances that lie within the confident sets as OFU-based algorithms (Ouyang et al., 2017). What’s more, empirical studies suggest that TS algorithms outperform OFUbased algorithms in bandit and Markov decision process (MDP) problems (Scott, 2010; Chapelle & Li, 2011; Osband & Van Roy, 2017).
Some studies assume that only the states of pulled arms are observable (Mate et al., 2020; Liu & Zhao, 2010; Wang et al., 2020; Jung & Tewari, 2019). They translate the partially observable Markov decision process (POMDP) problem into a fully observable MDP by regarding the state last observed and the time elapsed as a meta-state (Mate et al., 2020; Jung & Tewari, 2019), which is much simpler due to more observations about pulled arms. Mate et al. (2020), and Liu & Zhao (2010) derive the optimal index policy but they assume the known parameters. Restless-UCB in Wang et al. (2020) achieves the regret bound of Õ(T 2/3), which does not match the lower bound Õ( √ T ) regret, and also restricted to a specific Markov model. There are also some works that consider that the arm’s state is not visible even after pulling (Meshram et al., 2018; Akbarzadeh & Mahajan, 2021; Peng et al., 2020; Hu et al., 2020; Zhou et al., 2021; Yemini et al., 2021) and the classic POMDP setting (Jahromi et al., 2022). However, there are still some challenges unresolved. Firstly, Meshram et al. (2018) and Akbarzadeh & Mahajan (2021) study the RMAB problem with unobserved states but with known parameters. However, the true value of the parameters are often unavailable in practice. Secondly, the works study RMAB from a learning perspective, e.g., Peng et al. (2020); Hu et al. (2020) but there are no regret analysis. Thirdly, existing policies with regret bound Õ(T 2/3) (Zhou et al., 2021; Jahromi et al., 2022) often do not have a regret guarantee that scales as Õ( √ T ), which is the lower bound in RMAB problem (Ortner et al., 2012). Yemini et al. (2021) considers the arms are modulated by two unobserved states and with linear reward. This linear structure is quite a bit of side information that the decision maker can take advantage of for decision making and problem-dependent log(T ) is given.
To the best of our knowledge, there are no provably optimal policies that perform close to the offline optimum and match the lower bound in restless bandit, especially in unobserved states setting. The unobserved states bring much challenges to us. Firstly, we need to control estimation error about states, which itself is not directly observed. Secondly, the error depends on the model parameters in a complex way via Bayesian updating and the parameters are still unknown. Thirdly, since the state is not fully observable, the decision-maker cannot keep track of the number of visits to state-action pairs, a quantity that is crucial in the theoretical analysis. We design a learning algorithm TSEETC to estimate these unknown parameters, and benchmarked on a stronger oracle, we show that our algorithm achieves a tighter regret bound. In summary, we make the following contributions:
Problem formulation. We consider the online restless bandit problems with unobserved states and unknown parameters. Compared with Jahromi et al. (2022), our reward functions are unknown.
Algorithmic design. We propose TSEETC, a learning algorithm based on Thompson Sampling with Episodic Explore-Then-Commit. The whole learning horizon is divided into episodes of increasing length. Each episode is split into exploration and exploitation phases. In the exploration phase, to estimate the unknown parameters, we update the posterior distributions about unknown parameters as a mixture of Dirichlet distributions. For the unobserved states, we use the belief state to encode the historical information. In the exploitation phases, we sample the parameters from the posterior distribution and derive an optimal policy based on the sampled parameter. What’s more, we design the determined episode length in an increasing manner to control the total episode number, which is crucial to bound the regret caused by exploration.
Regret analysis. We consider a stronger oracle which solves POMDP based on our belief state. And we define the pseudo-count to store the state-action pairs. Under a Bayesian framework, we show that the expected regret of TSEETC accumulated up to time T is bounded by Õ( √ T ) , where
Õ hides logarithmic factors. This bound improves the existing results (Zhou et al., 2021; Jahromi et al., 2022).
Experiment results. We conduct the proof-of-concept experiments, and compare our policy with existing baseline algorithms. Our results show that TSEETC outperforms existing algorithms and achieve a near-optimal regret bound.
2 RELATED WORK
We review the related works in two main domains: learning algorithm for unknown parameters, and methods to identify unknown states.
Unknown parameters. Since the system parameters are unknown in advance, it is essential to study RMAB problems from a learning perspective. Generally speaking, these works can be divided into two categories: OFU (Ortner et al., 2012; Wang et al., 2020; Xiong et al., 2022a; Zhou et al., 2021; Xiong et al., 2022b) or TS based (Jung et al., 2019; Jung & Tewari, 2019; Jahromi et al., 2022; Hong et al., 2022). The algorithms based on OFU often construct confidence sets for the system parameters at each time, find the optimistic estimator that is associated with the maximum reward, and then select an action based on the optimistic estimator. However, these methods may not perform close to the offline optimum because the baseline policy they consider, such as pulling only one arm, is often a heuristic policy and not optimal. In this case, the regret bound O(log T ) (Liu et al., 2010) is less meaningful. Apart from these works, posterior sampling (Jung & Tewari, 2019; Jung et al., 2019) were used to solve this problem. A TS algorithm generally samples a set of MDP parameters randomly from the posterior distribution, then actions are selected based on the sampled model. Jung & Tewari (2019) and Jung et al. (2019) provide theoretical guarantee Õ( √ T ) in the Bayesian setting. TS algorithms are confirmed to outperform optimistic algorithms in bandit and MDP problems (Scott, 2010; Chapelle & Li, 2011; Osband & Van Roy, 2017).
Unknown states. There are some works that consider the states of the pulled arm are observed (Mate et al., 2020; Liu & Zhao, 2010; Wang et al., 2020; Jung & Tewari, 2019). Mate et al. (2020) and Liu & Zhao (2010) assumes the unobserved states but with known parameters. Wang et al. (2020) constructs an offline instance and give the regret bound Õ(T 2/3). Jung & Tewari (2019) considers the episodic RMAB problems and the regret bound Õ( √ T ) is guaranteed in the Bayesian setting. Some studies assume that the states are unobserved even after pulling. Akbarzadeh & Mahajan (2021) and Meshram et al. (2018) consider the RMAB problem with unknown states but known system parameters. And there is no regret guarantee. Peng et al. (2020) and Hu et al. (2020) consider the unknown parameters but there are also no any theoretical results. The most similar to our work is Zhou et al. (2021) and Jahromi et al. (2022). Zhou et al. (2021) considers that all arms are modulated by a common unobserved Markov Chain. They proposed the estimation method based on spectral method (Anandkumar et al., 2012) and learning algorithm based on upper confidence bound (UCB) strategy (Auer et al., 2002). They also give the regret bound Õ(T 2/3) and there is a gap between the lower bound Õ( √ T ) (Ortner et al., 2012). Jahromi et al. (2022) considers the POMDP setting and propose the pseudo counts to store the state-action pairs. Their learning algorithm is based on Ouyang et al. (2017) and the regret bound is also Õ(T 2/3). And their algorithm is not programmable due to the pseudo counts is conditioned on the true counts which is uncountable.
3 PROBLEM SETTING
Consider a restless bandit problem with one agent and N arms. Each arm i ∈ [N ] := {1, 2, . . . , N} is associated with an independent discrete–time Markov chainMi = (Si, P i), where Si is the state space and P i ∈ RSi×Si the transition functions. Let sit denote the state of arm i at time t and st = (s 1 t , s 2 t , . . . , s N t ) the state of all arms. Each arm i is also associated with a reward functions Ri ∈ RSi×R, where Ri (r | s) is the probability that the agent receives a reward r ∈ R when he pulls arm i in state s. We assume the state spaces Si and the reward set R are finite and known to the agent. The parameters P i and Ri, i ∈ [N ] are unknown, and the state st is also unobserved to the agent. For the sake of notational simplicity, we assume that all arms have the same state spaces S with size S. Our result can be generalized in a straightforward way to allow different state spaces. The whole game is divided into T time steps. The initial state si1 for each arm i ∈ [N ] is drawn from a distribution hi independently, which we assume to be known to the agent. At each time t, the agent chooses one arm at ∈ [N ] to pull and receives a reward rt ∈ R with probability Rat(rt | satt ). Note
that only the pulled arm has the reward feedback. His decision on which arm at to pull is based on the observed history Ht = [a1, r1, a2, r2 · · · , at−1, rt−1]. Note that the states of the arms are never observable, even after pulling. Each arm i makes a state transition independently according to the associated P i, whether it is pulled or not. This process continues until the end of the game. The goal of the agent is to maximize the total expected reward.
We use θ to denote the unknown P i and Ri for i ∈ [N ] collectively. Since the true states are unobservable, the agent maintains a belief state bit = [b i t(s, θ), s ∈ Si] ∈ ∆Si for each arm i, where
bit(s, θ) := P ( sit = s | Ht, θ ) ,
and ∆Si := { b ∈ RSi+ : ∑ s∈Si b(s) = 1 } is the probability simplex in RSi . Note that bit(s, θ) depends on the unknown model parameter θ, which itself has to be learned by the agent. We aggregate all arms as a whole Markov chainM and denote its transition matrix and reward function as P and R, respectively. For a given θ, the overall belief state bt = (b1t , b 2 t , · · · , bNt ) is a sufficient statistic for Ht−1 (Smallwood & Sondik, 1973), so the agent can base his decision at time t on bt only. Let ∆b := ∆S1 × · · · ×∆SN . A deterministic stationary policy π : ∆b → [N ] maps a belief state to an action. The long-term average reward of a policy π is defined as
Jπ(h, θ) := lim sup T→∞
1 T E [ T∑ t=1 rt ∣∣∣ h, θ] . (1) We use J(h, θ) = supπ J
π(h, θ) to denote the optimal long-term average reward. We assume J(h, θ) is independent of the initial distribution h as in Jahromi et al. (2022) and denoted it by J(θ). We make the following assumption. Assumption 1. The smallest element ϵ1 in the transition functions P i, i ∈ N is bigger than zero. Assumption 2. The smallest element ϵ2 in the reward functions Ri, i ∈ N is bigger than zero.
Assumption 1 and Assumption 2 are strong in general, but they help us bound the error of belief estimation (De Castro et al., 2017). Assumption 1 also makes the MDP weakly communicating (Bertsekas et al., 2011). For a weakly communicating MDP, it is known that there exists a bounded function v(·, θ) : ∆_b → R such that for all b ∈ ∆_b (Bertsekas et al., 2011),

J(θ) + v(b, θ) = \max_a { r(b, a) + \sum_r P(r | b, a, θ) v(b′, θ) },    (2)

where v is the relative value function, r(b, a) = \sum_s \sum_r b^a(s, θ) R^a(r | s) r is the expected reward, b′ is the updated belief after obtaining reward r, and P(r | b, a, θ) is the probability of observing r in the next step, conditioned on the current belief b and action a. The corresponding optimal policy is the maximizer of the right-hand side of equation 2. Since the value function v(·, θ) is finite, we can bound the span function sp(θ) := \max_b v(b, θ) − \min_b v(b, θ) as in Zhou et al. (2021). We give the details of this bound in Proposition 1 and denote the bound by H.
We consider the Bayesian regret. The parameter θ∗ is randomly generated from a known prior distribution Q at the beginning and then fixed but unknown to the agent. We measure the efficiency of a policy π by its regret, defined as the expected gap between the cumulative reward of an offline oracle and that of π, where the oracle is the optimal policy with full knowledge of θ∗ but without observing the states. The offline oracle is similar to that of Zhou et al. (2021) and is stronger than those considered in Azizzadenesheli et al. (2016) and Fiez et al. (2018). We focus on the Bayesian regret of policy π (Ouyang et al., 2017; Jung & Tewari, 2019):

R_T := E_{θ∗∼Q} [ \sum_{t=1}^{T} (J(θ∗) − r_t) ].    (3)
The above expectation is with respect to the prior distribution about θ∗, the randomness in state transitions and the random reward.
4 THE TSEETC ALGORITHM
In section 4.1, we define the belief state and show how to update it with new observations. In section 4.2, we show how to update the posterior distributions under unknown states. In section 4.3, we present the details of our learning algorithm TSEETC.
4.1 BELIEF ENCODER FOR UNOBSERVED STATE
Here we focus on the belief update for arm i with the true parameters θ∗. At time t, the belief that arm i is in state s is b_t^i(s, θ∗). After pulling arm i, we obtain the observation r_t, and the belief b_{t+1}^i(s′, θ∗) is updated as follows:

b_{t+1}^i(s′, θ∗) = \frac{\sum_s b_t^i(s, θ∗) R_∗^i(r_t | s) P_∗^i(s′ | s)}{\sum_s b_t^i(s, θ∗) R_∗^i(r_t | s)},    (4)

where P_∗^i(s′ | s) is the probability of transitioning from state s at time t to state s′, and R_∗^i(r_t | s) is the probability of obtaining reward r_t in state s.
If arm i is not pulled, we update its belief as follows:

b_{t+1}^i(s′, θ∗) = \sum_s b_t^i(s, θ∗) P_∗^i(s′ | s).    (5)

At each time we then aggregate the beliefs of all arms into b_t. Based on equation 2, we can derive the optimal action a_t for the current belief b_t.
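As an illustration of these two updates, a minimal sketch (hypothetical Python; the array layouts are assumptions, not from the paper):

import numpy as np

def belief_update_pulled(b, P, R, r_idx):
    """Equation (4): the arm was pulled and emitted reward index r_idx.

    b: belief vector over S states; P: S x S transition matrix;
    R: S x len(reward set) emission matrix.
    """
    weighted = b * R[:, r_idx]        # b_t(s) * R(r_t | s)
    new_b = weighted @ P              # sum_s (...) * P(s' | s)
    return new_b / weighted.sum()     # normalize by P(r_t | b_t)

def belief_update_unpulled(b, P):
    """Equation (5): the arm was not pulled, only the Markov transition applies."""
    return b @ P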
4.2 MIXTURE OF DIRICHLET DISTRIBUTION
In this section, we estimate the unknown P^i and R^i based on the Dirichlet distribution. A Dirichlet distribution is parameterized by a count vector ϕ = (ϕ_1, . . . , ϕ_k) with ϕ_i ≥ 0, such that its probability density is f(p | ϕ) ∝ \prod_{i=1}^{k} p_i^{ϕ_i − 1} (Ghavamzadeh et al., 2015).
Since the true states are unobserved, all state sequences should be considered, each weighted in proportion to its likelihood (Ross et al., 2011). Denote the reward history collected from time t1 to t2 for arm i by r_{t1:t2}^i, the state history by s_{t1:t2}^i, and the belief-state history by b_{t1:t2}^i. With this history information, the posterior distributions g_t(P^i) and g_t(R^i) at time t can be updated as in Lemma 1.

Lemma 1. Under the unobserved-state setting, assume the transition function P^i has prior g_0(P^i) = f(\frac{P^i − ϵ1 \mathbf{1}}{1 − ϵ1} | ϕ^i) and the reward function R^i has prior g_0(R^i) = f(\frac{R^i − ϵ2 \mathbf{1}}{1 − ϵ2} | ψ^i). Given the information r_{0:t}^i and b_{0:t}^i, the posterior distributions are

g_t(P^i) ∝ \sum_{\bar{s}_t^i ∈ S_t^i} g_0(P^i) w(s_{0:t}^i) \prod_{s,s′} \left( \frac{P^i(s′ | s) − ϵ1}{1 − ϵ1} \right)^{N_{s,s′}^i(\bar{s}_t^i) + ϕ_{s,s′}^i − 1},    (6)

g_t(R^i) ∝ \sum_{\bar{s}_t^i ∈ S_t^i} g_0(R^i) w(s_{0:t}^i) \prod_{s,r} \left( \frac{R^i(r | s) − ϵ2}{1 − ϵ2} \right)^{N_{s,r}^i(\bar{s}_t^i) + ψ_{s,r}^i − 1},    (7)

where w(s_{0:t}^i) is the likelihood of the state sequence s_{0:t}^i and \mathbf{1} is the all-ones vector, whose length matches the dimension of P or R as appropriate. This procedure is summarized in Algorithm 1.
Algorithm 1 Posterior Update for R^i(s, ·) and P^i(s, ·)
1: Input: the history length τ1, the state space S^i, the belief history b_{0:τ1}^i, the reward history r_{0:τ1}^i, the initial parameters ϕ_{s,s′}^i, ψ_{s,r}^i for s, s′ ∈ S^i, r ∈ R
2: generate the S^{τ1} possible state sequences
3: calculate the weight w(j) = \prod_{t=1}^{τ1} b_t^i(s_t^{(j)}, θ) for each sequence j
4: for j = 1, . . . , S^{τ1} do
5:     count the occurrences of the events (s, s′) and (s, r) in sequence j as N_{s,s′}^i, N_{s,r}^i
6:     update ϕ_{s,s′}^i ← ϕ_{s,s′}^i + N_{s,s′}^i and ψ_{s,r}^i ← ψ_{s,r}^i + N_{s,r}^i
7:     aggregate the ϕ_{s,s′}^i as ϕ(j) and the ψ_{s,r}^i as ψ(j) for all s, s′ ∈ S^i, r ∈ R
8: end for
9: update the mixture Dirichlet distributions g_{τ1}(P^i) ∝ \sum_{j=1}^{S^{τ1}} w(j) f(\frac{P^i − ϵ1 \mathbf{1}}{1 − ϵ1} | ϕ(j)) and g_{τ1}(R^i) ∝ \sum_{j=1}^{S^{τ1}} w(j) f(\frac{R^i − ϵ2 \mathbf{1}}{1 − ϵ2} | ψ(j))
With Algorithm 1, we can update the posterior distribution of the unknown parameters and sample from it, treating the sample as the true parameters. The belief estimation error can be bounded by the distance between the sampled parameters and the true values (Proposition 2). A theoretical guarantee on the estimation error of the unknown parameters is provided in Lemma 2.
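To make Algorithm 1 concrete, the following is a brute-force sketch (hypothetical Python) for a single arm: it enumerates hidden-state sequences, weights each by its likelihood under the belief trajectory, and accumulates Dirichlet counts per sequence. It is exponential in the history length and only mirrors the pseudocode; the pairing of rewards with states is an assumed convention.

import itertools
import numpy as np

def posterior_update(beliefs, reward_idx, phi0, psi0):
    """Sketch of Algorithm 1 for one arm.

    beliefs: list of belief vectors b_1..b_tau over S states (one per step)
    reward_idx: list of observed reward indices r_1..r_tau for this arm
    phi0: S x S prior counts for P^i; psi0: S x len(reward set) prior counts for R^i
    Returns mixture components (weight, phi_j, psi_j), one per state sequence.
    """
    S = phi0.shape[0]
    tau = len(reward_idx)
    components = []
    for seq in itertools.product(range(S), repeat=tau):
        # Weight of this hidden-state sequence under the belief trajectory.
        w = float(np.prod([beliefs[t][seq[t]] for t in range(tau)]))
        if w == 0.0:
            continue
        phi, psi = phi0.copy(), psi0.copy()
        for t in range(tau - 1):
            phi[seq[t], seq[t + 1]] += 1        # transition count N^i_{s,s'}
        for t in range(tau):
            psi[seq[t], reward_idx[t]] += 1     # emission count N^i_{s,r}
        components.append((w, phi, psi))
    total = sum(w for w, _, _ in components)    # normalize the mixture weights
    return [(w / total, phi, psi) for w, phi, psi in components]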
4.3 OUR ALGORITHM
Our algorithm, TSEETC, operates in episodes of different lengths. Each episode is split into an exploration phase and an exploitation phase. Denote the total number of episodes by K_T and the first time step of episode k by t_k. We use T_k to denote the length of episode k, determined as T_k = T_1 + k − 1, where T_1 = ⌈(√T + 1)/2⌉. The length of the exploration phase in each episode is fixed to τ1, which satisfies τ1 K_T = O(√T) and τ1 ≤ (T_1 + K_T − 1)/2. With these notations, our whole algorithm is shown below.
Algorithm 2 Thompson Sampling with Episodic Explore-Then-Commit
1: Input: priors g_0(P), g_0(R), initial belief b_0, exploration length τ1, first episode length T_1
2: for episode k = 1, 2, . . . do
3:     start the first time step of episode k: t_k := t
4:     generate R(t_k) ∼ g_{t_{k−1}+τ1}(R) and P(t_k) ∼ g_{t_{k−1}+τ1}(P)
5:     for t = t_k, t_k + 1, . . . , t_k + τ1 do
6:         pull arm i for τ1/N times in a round-robin way
7:         receive the reward r_t
8:         update the belief b_t^i using R(t_k), P(t_k) based on equation 4
9:         update the beliefs b_t^j, j ∈ [N] \ {i}, using P(t_k) based on equation 5
10:     end for
11:     for i = 1, 2, . . . , N do
12:         input the obtained r_{t_1:t_1+τ1}, . . . , r_{t_k:t_k+τ1} and b_{t_1:t_1+τ1}, . . . , b_{t_k:t_k+τ1} to Algorithm 1 to update the posterior distributions g_{t_k+τ1}(P), g_{t_k+τ1}(R)
13:     end for
14:     generate R(t_k + τ1) ∼ g_{t_k+τ1}(R) and P(t_k + τ1) ∼ g_{t_k+τ1}(P)
15:     for i = 1, . . . , N do
16:         re-update the belief b_t^i from time 0 to t_k + τ1 based on R(t_k + τ1) and P(t_k + τ1)
17:     end for
18:     compute π_k^∗(·) = Oracle(·, R(t_k + τ1), P(t_k + τ1))
19:     for t = t_k + τ1 + 1, . . . , t_{k+1} − 1 do
20:         apply action a_t = π_k^∗(b_t)
21:         observe the new reward r_{t+1}
22:         update the beliefs b_t of all arms based on equations 4 and 5
23:     end for
24: end for
In the exploration phase of episode k, we first sample θ_{t_k} from the distributions g_{t_{k−1}+τ1}(P) and g_{t_{k−1}+τ1}(R). We pull each arm τ1/N times in a round-robin way. For the pulled arm, we update its belief based on equation 4 using θ_{t_k}; for the arms that are not pulled, we update their beliefs based on equation 5 using θ_{t_k}. After the exploration phase, the reward and belief history of each arm are fed into Algorithm 1 to update the posterior distributions. Then we sample a new θ_{t_k+τ1} from the posterior and re-calibrate the belief b_t based on this most recent estimate. Next we enter the exploitation phase: we first derive the optimal policy π_k for the sampled parameter θ_{t_k+τ1}, and then follow π_k for the rest of episode k.
We let the episode length grow in a deterministic manner: the length of episode k is exactly one more than that of episode k − 1. With this deterministic schedule, the number of episodes K_T is bounded by O(√T), as shown in Lemma 10. Consequently, the regret caused by the exploration phases can be bounded by O(√T), which is a crucial ingredient of Theorem 1.
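As a quick sanity check of this schedule (a sketch of the counting argument behind Lemma 10, not a replacement for it): the episode lengths are T_1, T_1 + 1, T_1 + 2, . . . and must cover the horizon, so

\sum_{k=1}^{K_T - 1} T_k \le T
\;\Longrightarrow\; (K_T - 1)\, T_1 + \frac{(K_T - 1)(K_T - 2)}{2} \le T
\;\Longrightarrow\; K_T = O(\sqrt{T}),

and hence the exploration regret is at most τ1 ∆R K_T = O(√T).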
In TSEETC, we use the belief state to estimate the unobserved true states. Moreover, under the unobserved-state setting, we consider all possible state sequences and update the posterior distribution of the unknown parameters as a mixture of the resulting component distributions, where each component is weighted by the likelihood of its state sequence.

Remark 1. We use an Oracle to derive the optimal policy for the sampled parameters in Algorithm 2. The Oracle can be the Bellman equation for the POMDP introduced in equation 2, or an approximation method (Pineau et al., 2003; Silver & Veness, 2010), etc. The approximation error is discussed in Remark 3.
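For completeness, one possible approximate Oracle is sketched below (hypothetical Python): a depth-limited lookahead over beliefs for the sampled model. The paper leaves the Oracle implementation open, so this is only an illustration of the interface used in line 18 of Algorithm 2; rewards are assumed to be emitted from the current state, as in equation 4.

import numpy as np

def make_lookahead_oracle(P, R, rewards, depth=3):
    """Depth-limited lookahead policy for the sampled model (an approximate Oracle).

    P[i]: S x S transition matrix of arm i; R[i]: S x len(rewards) emission matrix.
    Returns a function mapping a list of per-arm beliefs to an arm index.
    """
    N = len(P)

    def step_beliefs(bs, arm, r_idx):
        new = []
        for i, b in enumerate(bs):
            b = np.asarray(b)
            if i == arm:
                w = b * R[i][:, r_idx]
                new.append(tuple((w @ P[i]) / w.sum()))   # equation 4
            else:
                new.append(tuple(b @ P[i]))               # equation 5
        return tuple(new)

    def value(bs, d):
        if d == 0:
            return 0.0, 0
        best_v, best_a = -np.inf, 0
        for a in range(N):
            p_r = np.asarray(bs[a]) @ R[a]     # P(r | b, a) for each reward value
            v = 0.0
            for r_idx, pr in enumerate(p_r):
                if pr > 0.0:
                    nxt = step_beliefs(bs, a, r_idx)
                    v += pr * (rewards[r_idx] + value(nxt, d - 1)[0])
            if v > best_v:
                best_v, best_a = v, a
        return best_v, best_a

    def policy(beliefs):
        bs = tuple(tuple(np.asarray(b)) for b in beliefs)
        return value(bs, depth)[1]

    return policy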
5 PERFORMANCE ANALYSIS
In section 5.1, we present our theoretical results and discuss them. In section 5.2, we provide a proof sketch; the detailed proof is in Appendix B.
5.1 REGRET BOUND AND DISCUSSIONS
Theorem 1. Suppose Assumptions 1 and 2 hold and the Oracle returns the optimal policy in each episode. The Bayesian regret of our algorithm satisfies

R_T ≤ 48 C_1 C_2 S \sqrt{N T \log(NT)} + (τ1 ∆R + H + 4 C_1 C_2 S N) \sqrt{T} + C_1 C_2,

where C_1 = L_1 + L_2 N + N^2 + S^2 and C_2 = r_max + H are constants independent of the time horizon T, L_1 = 4(1−ϵ1)^2 / (N ϵ1^2 ϵ2), L_2 = 4(1−ϵ1)^2 / ϵ1^3, and ϵ1 and ϵ2 are the minimum elements of P^∗ and R^∗, respectively. Here τ1 is the fixed exploration length in each episode, ∆R is the largest gap between rewards obtained at any two times, H is the bound on the span, r_max is the maximum per-step reward, N is the number of arms, and S is the state-space size of each arm.

Remark 2. Theorem 1 shows that the regret of TSEETC is upper bounded by Õ(√T). This is the first bound that matches the lower bound for restless bandit problems (Ortner et al., 2012) in such an unobserved-state setting. Although TSEETC looks similar to explore-then-commit (Lattimore & Szepesvári, 2020), a key novelty of TSEETC lies in using posterior sampling to update the posterior distribution of the unknown parameters as a mixture of component distributions. Our algorithm balances exploration and exploitation with a deterministic episode schedule whose lengths grow linearly, which guarantees that the total number of episodes is bounded by O(√T). Therefore the total regret caused by exploration is well controlled at O(√T), which improves on the O(T^{2/3}) bound of Zhou et al. (2021). Moreover, in the exploitation phase, our regret bound Õ(√T) is also better than the Õ(T^{2/3}) of Zhou et al. (2021). This suggests that our posterior-sampling-based method is superior to UCB-based solutions (Osband & Van Roy, 2017). In Jahromi et al. (2022), the pseudo count of each state-action pair is only smaller than the true count with some probability at any time. In our algorithm, by contrast, the sampled parameters concentrate on the true values as the posterior is updated, so our pseudo count (defined in equation 13), based on the belief, approximates the true count more closely, which helps us obtain a tighter bound.
5.2 PROOF SKETCH
In our algorithm, the total regret can be decomposed as follows:
R_T = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_k+τ1} (J(θ∗) − r_t) ]  (Regret (A))
    + E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} (J(θ∗) − r_t) ]  (Regret (B)).    (8)
Bounding Regret (A). Regret (A) is the regret incurred in the exploration phase of each episode. This term can be bounded simply as

Regret (A) ≤ E_{θ∗} [ \sum_{k=1}^{K_T} τ1 ∆R ] ≤ τ1 ∆R K_T,    (9)

where ∆R = r_max − r_min is the largest gap between rewards received at any two times. The bound in equation 9 depends on the number of episodes K_T, which is bounded by O(√T) in Lemma 10.
Bounding Regret (B). Next we bound Regret (B), the regret in the exploitation phases. Let b̂_t denote the belief updated with the sampled parameter θ_k and b_t^∗ the belief updated with θ^∗. During episode k, applying equation 2 to the sampled parameter θ_k and using a_t = π^∗(b̂_t), we can write

J(θ_k) + v(b̂_t, θ_k) = r(b̂_t, a_t) + \sum_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k).    (10)
With this equation, we proceed by decomposing the regret as:
Regret (B) = R_1 + R_2 + R_3 + R_4,    (11)

where each term is defined as follows:

R_1 = E_{θ∗} \sum_{k=1}^{K_T} [ (T_k − τ1 − 1) (J(θ∗) − J(θ_k)) ],
R_2 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( v(b̂_{t+1}, θ_k) − v(b̂_t, θ_k) ) ],
R_3 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( \sum_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ],
R_4 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( r(b̂_t, a_t) − r(b_t^∗, a_t) ) ].
Bounding R_1. A key property of posterior sampling algorithms is that, given the history H_{t_k}, the true parameter θ^∗ and the sampled θ_k are identically distributed at time t_k, as stated in Lemma 13. Since the length T_k is deterministic and independent of θ_k, R_1 is zero thanks to this property.

Bounding R_2. The term R_2 is a telescoping sum of value functions and can be bounded as R_2 ≤ H K_T. It depends only on the number of episodes and the upper bound H on the span function. As a result, R_2 reduces to a bound on the number of episodes K_T, which is controlled in Lemma 10.
Bounding R_3 and R_4. The terms R_3 and R_4 are related to the estimation error of θ, so we must bound the parameter error, which is delicate in our unobserved-state setting. Recalling the definitions of ϕ and ψ, we define the posterior means of P̂^i(s′ | s) and R̂^i(r | s) for arm i at time t as

P̂^i(s′ | s)(t) = \frac{ϵ_1 + (1 − ϵ_1) ϕ_{s,s′}^i(t)}{S ϵ_1 + (1 − ϵ_1) ‖ϕ_{s,·}^i(t)‖_1},    R̂^i(r | s)(t) = \frac{ϵ_2 + (1 − ϵ_2) ψ_{s,r}^i(t)}{S ϵ_2 + (1 − ϵ_2) ‖ψ_{s,·}^i(t)‖_1}.    (12)

We also define the pseudo count of the state-action pair (s, a) before episode k as

N_{t_k}^i(s, a) = ‖ψ_{s,·}^i(t_k)‖_1 − ‖ψ_{s,·}^i(0)‖_1,    (13)
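A small sketch of these two quantities (hypothetical Python; eps1 and eps2 are the lower bounds from Assumptions 1 and 2):

import numpy as np

def posterior_mean_P(phi, eps1):
    """Posterior mean of P^i(s' | s) as in equation (12); phi is the S x S count matrix."""
    S = phi.shape[0]
    return (eps1 + (1 - eps1) * phi) / (S * eps1 + (1 - eps1) * phi.sum(axis=1, keepdims=True))

def posterior_mean_R(psi, eps2):
    """Posterior mean of R^i(r | s) as in equation (12); psi is the S x len(reward set) count matrix."""
    S = psi.shape[0]
    return (eps2 + (1 - eps2) * psi) / (S * eps2 + (1 - eps2) * psi.sum(axis=1, keepdims=True))

def pseudo_count(psi_now, psi_init):
    """Pseudo count N^i_{t_k}(s, a) of equation (13): growth of total emission counts per state."""
    return psi_now.sum(axis=1) - psi_init.sum(axis=1)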
where ψ_{s,·}^i(t_k) represents the count of the state-action pair z = (s, a) before episode k. Let M_k^i be the set of plausible MDPs in episode k with reward function R(r | z) and transition function P(s′ | z) satisfying

\sum_{s′∈S} | P(s′ | z) − P̂_k^i(s′ | z) | ≤ β_k^i(z),    \sum_{r∈R} | R(r | z) − R̂_k^i(r | z) | ≤ β_k^i(z),    (14)

where β_k^i(s, a) := \sqrt{ \frac{14 S \log(2 N t_k T)}{\max\{1, N_{t_k}^i(s, a)\}} } is chosen conservatively (Auer et al., 2008) so that M_k^i contains P_∗^i and P_k^i, as well as R_∗^i and R_k^i, with high probability. Here P_∗^i and R_∗^i are the true parameters defined in section 4.1. In particular, in the unobserved-state setting, the belief error under different parameters is upper bounded by the gap between the estimators, as in Proposition 2. The core of the proof lies in deriving a high-probability confidence set from our pseudo counts and showing that the estimation error accumulated over T steps for each arm is bounded by √T. With this per-arm error bound, we then derive the error bound for the MDP aggregated over all arms, as stated in Lemma 2.
Lemma 2. (Estimation errors) The total estimation error of the transition functions accumulated over all exploitation phases satisfies

E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖P^∗ − P_k‖_1 ] ≤ 48 S N \sqrt{N T \log(NT)} + 4 S N^2 \sqrt{T} + N.    (15)
Lemma 2 shows that the accumulated error is bounded by O(√T), which is crucial to obtain the final bound, as in the observed-state setting (Ortner et al., 2012; Jung & Tewari, 2019). With C_1 = L_1 + L_2 N + N^2 + S^2, we state the final bounds on R_3 and R_4 below; the detailed proofs are in Appendix B.3 and B.4.

Lemma 3. R_3 satisfies the following bound:
R_3 ≤ 48 C_1 S H \sqrt{N T \log(NT)} + 4 C_1 S N H \sqrt{T} + C_1 H.

Lemma 4. R_4 satisfies the following bound:
R_4 ≤ 48 C_1 S r_max \sqrt{N T \log(NT)} + 4 C_1 S N r_max \sqrt{T} + C_1 r_max.
6 NUMERICAL EXPERIMENTS
In this section, we present proof-of-concept experiments with an approximate implementation of TSEETC. We consider two arms, each with two hidden states, and pull one arm at each time. The learning horizon is T = 50000, and each algorithm is run for 100 iterations. The transition and reward functions are the same for all arms. We initialize the algorithm with an uninformative Dirichlet prior on the unknown parameters. We compare our algorithm with the simple heuristic ϵ-greedy (Lattimore & Szepesvári, 2020) (ϵ = 0.01), Sliding-Window UCB (Garivier & Moulines, 2011) with a specified window size, RUCB (Liu et al., 2010), Q-learning (Hu et al., 2020), and SEEU (Zhou et al., 2021). The results are shown in Figure 1. We find that TSEETC achieves the smallest regret among these algorithms.

In Figure 2, we plot the cumulative regret versus T of the six algorithms on a log-log scale. We observe that the slopes of all algorithms except our TSEETC and SEEU are close to one, suggesting that they incur linear regret. Moreover, the slope of TSEETC is close to 0.5, which is better than SEEU. This is consistent with our theoretical result.
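To convey the flavor of this setup, a minimal sketch of the simulation environment and the ϵ-greedy baseline is given below (hypothetical Python; the transition and reward values are placeholders, since the exact numbers used in the paper are not listed here):

import numpy as np

rng = np.random.default_rng(0)
T, N, S = 50_000, 2, 2
rewards = np.array([0.0, 1.0])                 # placeholder reward set

# Placeholder parameters: identical for both arms, strictly positive entries
# (consistent with Assumptions 1 and 2).
P = [np.array([[0.7, 0.3], [0.2, 0.8]]) for _ in range(N)]
R = [np.array([[0.9, 0.1], [0.2, 0.8]]) for _ in range(N)]   # R[i][s, r_idx]

def run_epsilon_greedy(eps=0.01):
    """Epsilon-greedy baseline on the empirical mean reward of each arm."""
    states = [rng.integers(S) for _ in range(N)]
    means, counts, total = np.zeros(N), np.zeros(N), 0.0
    for _ in range(T):
        arm = rng.integers(N) if rng.random() < eps else int(np.argmax(means))
        r = rng.choice(rewards, p=R[arm][states[arm]])
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        total += r
        states = [rng.choice(S, p=P[i][s]) for i, s in enumerate(states)]
    return total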
7 CONCLUSION
In this paper, we consider the restless bandit problem with unobserved states and unknown dynamics. We propose the TSEETC algorithm to estimate the unknown parameters and derive the optimal policy. We also establish a Bayesian regret of Õ(√T) for our algorithm, which is the first bound matching the lower bound for restless bandit problems with unobserved states. Numerical results validate that TSEETC outperforms other learning algorithms in regret. A related open question is whether our method can be applied to the setting where the transition functions are action dependent. We leave this for future research.
A TABLE OF NOTATIONS
Notation : Description
T : the length of the horizon
K_T : the number of episodes up to time T
T_k : the length of episode k
τ1 : the fixed exploration length in each episode
P^i : the transition function of arm i
R^i : the reward function of arm i
P_k : the sampled transition function of the aggregated MDP
R_k : the sampled reward function of the aggregated MDP
r_t : the reward obtained at time t
b_t^i(s, θ) : the belief of arm i being in state s at time t under parameter θ
b̂_t : the belief of all arms at time t under parameter θ_k
b_t^∗ : the belief of all arms at time t under parameter θ^∗
a_t : the action at time t
J(θ_k) : the optimal long-term average reward under parameter θ_k
r_max : the maximum per-step reward
r_min : the minimum per-step reward
∆R : the largest gap between obtained rewards
B PROOF OF THEOREM 1
Recall that our goal is to minimize the regret

R_T := E_{θ∗} [ \sum_{t=1}^{T} (J(θ∗) − r_t) ].    (16)

The reward r_t depends on the state s_t and the action a_t, so it can be written as r(s_t, a_t). Since E_{θ∗}[ r(s_t, a_t) | H_{t−1} ] = r(b_t^∗, a_t) for any t, we have

R_T = E_{θ∗} [ \sum_{t=1}^{T} (J(θ∗) − r(b_t^∗, a_t)) ].    (17)
In our algorithm, each episode is split into an exploration phase and an exploitation phase, so we can rewrite the regret as

R_T = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_k+τ1} (J(θ∗) − r(b_t^∗, a_t)) + \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} (J(θ∗) − r(b_t^∗, a_t)) ],    (18)

where τ1 is the constant exploration length of each episode and t_k is the start time of episode k. We call the first part Regret (A), which is caused by the exploration operations, and the second part Regret (B):

Regret (A) = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_k+τ1} (J(θ∗) − r(b_t^∗, a_t)) ],
Regret (B) = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} (J(θ∗) − r(b_t^∗, a_t)) ].
Recall that the reward set is R and define the maximum reward gap in R as ∆R = r_max − r_min. Then J(θ∗) − r(b_t^∗, a_t) ≤ ∆R, and Regret (A) can be upper bounded simply as

Regret (A) ≤ E_{θ∗} [ \sum_{k=1}^{K_T} τ1 ∆R ] ≤ τ1 ∆R K_T.

Regret (A) clearly depends on the number of episodes K_T, which is bounded in Lemma 10. Next we bound Regret (B).
During episode k, based on equation 2 we get

J(θ_k) + v(b̂_t, θ_k) = r(b̂_t, a_t) + \sum_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k),    (19)

where J(θ_k) is the optimal long-term average reward when the system parameter is θ_k, b̂_t is the belief at time t updated with parameter θ_k, r(b̂_t, a_t) is the expected reward when action a_t is taken under the current belief b̂_t, and b′ is the belief updated via equation 4 with parameter θ_k when reward r is received.
Using this equation, we proceed by decomposing the regret as:
Regret (B) = R_1 + R_2 + R_3 + R_4,    (20)

where

R_1 = E_{θ∗} \sum_{k=1}^{K_T} [ (T_k − τ1 − 1) (J(θ∗) − J(θ_k)) ],
R_2 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( v(b̂_{t+1}, θ_k) − v(b̂_t, θ_k) ) ],
R_3 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( \sum_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ],
R_4 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( r(b̂_t, a_t) − r(b_t^∗, a_t) ) ].
Next we bound the four parts one by one.
B.1 BOUND R1
Lemma 5. R_1 satisfies R_1 = 0.

Proof. Recall that

R_1 = E_{θ∗} \sum_{k=1}^{K_T} [ (T_k − τ1 − 1) (J(θ∗) − J(θ_k)) ].

For each episode, T_k is deterministic and independent of θ_k. Based on Lemma 13, we know that

E_{θ∗} [J(θ∗)] = E_{θ∗} [J(θ_k)].

Therefore R_1 = 0.
B.2 BOUND R2
Lemma 6. R2 satisfies the following bound
R2 ≤ HKT , where KT is the total number of episodes until time T .
Proof. Recall that R_2 is the telescoping sum of value functions at times t + 1 and t:

R_2 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( v(b̂_{t+1}, θ_k) − v(b̂_t, θ_k) ) ].    (21)

Summing over each episode k, R_2 can be rewritten as

R_2 = E_{θ∗} \sum_{k=1}^{K_T} [ v(b̂_{t_{k+1}}, θ_k) − v(b̂_{t_k+τ1+1}, θ_k) ].

Since the span of v(·, θ) is bounded by H as in Proposition 1, we obtain the final bound

R_2 ≤ H K_T.
B.3 BOUND R3
In this section, we first rewrite R_3 (section B.3.1) and then show how to bound it (section B.3.2).
B.3.1 REWRITE R3
Lemma 7. (Rewriting R_3) The regret R_3 can be bounded as

R_3 ≤ H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖P^∗ − P_k‖_1 ] + H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖b_t^∗ − b̂_t‖_1 ] + S^2 H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖R^∗ − R_k‖_1 ],

where P_k and R_k are the sampled transition and reward functions in episode k, b_t^∗ is the belief at time t updated with the true P^∗ and R^∗, and b̂_t is the belief at time t updated with the sampled P_k and R_k.
Proof. Most of the argument is similar to Jahromi et al. (2022), except that we must handle the unknown reward functions.

Recall that R_3 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( \sum_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ], and that H_t is the history of actions and observations prior to action a_t. Conditioned on H_t, θ∗ and θ_k, the only random variable in b̂_{t+1} is r_{t+1}, so we get

E_{θ∗} [ v(b̂_{t+1}, θ_k) | H_t, θ_k ] = \sum_{r∈R} v(b′, θ_k) P(r | b_t^∗, a_t, θ∗),    (22)

where P(r | b_t^∗, a_t, θ∗) is the probability of receiving reward r given b_t^∗, a_t, θ∗. By the law of total probability,

P(r | b_t^∗, a_t, θ∗) = \sum_{s′} R^∗(r | s′) P(s_{t+1} = s′ | H_t, θ∗)
 = \sum_{s′} R^∗(r | s′) \sum_s P^∗(s_{t+1} = s′ | s_t = s, H_t, a_t, θ∗) P(s_t = s | H_t, θ∗)
 = \sum_s \sum_{s′} b_t^∗(s) P^∗(s′ | s) R^∗(r | s′),    (23)

where P^∗ is the transition function of the MDP aggregated over all arms and R^∗ is the reward function of the aggregated MDP. Therefore, we can rewrite R_3 as follows.
R_3 = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_{r∈R} ( P(r | b̂_t, a_t, θ_k) − P(r | b_t^∗, a_t, θ∗) ) v(b′, θ_k) ].

Based on equation 23, adding and subtracting the cross term that uses the sampled reward function R_k together with the true pair (b_t^∗, P^∗), we get

R_3 = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) R_k(r | s′) ( \sum_s b̂_t(s) P_k(s′ | s) − \sum_s b_t^∗(s) P^∗(s′ | s) ) ]
 + E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) ( R_k(r | s′) − R^∗(r | s′) ) \sum_s b_t^∗(s) P^∗(s′ | s) ],    (24)

where R_k is the sampled reward function and P_k the sampled transition function of the aggregated MDP. Define

R_3′ = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) R_k(r | s′) ( \sum_s b̂_t(s) P_k(s′ | s) − \sum_s b_t^∗(s) P^∗(s′ | s) ) ],
R_3″ = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) ( R_k(r | s′) − R^∗(r | s′) ) \sum_s b_t^∗(s) P^∗(s′ | s) ].
Bounding R_3′. The term R_3′ can be bounded as in Jahromi et al. (2022). Write R_3′ = R_3′(0) + R_3′(1), where

R_3′(0) = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) R_k(r | s′) \sum_s ( b̂_t(s) − b_t^∗(s) ) P_k(s′ | s) ],
R_3′(1) = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) R_k(r | s′) \sum_s b_t^∗(s) ( P_k(s′ | s) − P^∗(s′ | s) ) ].

For R_3′(0), because \sum_r R_k(r | s′) = 1, \sum_{s′} P_k(s′ | s) = 1 and v(b′, θ_k) ≤ H, we have

R_3′(0) ≤ E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) R_k(r | s′) \sum_s | b̂_t(s) − b_t^∗(s) | P_k(s′ | s) ]
 ≤ H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_s | b̂_t(s) − b_t^∗(s) | ]
 = H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ b̂_t − b_t^∗ ‖_1 ],
where the first inequality is due to b̂_t(s) − b_t^∗(s) ≤ | b̂_t(s) − b_t^∗(s) | and the second is because \sum_r R_k(r | s′) = 1, \sum_{s′} P_k(s′ | s) = 1 and v(b′, θ_k) ≤ H. For the first term in R_3′(1), note that conditioned on H_t and θ∗, the distribution of s_t is b_t^∗. Furthermore, a_t is measurable with respect to the sigma algebra generated by H_t and θ_k, since a_t = π^∗(b̂_t, θ_k). Thus, we have

E_{θ∗} [ v(b′, θ_k) \sum_s P^∗(s′ | s) b_t^∗(s) | H_t, θ_k ] = v(b′, θ_k) E_{θ∗} [ P^∗(s′ | s) | H_t, θ_k ],    (25)

E_{θ∗} [ v(b′, θ_k) \sum_s P_k(s′ | s) b_t^∗(s) | H_t, θ_k ] = v(b′, θ_k) E_{θ∗} [ P_k(s′ | s) | H_t, θ_k ].    (26)
Substituting equation 25 and equation 26 into R_3′(1), we have

R_3′(1) = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) R_k(r | s′) ( P_k(s′ | s) − P^∗(s′ | s) ) ]
 ≤ E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) R_k(r | s′) | P_k(s′ | s) − P^∗(s′ | s) | ]
 ≤ H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_{s′} | P_k(s′ | s) − P^∗(s′ | s) | ]
 ≤ H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ P_k − P^∗ ‖_1 ],

where the first inequality is because P_k(s′ | s) − P^∗(s′ | s) ≤ | P_k(s′ | s) − P^∗(s′ | s) |, and the second is due to v(b′, θ_k) ≤ H and \sum_r R_k(r | s′) = 1. Therefore we obtain

R_3′ ≤ H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ P^∗ − P_k ‖_1 ] + H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ b_t^∗ − b̂_t ‖_1 ].
Bounding R_3″. For R_3″, note that for any fixed s′, \sum_s b_t^∗(s) P^∗(s′ | s) ≤ S. Therefore,

R_3″ = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r \sum_{s′} v(b′, θ_k) ( R_k(r | s′) − R^∗(r | s′) ) \sum_s b_t^∗(s) P^∗(s′ | s) ]
 ≤ S H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_{s′} \sum_r ( R_k(r | s′) − R^∗(r | s′) ) ]
 ≤ S H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} S ‖ R_k − R^∗ ‖_1 ]
 ≤ S^2 H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ R_k − R^∗ ‖_1 ],    (27)

where the first inequality is due to v(b′, θ_k) ≤ H and \sum_s b_t^∗(s) P^∗(s′ | s) ≤ S, and the second is because for any fixed s′, \sum_r ( R_k(r | s′) − R^∗(r | s′) ) ≤ ‖ R_k − R^∗ ‖_1.
B.3.2 BOUND R3
Lemma 8. R_3 satisfies the following bound:
R_3 ≤ 48 (L_1 + L_2 N + N + S^2) S H \sqrt{N T \log(NT)} + (L_1 + L_2 N + N + S^2) H + 4 (L_1 + L_2 N + N^2 + S^2) S N H (T_1 + K_T − τ1 − 1).

Proof. Recall that

R_3 = E_{θ∗} \sum_{k=1}^{K_T} [ \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( \sum_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ].
This regret term reflects the model estimation error: it depends on the on-policy gap between the sampled and true transition functions and between the sampled and true reward functions. Thus we must bound the parameter error, which is delicate in our unobserved-state setting. Based on the parameters of our Dirichlet distributions, we define the empirical estimates of the reward and transition functions of arm i as

P̂^i(s′ | s)(t) = \frac{ϵ_1 + (1 − ϵ_1) ϕ_{s,s′}^i(t)}{S ϵ_1 + (1 − ϵ_1) ‖ϕ_{s,·}^i(t)‖_1},    R̂^i(r | s)(t) = \frac{ϵ_2 + (1 − ϵ_2) ψ_{s,r}^i(t)}{S ϵ_2 + (1 − ϵ_2) ‖ψ_{s,·}^i(t)‖_1},    (28)

where ϕ_{s,s′}^i(t) and ψ_{s,r}^i(t) are the parameters of the posterior distributions of P^i and R^i at time t. We also define the pseudo count N_{t_k}^i(s, a) of the state-action pair (s, a) before episode k for arm i as

N_{t_k}^i(s, a) = ‖ψ_{s,·}^i(t_k)‖_1 − ‖ψ_{s,·}^i(0)‖_1.

For notational simplicity, we use z = (s, a) ∈ S × A and z_t = (s_t, a_t) to denote the corresponding state-action pair. Based on Lemma 7, we can decompose R_3 as

R_3 = E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_r ( P(r | b̂_t, a_t, θ_k) − P(r | b_t^∗, a_t, θ∗) ) v(b′, θ_k) ] ≤ R_3^0 + R_3^1 + R_3^2,

where

R_3^0 = H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ P^∗ − P_k ‖_1 ],
R_3^1 = H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ b_t^∗ − b̂_t ‖_1 ],
R_3^2 = S^2 H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ R^∗ − R_k ‖_1 ].
Note that the following results focus on a single arm. Let P_∗^i denote the true transition function of arm i and P_k^i the sampled transition function of arm i. We can extend the per-arm results to the aggregated MDP via Lemma 11.
Bounding R_3^0. Since 0 ≤ v(b′, θ_k) ≤ H by assumption, each term in the inner summation is bounded by

\sum_{s′∈S} | P_∗^i(s′ | z_t) − P_k^i(s′ | z_t) | v(s′, θ_k)
 ≤ H \sum_{s′∈S} | P_∗^i(s′ | z_t) − P_k^i(s′ | z_t) |
 ≤ H \sum_{s′∈S} | P_∗^i(s′ | z_t) − P̂_k^i(s′ | z_t) | + H \sum_{s′∈S} | P_k^i(s′ | z_t) − P̂_k^i(s′ | z_t) |,

where P_∗^i(s′ | z_t) is the true transition function, P_k^i(s′ | z_t) is the sampled transition function and P̂_k^i(s′ | z_t) is the posterior mean; the second inequality is due to the triangle inequality. Let M_k^i be the set of plausible MDPs in episode k with reward function R(r | z) and transition function P(s′ | z) satisfying

\sum_{s′∈S} | P(s′ | z) − P̂_k^i(s′ | z) | ≤ β_k^i(z),    \sum_{r∈R} | R(r | z) − R̂_k^i(r | z) | ≤ β_k^i(z),

where β_k^i(s, a) := \sqrt{ \frac{14 S \log(2 N t_k T)}{\max\{1, N_{t_k}^i(s, a)\}} } is chosen conservatively (Auer et al., 2008) so that M_k^i contains P_∗^i and P_k^i, as well as R_∗^i and R_k^i, with high probability; P_∗^i and R_∗^i are the true parameters defined in section 4.1. Note that β_k^i(z) is the confidence width with δ = 1/t_k. Recalling the definition of ψ, the pseudo count of the state-action pair (s, a) is N_{t_k}^i(s, a) = ‖ψ_{s,·}^i(t_k)‖_1 − ‖ψ_{s,·}^i(0)‖_1. Then we obtain

\sum_{s′∈S} | P_∗^i(s′ | z_t) − P̂_k^i(s′ | z_t) | + \sum_{s′∈S} | P_k^i(s′ | z_t) − P̂_k^i(s′ | z_t) | ≤ 2 β_k^i(z_t) + 2 ( I\{P_∗^i ∉ B_k\} + I\{P_k^i ∉ B_k\} ).    (29)
We assume the last episode is the longest; even if this assumption does not hold, we can enlarge the summands to T_{K_T} − τ1 without affecting the order of the regret bound. Under this assumption, since no episode is longer than the last one, i.e., t_{k+1} − 1 − (t_k + τ1) ≤ T_{K_T} − τ1, we obtain

\sum_{k=1}^{K_T} \sum_{t=t_k+τ1}^{t_{k+1}−1} β_k^i(z_t) ≤ \sum_{k=1}^{K_T} \sum_{t=1}^{T_{K_T}−τ1} β_k^i(z_t).    (30)

Note that \sum_{s′∈S} | P_∗^i(s′ | z_t) − P̂_k^i(s′ | z_t) | ≤ 2 always holds, and with our assumption τ1 ≤ (T_1 + K_T − 1)/2 it is easy to show that β_k^i(z_t) ≤ 2 whenever N_{t_k}^i ≥ T_{K_T} − τ1. Then we obtain

\sum_{k=1}^{K_T} \sum_{t=1}^{T_{K_T}−τ1} \min\{2, β_k^i(z_t)\} ≤ \sum_{k=1}^{K_T} \sum_{t=1}^{T_{K_T}−τ1} 2 I( N_{t_k}^i < T_{K_T} − τ1 ) + \sum_{k=1}^{K_T} \sum_{t=1}^{T_{K_T}−τ1} I( N_{t_k}^i ≥ T_{K_T} − τ1 ) \sqrt{ \frac{14 S \log(2 N t_k T)}{\max\{1, N_{t_k}^i(z_t)\}} }.    (31)
Consider the first part of equation 31. The maximum of N_{t_k}^i in this case is T_{K_T} − τ1, and since there are SA state-action pairs in total, the first part of equation 31 is bounded by \sum_{k=1}^{K_T} \sum_{t=1}^{T_{K_T}−τ1} 2 I( N_{t_k}^i < T_{K_T} − τ1 ) ≤ 2 (T_{K_T} − τ1) S A. Since T_{K_T} = T_1 + K_T − 1 and by Lemma 10, we get 2 (T_{K_T} − τ1) S A = 2 (T_1 + K_T − τ1 − 1) S A = O(√T).

Consider the second part of equation 31. Let N_t^i(s, a) denote the count of (s, a) before time t (not including t). Since we only count the exploration phases of the episodes, N_t^i(s, a) can be written as

N_t^i(s, a) = | { τ < t, τ ∈ [t_k, t_k + τ1], k ≤ k(t) : (s_τ^i, a_τ^i) = (s, a) } |,

where k(t) is the index of the episode containing time t.
In the second part of equation 31, when N_{t_k}^i ≥ T_{K_T} − τ1, our assumption τ1 ≤ (T_1 + K_T − 1)/2 gives 2 τ1 ≤ T_1 + K_T − 1 = T_{K_T}, hence T_{K_T} − τ1 ≥ τ1 and therefore N_{t_k}^i(s, a) ≥ τ1. For any t ∈ [t_k, t_k + τ1], we have N_t^i(s, a) ≤ N_{t_k}^i(s, a) + τ1 ≤ 2 N_{t_k}^i(s, a).

Therefore N_t^i(s, a) ≤ 2 N_{t_k}^i(s, a), and under this condition we can bound the confidence terms as follows:
\sum_{k=1}^{K_T} \sum_{t=1}^{T_{K_T}−τ1} β_k^i(z_t) ≤ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}−1} \sqrt{ \frac{14 S \log(2 N t_k T)}{\max\{1, N_{t_k}^i(z_t)\}} }
 ≤ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}−1} \sqrt{ \frac{14 S \log(2 N T^2)}{\max\{1, N_{t_k}^i(z_t)\}} }
 = \sum_{t=1}^{T} \sqrt{ \frac{28 S \log(2 N T^2)}{\max\{1, N_t^i(z_t)\}} }
 ≤ \sqrt{56 S \log(2 N T)} \sum_{t=1}^{T} \frac{1}{\sqrt{\max\{1, N_t^i(z_t)\}}},    (32)

where the second inequality in equation 32 is due to t_k ≤ T for all episodes and the equality uses N_t^i(s, a) ≤ 2 N_{t_k}^i(s, a).
Then, similarly to Ouyang et al. (2017), since N_t^i(z_t) is the count of visits to z_t, we have

\sum_{t=1}^{T} \frac{1}{\sqrt{\max\{1, N_t^i(z_t)\}}} = \sum_z \sum_{t=1}^{T} \frac{ I\{z_t = z\} }{ \sqrt{\max\{1, N_t^i(z)\}} }
 = \sum_z ( I\{N_{T+1}^i(z) > 0\} + \sum_{j=1}^{N_{T+1}^i(z)−1} \frac{1}{\sqrt{j}} )
 ≤ \sum_z ( I\{N_{T+1}^i(z) > 0\} + 2 \sqrt{N_{T+1}^i(z)} ) ≤ 3 \sum_z \sqrt{N_{T+1}^i(z)}.

Since \sum_z N_{T+1}^i(z) ≤ T, we have

3 \sum_z \sqrt{N_{T+1}^i(z)} ≤ 3 \sqrt{ S N \sum_z N_{T+1}^i(z) } = 3 \sqrt{S N T}.    (33)

Combining equation 32 and equation 33, we get

2 H \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}−1} β_k^i(z_t) ≤ 6 \sqrt{56} H S \sqrt{N T \log(NT)} ≤ 48 H S \sqrt{N T \log(NT)}.

Then equation 30 can be bounded as

\sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}−1} β_k^i(z_t) ≤ 24 S \sqrt{N T \log(NT)} + 2 S A (T_1 + K_T − τ1 − 1).    (34)
Choosing δ = 1/T in Lemma 12 and using Lemma 13, we obtain

P( P_k^i ∉ B_k ) = P( P_∗^i ∉ B_k ) ≤ \frac{1}{15 T t_k^6}.

Then

2 E_{θ∗} [ \sum_{k=1}^{K_T} T_k ( I\{θ∗ ∉ B_k\} + I\{θ_k ∉ B_k\} ) ] ≤ \frac{4}{15} \sum_{k=1}^{∞} t_k^{−6} ≤ \frac{4}{15} \sum_{k=1}^{∞} k^{−6} ≤ 1,    (35)

and therefore

2 H E_{θ∗} [ \sum_{k=1}^{K_T} T_k ( I\{θ∗ ∉ B_k\} + I\{θ_k ∉ B_k\} ) ] ≤ H.    (36)

Therefore, we obtain the per-arm bound

E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} \sum_{s′∈S} ( P_∗^i(s′ | z_t) − P_k^i(s′ | z_t) ) v(s′, θ_k) ] ≤ H + 4 S N H (T_1 + K_T − τ1 − 1) + 48 H S \sqrt{N T \log(NT)}.    (37)
Next we consider the state transitions of all arms. Recall that the joint state at time t is s_t. Because every arm evolves independently, the transition probability from s_t to s_{t+1} is

P(s_{t+1} | s_t, θ∗) = \prod_{i=1}^{N} P_∗^i( s_{t+1}^i | s_t^i ),

where P_∗^i is the true transition function of arm i. By Lemma 11 and our assumption that all arms share the same state space S, we obtain

\sum_{s_{t+1}} | P(s_{t+1} | s_t, θ∗) − P(s_{t+1} | s_t, θ_k) | ≤ \sum_{i=1}^{N} ‖ P_∗^i( s_{t+1}^i | s_t^i ) − P_k^i( s_{t+1}^i | s_t^i ) ‖_1 ≤ N ‖ P_∗^i( s_{t+1}^i | s_t^i ) − P_k^i( s_{t+1}^i | s_t^i ) ‖_1.    (38)

Therefore, we can bound R_3^0 as

R_3^0 ≤ N H + 4 S N^2 H (T_1 + K_T − τ1 − 1) + 48 S N H \sqrt{N T \log(NT)}.    (39)
Bounding R_3^1. By Proposition 2, we know that

‖ b_t^∗ − b̂_t ‖_1 ≤ L_1 ‖ R^∗ − R_k ‖_1 + L_2 \max_s ‖ P^∗(s, :) − P_k(s, :) ‖_2.

Note that the elements of the true transition matrix P^∗ and the sampled matrix P_k lie in the interval (0, 1). Then, by standard norm inequalities,

\max_s ‖ P^∗(s, :) − P_k(s, :) ‖_2 ≤ ‖ P^∗ − P_k ‖_1.

Therefore, the belief error at any time can be bounded as

‖ b_t^∗ − b̂_t ‖_1 ≤ L_1 ‖ R^∗ − R_k ‖_1 + L_2 ‖ P^∗ − P_k ‖_1.    (40)

Recall that in the confidence set M_k the error bound is the same for ‖ R^∗ − R_k ‖_1 and ‖ P^∗ − P_k ‖_1. Using the bounds in equation 34 and equation 35, we can bound R_3^1 as

R_3^1 ≤ H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}−1} ( L_1 ‖ R^∗ − R_k ‖_1 + L_2 ‖ P^∗ − P_k ‖_1 ) ]
 ≤ (L_1 + L_2 N) H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_{k+1}−1} ( 2 β_k^i(z_t) + 2 ( I\{P^∗ ∉ B_k\} + I\{P_k ∉ B_k\} ) ) ]
 ≤ 48 (L_1 + L_2 N) S H \sqrt{N T \log(NT)} + (L_1 + L_2 N) H + 4 (L_1 + L_2 N) S N H (T_1 + K_T − τ1 − 1).    (41)
Bounding R_3^2. Based on equation 34 and equation 35, we can bound R_3^2 as

R_3^2 = S^2 H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ‖ R^∗(· | s) − R_k(· | s) ‖_1 ]
 ≤ S^2 H E_{θ∗} [ \sum_{k=1}^{K_T} \sum_{t=t_k+τ1+1}^{t_{k+1}−1} ( 2 β_k^i(z_t) + 2 ( I\{R^∗ ∉ B_k\} + I\{R_k ∉ B_k\} ) ) ]
 ≤ H S^2 + 4 S^3 N H (T_1 + K_T − τ1 − 1) + 48 H S^3 \sqrt{N T \log(NT)}.    (42)

Combining the bounds in equation 39, equation 41 and equation 42, we bound R_3 as

R_3 ≤ 48 (L_1 + L_2 N) S H \sqrt{N T \log(NT)} + 4 (L_1 + L_2 N) S N H (T_1 + K_T − τ1 − 1) + (L_1 + L_2 N) H
 + N H + 4 S N^2 H (T_1 + K_T − τ1 − 1) + 48 S N H \sqrt{N T \log(NT)}
 + H S^2 + 4 S^3 N H (T_1 + K_T − τ1 − 1) + 48 H S^3 \sqrt{N T \log(NT)}
 = 48 (L_1 + L_2 N + N + S^2) S H \sqrt{N T \log(NT)} + (L_1 + L_2 N + N + S^2) H
 + 4 (L_1 + L_2 N + N^2 + S^2) S N H (T_1 + K_T − τ1 − 1).    (43)
B.4 BOUND R4
Lemma 9. R_4 satisfies the following bound:
R_4 ≤ 48 (L_1 + L_2 N + N + S^2) S r_max \sqrt{N T \log(NT)} + (L_1 + L_2 N + N + S^2) r_max + 4 (L_1 + L_2 N + N + S^2) S A r_max (T_1 + K_T − τ1 − 1).

1. What is the focus of the paper regarding the online Restless Bandit problem?
2. What are the strengths of the proposed Thompson Sampling based learning algorithm?
3. What are the weaknesses of the paper, particularly in terms of model simplicity and experiment comparisons?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper considers the online Restless Bandit (RMAB) problem, where each arm of the RMAB is a Markov chain. The state space of each arm can be potentially unique, the states are assumed to be unobserved and the reward function is unknown. The goal is to design efficient algorithms that determine which arm to pull each round, so as to minimize the accumulated regret.
This paper proposes a Thompson Sampling based learning algorithm called TSEETC, which operates using alternating explore and exploit stages within each episode. The key contribution of the paper lies in establishing a Bayesian regret bound of O(\sqrt(T)). Finally the paper also presents proof-of-concept empirical experiments to corroborate the theoretical results.
Strengths And Weaknesses
Strengths
– I think the key strength of the paper lies in establishing the O(\sqrt(T)) regret bound, which as the paper describes is an improvement over known bounds in the literature for RMABs. However, I think this positive also comes with the caveat that the RMAB considered here is rather simple with each arm representing a Markov Chain rather than an MDP (more on this under “weaknesses”)
– The paper is very well written and is pleasurable to read. The coverage of existing related work is excellent and I believe the paper does a great job of pointing out limitations of previous work and the new contributions.
Weaknesses
– Model: Markov Chains While the paper does a good job of highlighting the limitations of previous work and how all the previously known bounds are weaker than the one proposed, I think a key factor is that the setting considered makes it considerably simple: the RMAB arms are Markov chains, whose state transition probabilities are unaffected by actions taken. In most literature on RMABs, each arm is typically modeled as an MDP, which makes action planning considerably difficult.
– The empirical experiments could compare against more interesting baselines: For example, the paper mentions the Liu et. al. 2010 paper in related work saying how their log(T) regret doesn’t mean much. However, their setting seems most relevant as they also consider RMAB arms with Markov chain. I’d be interested in checking how their algorithm compares against the proposed algorithm.
– Although generally well written, the paper has several typos/missing words which need cleaning up (for eg: Page 2 – exiting, show that outperforms, Page 3 – methods to unknown states, reward functions)
Clarity, Quality, Novelty And Reproducibility
The paper is clear and easy to read. The novelty and comparison against existing literature is clear and seems comprehensive. |
ICLR | Title
ONLINE RESTLESS BANDITS WITH UNOBSERVED STATES
Abstract
We study the online restless bandit problem, where each arm evolves according to a Markov chain independently, and the reward of pulling an arm depends on both the current state of the corresponding Markov chain and the action. The agent (decision maker) does not know the transition kernels and reward functions, and cannot observe the states of arms even after pulling. The goal is to sequentially choose which arms to pull so as to maximize the expected cumulative rewards collected. In this paper, we propose TSEETC, a learning algorithm based on Thompson Sampling with Episodic Explore-Then-Commit. The algorithm proceeds in episodes of increasing length and each episode is divided into exploration and exploitation phases. In the exploration phase in each episode, action-reward samples are collected in a round-robin way and then used to update the posterior as a mixture of Dirichlet distributions. At the beginning of the exploitation phase, TSEETC generates a sample from the posterior distribution as true parameters. It then follows the optimal policy for the sampled model for the rest of the episode. We establish the Bayesian regret bound Õ( √ T ) for TSEETC, where T is the time horizon. This is the first bound that is close to the lower bound of restless bandits, especially in an unobserved state setting. We show through simulations that TSEETC outperforms existing algorithms in regret.
1 INTRODUCTION
The restless multi-armed bandit (RMAB) problem is a general setup for modeling many sequential decision-making problems, ranging from wireless communication (Tekin & Liu, 2011; Sheng et al., 2014) and sensor/machine maintenance (Ahmad et al., 2009; Akbarzadeh & Mahajan, 2021) to healthcare (Mate et al., 2020; 2021). The problem involves one agent and N arms. Each arm i is modulated by a Markov chain M^i with state transition function P^i and reward function R^i. At each time, the agent decides which arm to pull; after the pull, all arms undergo an action-dependent Markovian state transition. The goal is to choose which arm to pull so as to maximize the expected reward, i.e., E[\sum_{t=1}^{T} r_t], where r_t is the reward at time t and T is the time horizon.
In this paper, we consider the online restless bandit problem with unknown parameters (transition functions and reward functions) and unobserved states. Many works concentrate on learning the unknown parameters (Liu et al., 2010; 2011; Ortner et al., 2012; Wang et al., 2020; Xiong et al., 2022a;b) while ignoring the possibility that the states are also unknown. The unobserved-state assumption is common in real-world applications such as cache access (Paria & Sinha, 2021) and recommendation systems (Peng et al., 2020). In the cache access problem, the user only observes the perceived delay and cannot know whether the requested content is stored in the cache before or after the access. Likewise, in a recommender system, we do not know the user's preference over the items. Some studies do consider unobserved states, but they often assume the parameters are known (Mate et al., 2020; Meshram et al., 2018; Akbarzadeh & Mahajan, 2021) or lack theoretical results (Peng et al., 2020; Hu et al., 2020). Moreover, the existing algorithms with theoretical guarantees (Zhou et al., 2021; Jahromi et al., 2022) do not match the regret lower bound for RMABs (Ortner et al., 2012).
One common way to handle unknown parameters with observed states is the optimism in the face of uncertainty (OFU) principle (Liu et al., 2010; Ortner et al., 2012; Wang et al., 2020). The regret bounds in these works are sometimes weak because the baseline they consider, such as always pulling a fixed arm (Liu et al., 2010), is not optimal for the RMAB problem. Ortner et al. (2012) derive the lower bound Õ(√T) for the RMAB problem. However, it is not clear whether there is an efficient computational method to find the optimistic model within the confidence region (Lakshmanan et al., 2015). Another way to estimate the unknown parameters is the Thompson Sampling (TS) method (Jung & Tewari, 2019; Jung et al., 2019; Jahromi et al., 2022; Hong et al., 2022). A TS algorithm does not need to solve for all instances that lie within the confidence sets, unlike OFU-based algorithms (Ouyang et al., 2017). Moreover, empirical studies suggest that TS algorithms outperform OFU-based algorithms in bandit and Markov decision process (MDP) problems (Scott, 2010; Chapelle & Li, 2011; Osband & Van Roy, 2017).

Some studies assume that only the states of pulled arms are observable (Mate et al., 2020; Liu & Zhao, 2010; Wang et al., 2020; Jung & Tewari, 2019). They translate the partially observable Markov decision process (POMDP) problem into a fully observable MDP by regarding the last observed state and the time elapsed since then as a meta-state (Mate et al., 2020; Jung & Tewari, 2019), which is much simpler because more is observed about pulled arms. Mate et al. (2020) and Liu & Zhao (2010) derive optimal index policies but assume known parameters. Restless-UCB in Wang et al. (2020) achieves a regret bound of Õ(T^{2/3}), which does not match the lower bound Õ(√T) and is also restricted to a specific Markov model. There are also works in which an arm's state is not visible even after pulling (Meshram et al., 2018; Akbarzadeh & Mahajan, 2021; Peng et al., 2020; Hu et al., 2020; Zhou et al., 2021; Yemini et al., 2021), as well as the classic POMDP setting (Jahromi et al., 2022). However, several challenges remain. First, Meshram et al. (2018) and Akbarzadeh & Mahajan (2021) study the RMAB problem with unobserved states but known parameters, whereas the true parameter values are often unavailable in practice. Second, some works study RMABs from a learning perspective, e.g., Peng et al. (2020); Hu et al. (2020), but provide no regret analysis. Third, existing policies with regret bound Õ(T^{2/3}) (Zhou et al., 2021; Jahromi et al., 2022) do not have a regret guarantee that scales as Õ(√T), the lower bound for the RMAB problem (Ortner et al., 2012). Yemini et al. (2021) consider arms modulated by two unobserved states with linear rewards; this linear structure is substantial side information that the decision maker can exploit, and a problem-dependent log(T) bound is given.
To the best of our knowledge, there are no provably optimal policies that perform close to the offline optimum and match the lower bound for restless bandits, especially in the unobserved-state setting. The unobserved states bring several challenges. First, we need to control the estimation error of the states, which are never directly observed. Second, this error depends on the model parameters in a complex way via Bayesian updating, and the parameters themselves are unknown. Third, since the state is not fully observable, the decision maker cannot keep track of the number of visits to state-action pairs, a quantity that is crucial in the theoretical analysis. We design a learning algorithm, TSEETC, to estimate these unknown parameters and, benchmarked against a stronger oracle, we show that it achieves a tighter regret bound. In summary, we make the following contributions:
Problem formulation. We consider the online restless bandit problem with unobserved states and unknown parameters. Compared with Jahromi et al. (2022), our reward functions are unknown.

Algorithmic design. We propose TSEETC, a learning algorithm based on Thompson Sampling with Episodic Explore-Then-Commit. The whole learning horizon is divided into episodes of increasing length, and each episode is split into an exploration phase and an exploitation phase. In the exploration phase, to estimate the unknown parameters, we update their posterior distributions as mixtures of Dirichlet distributions; for the unobserved states, we use the belief state to encode the historical information. In the exploitation phase, we sample parameters from the posterior distribution and derive an optimal policy for the sampled model. Moreover, we let the deterministic episode lengths increase so as to control the total number of episodes, which is crucial for bounding the regret caused by exploration.
Regret analysis. We consider a stronger oracle, which solves the POMDP based on our belief state, and we define pseudo counts to track the state-action pairs. Under a Bayesian framework, we show that the expected regret of TSEETC accumulated up to time T is bounded by Õ(√T), where Õ hides logarithmic factors. This bound improves on the existing results (Zhou et al., 2021; Jahromi et al., 2022).

Experiment results. We conduct proof-of-concept experiments and compare our policy with existing baseline algorithms. Our results show that TSEETC outperforms existing algorithms and achieves a near-optimal regret bound.
2 RELATED WORK
We review related work in two main areas: learning algorithms for unknown parameters, and methods for handling unknown states.

Unknown parameters. Since the system parameters are unknown in advance, it is essential to study RMAB problems from a learning perspective. Generally speaking, existing works fall into two categories: OFU-based (Ortner et al., 2012; Wang et al., 2020; Xiong et al., 2022a; Zhou et al., 2021; Xiong et al., 2022b) or TS-based (Jung et al., 2019; Jung & Tewari, 2019; Jahromi et al., 2022; Hong et al., 2022). OFU-based algorithms typically construct confidence sets for the system parameters at each time, find the optimistic estimator associated with the maximum reward, and then select an action based on that optimistic estimator. However, these methods may not perform close to the offline optimum because the baseline policy they consider, such as pulling only one arm, is often a heuristic and not optimal; in this case, a regret bound of O(log T) (Liu et al., 2010) is less meaningful. Apart from these works, posterior sampling (Jung & Tewari, 2019; Jung et al., 2019) has been used to solve this problem. A TS algorithm samples a set of MDP parameters from the posterior distribution and then selects actions based on the sampled model. Jung & Tewari (2019) and Jung et al. (2019) provide theoretical guarantees of Õ(√T) in the Bayesian setting. TS algorithms have been shown to outperform optimistic algorithms in bandit and MDP problems (Scott, 2010; Chapelle & Li, 2011; Osband & Van Roy, 2017).
Unknown states. Several works assume that the state of the pulled arm is observed (Mate et al., 2020; Liu & Zhao, 2010; Wang et al., 2020; Jung & Tewari, 2019). Mate et al. (2020) and Liu & Zhao (2010) allow unobserved states but assume known parameters. Wang et al. (2020) construct an offline instance and prove a regret bound of Õ(T^{2/3}). Jung & Tewari (2019) consider episodic RMAB problems and guarantee an Õ(√T) regret bound in the Bayesian setting. Other studies assume that the states remain unobserved even after pulling. Akbarzadeh & Mahajan (2021) and Meshram et al. (2018) consider the RMAB problem with unknown states but known system parameters, and provide no regret guarantee. Peng et al. (2020) and Hu et al. (2020) consider unknown parameters but also give no theoretical results. The works closest to ours are Zhou et al. (2021) and Jahromi et al. (2022). Zhou et al. (2021) consider all arms to be modulated by a common unobserved Markov chain; they propose an estimation method based on the spectral method (Anandkumar et al., 2012) and a learning algorithm based on the upper confidence bound (UCB) strategy (Auer et al., 2002), obtaining a regret bound of Õ(T^{2/3}), which leaves a gap to the lower bound Õ(√T) (Ortner et al., 2012). Jahromi et al. (2022) consider the POMDP setting and propose pseudo counts to track state-action pairs; their learning algorithm is based on Ouyang et al. (2017) and the regret bound is also Õ(T^{2/3}). Moreover, their algorithm is not implementable because the pseudo counts are conditioned on the true counts, which are not available when the states are unobserved.
3 PROBLEM SETTING
Consider a restless bandit problem with one agent and N arms. Each arm i ∈ [N] := {1, 2, . . . , N} is associated with an independent discrete-time Markov chain M^i = (S^i, P^i), where S^i is the state space and P^i ∈ R^{S^i×S^i} is the transition function. Let s_t^i denote the state of arm i at time t and s_t = (s_t^1, s_t^2, . . . , s_t^N) the joint state of all arms. Each arm i is also associated with a reward function R^i ∈ R^{S^i×R}, where R^i(r | s) is the probability that the agent receives reward r ∈ R when pulling arm i in state s. We assume the state spaces S^i and the reward set R are finite and known to the agent. The parameters P^i and R^i, i ∈ [N], are unknown, and the state s_t is unobserved by the agent. For notational simplicity, we assume that all arms have the same state space S with size S; our results generalize in a straightforward way to different state spaces. The whole game is divided into T time steps. The initial state s_1^i of each arm i ∈ [N] is drawn independently from a distribution h^i, which we assume to be known to the agent. At each time t, the agent chooses one arm a_t ∈ [N] to pull and receives a reward r_t ∈ R with probability R^{a_t}(r_t | s_t^{a_t}). Note that only the pulled arm provides reward feedback. The decision on which arm a_t to pull is based on the observed history H_t = [a_1, r_1, a_2, r_2, . . . , a_{t−1}, r_{t−1}]. Note that the states of the arms are never observable, even after pulling. Each arm i makes a state transition independently according to P^i, whether it is pulled or not. This process continues until the end of the game. The goal of the agent is to maximize the total expected reward.
We use θ to denote the unknown P^i and R^i for i ∈ [N] collectively. Since the true states are unobservable, the agent maintains a belief state b_t^i = [b_t^i(s, θ), s ∈ S^i] ∈ ∆_{S^i} for each arm i, where

b_t^i(s, θ) := P(s_t^i = s | H_t, θ),

and ∆_{S^i} := {b ∈ R_+^{S^i} : \sum_{s∈S^i} b(s) = 1} is the probability simplex in R^{S^i}. Note that b_t^i(s, θ) depends on the unknown model parameter θ, which itself has to be learned by the agent. We aggregate all arms into a single Markov chain M and denote its transition matrix and reward function by P and R, respectively. For a given θ, the overall belief state b_t = (b_t^1, b_t^2, . . . , b_t^N) is a sufficient statistic for H_{t−1} (Smallwood & Sondik, 1973), so the agent can base the decision at time t on b_t only. Let ∆_b := ∆_{S^1} × · · · × ∆_{S^N}. A deterministic stationary policy π : ∆_b → [N] maps a belief state to an action. The long-term average reward of a policy π is defined as
J^π(h, θ) := \limsup_{T→∞} \frac{1}{T} E[ \sum_{t=1}^{T} r_t | h, θ ].    (1)

We use J(h, θ) = \sup_π J^π(h, θ) to denote the optimal long-term average reward. We assume J(h, θ) is independent of the initial distribution h, as in Jahromi et al. (2022), and denote it by J(θ). We make the following assumptions.

Assumption 1. The smallest element ϵ1 of the transition functions P^i, i ∈ [N], is strictly positive.

Assumption 2. The smallest element ϵ2 of the reward functions R^i, i ∈ [N], is strictly positive.
Assumption 1 and Assumption 2 are strong in general, but they help us bound the error of belief estimation (De Castro et al., 2017). Assumption 1 also makes the MDP weakly communicating (Bertsekas et al., 2011). For a weakly communicating MDP, it is known that there exists a bounded function v(·, θ) : ∆_b → R such that for all b ∈ ∆_b (Bertsekas et al., 2011),

J(θ) + v(b, θ) = \max_a { r(b, a) + \sum_r P(r | b, a, θ) v(b′, θ) },    (2)

where v is the relative value function, r(b, a) = \sum_s \sum_r b^a(s, θ) R^a(r | s) r is the expected reward, b′ is the updated belief after obtaining reward r, and P(r | b, a, θ) is the probability of observing r in the next step, conditioned on the current belief b and action a. The corresponding optimal policy is the maximizer of the right-hand side of equation 2. Since the value function v(·, θ) is finite, we can bound the span function sp(θ) := \max_b v(b, θ) − \min_b v(b, θ) as in Zhou et al. (2021). We give the details of this bound in Proposition 1 and denote the bound by H.
We consider the Bayesian regret. The parameter θ∗ is randomly generated from a known prior distribution Q at the beginning and then fixed but unknown to the agent. We measure the efficiency of a policy π by its regret, defined as the expected gap between the cumulative reward of an offline oracle and that of π, where the oracle is the optimal policy with full knowledge of θ∗ but without observing the states. The offline oracle is similar to that of Zhou et al. (2021) and is stronger than those considered in Azizzadenesheli et al. (2016) and Fiez et al. (2018). We focus on the Bayesian regret of policy π (Ouyang et al., 2017; Jung & Tewari, 2019):

R_T := E_{θ∗∼Q} [ \sum_{t=1}^{T} (J(θ∗) − r_t) ].    (3)
The above expectation is with respect to the prior distribution about θ∗, the randomness in state transitions and the random reward.
4 THE TSEETC ALGORITHM
In section 4.1, we define the belief state and show how to update it with new observations. In section 4.2, we show how to update the posterior distributions under unknown states. In section 4.3, we present the details of our learning algorithm TSEETC.
4.1 BELIEF ENCODER FOR UNOBSERVED STATE
Here we focus on the belief update for arm $i$ under the true parameter $\theta^*$. At time $t$, the belief that arm $i$ is in state $s$ is $b^i_t(s, \theta^*)$. After pulling arm $i$, we obtain the observation $r_t$, and the belief $b^i_{t+1}(s', \theta^*)$ is updated as follows:
$$ b^i_{t+1}(s', \theta^*) = \frac{\sum_{s} b^i_t(s, \theta^*)\, R^i_*(r_t \mid s)\, P^i_*(s' \mid s)}{\sum_{s} b^i_t(s, \theta^*)\, R^i_*(r_t \mid s)}, \tag{4} $$
where $P^i_*(s' \mid s)$ is the probability of transitioning from state $s$ at time $t$ to state $s'$, and $R^i_*(r_t \mid s)$ is the probability of obtaining reward $r_t$ in state $s$.
If arm $i$ is not pulled, we update its belief as follows:
$$ b^i_{t+1}(s', \theta^*) = \sum_{s} b^i_t(s, \theta^*)\, P^i_*(s' \mid s). \tag{5} $$
Then, at each time, we aggregate the beliefs of all arms into $b_t$. Based on equation 2, we can derive the optimal action $a_t$ for the current belief $b_t$.
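To make the update concrete, the following is a minimal sketch of equations 4 and 5 in Python. The array layout (`P[s, s']` for transitions, `R[s, r]` for reward probabilities) and the reward indexing are illustrative assumptions, not part of the paper.

```python
import numpy as np

def belief_update_pulled(b, P, R, r_idx):
    """Equation 4: update the belief of the pulled arm after observing reward index r_idx.

    b: (S,) current belief, P: (S, S) transition matrix P[s, s'],
    R: (S, n_rewards) observation matrix R[s, r] = P(reward r | state s).
    """
    likelihood = b * R[:, r_idx]        # b_t(s) * R(r_t | s)
    new_b = likelihood @ P              # sum_s b_t(s) R(r_t | s) P(s' | s)
    return new_b / likelihood.sum()     # normalize by sum_s b_t(s) R(r_t | s)

def belief_update_unpulled(b, P):
    """Equation 5: prediction-only update for an arm that was not pulled."""
    return b @ P
```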
4.2 MIXTURE OF DIRICHLET DISTRIBUTION
In this section, we estimate the unknown $P^i$ and $R^i$ based on the Dirichlet distribution. The Dirichlet distribution is parameterized by a count vector $\phi = (\phi_1, \ldots, \phi_k)$ with $\phi_i \ge 0$, such that its density is $f(p \mid \phi) \propto \prod_{i=1}^{k} p_i^{\phi_i - 1}$ (Ghavamzadeh et al., 2015).
Since the true states are unobserved, all state sequences should be considered, each weighted in proportion to its likelihood (Ross et al., 2011). Denote the reward history collected from time $t_1$ to $t_2$ for arm $i$ by $r^i_{t_1:t_2}$; similarly, the state history is denoted by $s^i_{t_1:t_2}$ and the belief-state history by $b^i_{t_1:t_2}$. With this history information, the posterior distributions $g_t(P^i)$ and $g_t(R^i)$ at time $t$ can be updated as in Lemma 1.
Lemma 1. Under the unobserved state setting, assume the transition function $P^i$ has prior $g_0(P^i) = f\big(\frac{P^i - \epsilon_1 \mathbf{1}}{1 - \epsilon_1} \mid \phi^i\big)$ and the reward function $R^i$ has prior $g_0(R^i) = f\big(\frac{R^i - \epsilon_2 \mathbf{1}}{1 - \epsilon_2} \mid \psi^i\big)$. Given the information $r^i_{0:t}$ and $b^i_{0:t}$, the posterior distributions are
$$ g_t(P^i) \propto \sum_{\bar{s}^i_t \in \mathcal{S}^t_i} g_0(P^i)\, w(s^i_{0:t}) \prod_{s, s'} \left( \frac{P^i(s' \mid s) - \epsilon_1}{1 - \epsilon_1} \right)^{N^i_{s,s'}(\bar{s}^i_t) + \phi^i_{s,s'} - 1}, \tag{6} $$
$$ g_t(R^i) \propto \sum_{\bar{s}^i_t \in \mathcal{S}^t_i} g_0(R^i)\, w(s^i_{0:t}) \prod_{s, r} \left( \frac{R^i(r \mid s) - \epsilon_2}{1 - \epsilon_2} \right)^{N^i_{s,r}(\bar{s}^i_t) + \psi^i_{s,r} - 1}, \tag{7} $$
where $w(s^i_{0:t})$ is the likelihood of the state sequence $s^i_{0:t}$ and $\mathbf{1}$ is the all-ones vector.
The vector $\mathbf{1}$ can have different lengths, matching the dimensions of $P$ and $R$. This procedure is summarized in Algorithm 1.
Algorithm 1 Posterior Update for $R^i(s, \cdot)$ and $P^i(s, \cdot)$
1: Input: the history length $\tau_1$, the state space $\mathcal{S}_i$, the belief history $b^i_{0:\tau_1}$, the reward history $r^i_{0:\tau_1}$, the initial parameters $\phi^i_{s,s'}$, $\psi^i_{s,r}$ for $s, s' \in \mathcal{S}_i$, $r \in \mathcal{R}$
2: generate the $S_i^{\tau_1}$ possible state sequences
3: calculate the weight $w(j) = \prod_{t=1}^{\tau_1} b^i_t(s_t, \theta)$ along sequence $j$, $j \in S_i^{\tau_1}$
4: for $j$ in $1, \ldots, S_i^{\tau_1}$ do
5:   count the occurrence times of the events $(s, s')$ and $(s, r)$ in sequence $j$ as $N^i_{s,s'}$, $N^i_{s,r}$
6:   update $\phi^i_{s,s'} \leftarrow \phi^i_{s,s'} + N^i_{s,s'}$, $\psi^i_{s,r} \leftarrow \psi^i_{s,r} + N^i_{s,r}$
7:   aggregate the $\phi^i_{s,s'}$ as $\phi(j)$ and the $\psi^i_{s,r}$ as $\psi(j)$ for all $s, s' \in \mathcal{S}_i$, $r \in \mathcal{R}$
8: end for
9: update the mixture Dirichlet distributions $g_{\tau_1}(P^i) \propto \sum_{j=1}^{S_i^{\tau_1}} w(j) f\big(\frac{P^i - \epsilon_1 \mathbf{1}}{1-\epsilon_1} \mid \phi(j)\big)$, $g_{\tau_1}(R^i) \propto \sum_{j=1}^{S_i^{\tau_1}} w(j) f\big(\frac{R^i - \epsilon_2 \mathbf{1}}{1-\epsilon_2} \mid \psi(j)\big)$
With Algorithm 1, we can update the posterior distribution over the unknown parameters and sample from it as if the samples were the true parameters. The belief estimation error can be bounded by the distance between the sampled parameters and the true values (Proposition 2). The theoretical guarantee on the estimation errors of the unknown parameters is provided in Lemma 2.
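The following sketch shows one way to implement the mixture-of-Dirichlet update of Algorithm 1 for a single arm, by enumerating all state sequences of the exploration phase. The function name and the representation of the mixture as `(weight, phi, psi)` tuples are illustrative assumptions; the enumeration is only feasible for small $\tau_1$ and $S$, as in the proof-of-concept experiments.

```python
import itertools
import numpy as np

def posterior_update(phi, psi, beliefs, rewards, S):
    """Sketch of Algorithm 1: mixture-of-Dirichlet posterior update for one arm.

    phi: (S, S) prior Dirichlet counts for P(s' | s)
    psi: (S, n_rewards) prior Dirichlet counts for R(r | s)
    beliefs: list of tau1 + 1 belief vectors b_t from the exploration phase
    rewards: list of tau1 observed reward indices
    Returns a list of (weight, phi_j, psi_j) mixture components.
    """
    tau1 = len(rewards)
    components = []
    # enumerate all possible hidden-state sequences of length tau1 + 1
    for seq in itertools.product(range(S), repeat=tau1 + 1):
        # weight of the sequence: product of belief probabilities along it
        w = np.prod([beliefs[t][seq[t]] for t in range(tau1 + 1)])
        if w == 0.0:
            continue
        phi_j, psi_j = phi.copy(), psi.copy()
        for t in range(tau1):
            phi_j[seq[t], seq[t + 1]] += 1   # count the transition (s, s')
            psi_j[seq[t], rewards[t]] += 1   # count the emission (s, r)
        components.append((w, phi_j, psi_j))
    total = sum(w for w, _, _ in components)
    return [(w / total, phi_j, psi_j) for w, phi_j, psi_j in components]
```

Sampling $\theta$ from this posterior then amounts to picking a component $j$ with probability $w(j)$ and drawing each row of $P$ and $R$ from the corresponding Dirichlet, rescaled to $[\epsilon, 1]$ as in Lemma 1.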
4.3 OUR ALGORITHM
Our algorithm, TSEETC, operates in episodes of different lengths. Each episode is split into an exploration phase and an exploitation phase. Denote the number of episodes by $K_T$ and the first time step of episode $k$ by $t_k$. We use $T_k$ to denote the length of episode $k$, determined as $T_k = T_1 + k - 1$, where $T_1 = \big\lceil \sqrt{\frac{T+1}{2}} \big\rceil$. The length of the exploration phase in each episode is fixed to $\tau_1$, which satisfies $\tau_1 K_T = O(\sqrt{T})$ and $\tau_1 \le \frac{T_1 + K_T - 1}{2}$. With these notations, our whole algorithm is shown below.
Algorithm 2 Thompson Sampling with Episodic Explore-Then-Commit
1: Input: priors $g_0(P)$, $g_0(R)$, initial belief $b_0$, exploration length $\tau_1$, the first episode length $T_1$
2: for episode $k = 1, 2, \ldots$ do
3:   start the first time of episode $k$, $t_k := t$
4:   generate $R(t_k) \sim g_{t_{k-1}+\tau_1}(R)$ and $P(t_k) \sim g_{t_{k-1}+\tau_1}(P)$
5:   for $t = t_k, t_k + 1, \ldots, t_k + \tau_1$ do
6:     pull each arm $i$ for $\tau_1/N$ times in a round-robin way
7:     receive the reward $r_t$
8:     update the belief $b^i_t$ using $R(t_k)$, $P(t_k)$ based on equation 4
9:     update the beliefs $b^j_t$, $j \in [N] \setminus \{i\}$, using $P(t_k)$ based on equation 5
10:  end for
11:  for $i = 1, 2, \ldots, N$ do
12:    input the obtained $r_{t_1:t_1+\tau_1}, \ldots, r_{t_k:t_k+\tau_1}$ and $b_{t_1:t_1+\tau_1}, \ldots, b_{t_k:t_k+\tau_1}$ to Algorithm 1 to update the posterior distributions $g_{t_k+\tau_1}(P)$, $g_{t_k+\tau_1}(R)$
13:  end for
14:  generate $P(t_k + \tau_1) \sim g_{t_k+\tau_1}(P)$, $R(t_k + \tau_1) \sim g_{t_k+\tau_1}(R)$
15:  for $i = 1, \ldots, N$ do
16:    re-update the belief $b^i_t$ from time $0$ to $t_k + \tau_1$ based on $R(t_k + \tau_1)$ and $P(t_k + \tau_1)$
17:  end for
18:  compute $\pi^*_k(\cdot) = \text{Oracle}(\cdot, R(t_k + \tau_1), P(t_k + \tau_1))$
19:  for $t = t_k + \tau_1 + 1, \cdots, t_{k+1} - 1$ do
20:    apply action $a_t = \pi^*_k(b_t)$
21:    observe the new reward $r_{t+1}$
22:    update the beliefs $b_t$ of all arms based on equations 4 and 5
23:  end for
24: end for
In the exploration phase of episode $k$, we first sample $\theta_{t_k}$ from the distributions $g_{t_{k-1}+\tau_1}(P)$ and $g_{t_{k-1}+\tau_1}(R)$. We pull each arm $\tau_1/N$ times in a round-robin way. For the pulled arm, we update its belief based on equation 4 using $\theta_{t_k}$; for the arms that are not pulled, we update their beliefs based on equation 5 using $\theta_{t_k}$. After the exploration phase, the reward and belief histories of each arm are fed into Algorithm 1 to update the posterior distributions. We then sample a new $\theta_{t_k+\tau_1}$ from the posterior and re-calibrate the belief $b_t$ based on this most recent estimate. Next, we enter the exploitation phase: we first derive the optimal policy $\pi_k$ for the sampled parameter $\theta_{t_k+\tau_1}$, and then use $\pi_k$ for the rest of episode $k$.
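The control flow of Algorithm 2 can be summarized by the following skeleton. All model-specific pieces (the posterior representation, the belief updates, the planning oracle, and the environment's `env_pull`) are passed in as callables and are assumptions of this sketch, since their concrete implementations are the components described in sections 4.1 and 4.2.

```python
import numpy as np

def tseetc(T, N, tau1, posterior, env_pull, sample_model, update_posterior,
           reset_beliefs, update_belief, oracle):
    """Control-flow skeleton of TSEETC (Algorithm 2)."""
    T1 = int(np.ceil(np.sqrt((T + 1) / 2)))     # first episode length
    history, t, k = [], 0, 1
    while t < T:
        Tk = T1 + k - 1                          # episode length grows by one per episode
        theta = sample_model(posterior)          # Thompson sample for the exploration phase
        beliefs = reset_beliefs(theta, history)
        for step in range(min(tau1, T - t)):     # exploration: round-robin pulls
            arm = step % N
            r = env_pull(arm)
            history.append((arm, r))
            beliefs = update_belief(beliefs, theta, arm, r)
            t += 1
        posterior = update_posterior(posterior, history)   # Algorithm 1 (mixture of Dirichlets)
        theta = sample_model(posterior)          # fresh sample for the exploitation phase
        beliefs = reset_beliefs(theta, history)  # re-calibrate beliefs from time 0
        policy = oracle(theta)                   # optimal policy for the sampled model
        for _ in range(min(Tk - tau1 - 1, T - t)):          # exploitation: commit to policy
            arm = policy(beliefs)
            r = env_pull(arm)
            history.append((arm, r))
            beliefs = update_belief(beliefs, theta, arm, r)
            t += 1
        k += 1
```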
We control the growth of the episode lengths in a deterministic manner. Specifically, the length of episode $k$ is exactly one more than that of episode $k-1$. With this deterministic schedule, the number of episodes $K_T$ is bounded by $O(\sqrt{T})$, as shown in Lemma 10. The regret caused by the exploration phases can then be bounded by $O(\sqrt{T})$, which is a crucial part of Theorem 1 (see the sanity check below).
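As a quick sanity check on the schedule $T_k = T_1 + k - 1$, the snippet below counts how many episodes fit into a horizon $T$ and compares the count with $\sqrt{T}$; the printed constant is illustrative only.

```python
import math

def num_episodes(T):
    """Count episodes under the schedule T_k = T_1 + k - 1 with T_1 = ceil(sqrt((T+1)/2))."""
    T1 = math.ceil(math.sqrt((T + 1) / 2))
    t, k = 0, 0
    while t < T:
        k += 1
        t += T1 + k - 1
    return k

for T in [10_000, 100_000, 1_000_000]:
    print(T, num_episodes(T), num_episodes(T) / math.sqrt(T))  # ratio stays roughly constant
```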
In TSEETC, we use the belief state to track the unknown, unobserved states. Moreover, under the unobserved state setting, we consider all possible state sequences and update the posterior distribution of the unknown parameters as a mixture of the corresponding Dirichlet distributions, where each sequence is weighted by its likelihood.
Remark 1. We use an Oracle to derive the optimal policy for the sampled parameters in Algorithm 2 (see the sketch after this remark). The Oracle can be the Bellman equation for the POMDP introduced in equation 2, or approximation methods (Pineau et al., 2003; Silver & Veness, 2010), etc. The approximation error is discussed in Remark 3.
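As one concrete stand-in for the Oracle, the sketch below uses a discounted QMDP-style approximation on the joint state space: it is not the paper's average-reward oracle and is only practical for a few arms with small state spaces, but it illustrates how a sampled model $(P, R)$ can be turned into a belief-to-arm policy. All names and the choice of QMDP are assumptions of this sketch.

```python
import numpy as np
from itertools import product

def qmdp_oracle(P_list, R_list, reward_vals, gamma=0.99, iters=500):
    """Crude QMDP-style planner: value iteration on the joint (fully observed) MDP,
    then greedy action with respect to the belief-weighted Q-values.

    P_list[i]: (S, S) transition matrix of arm i; R_list[i]: (S, n_r) reward
    probabilities of arm i; reward_vals: numeric values of the reward set.
    """
    N, S = len(P_list), P_list[0].shape[0]
    states = list(product(range(S), repeat=N))           # joint hidden states
    idx = {s: j for j, s in enumerate(states)}
    n = len(states)
    # expected one-step reward of pulling arm a in joint state s
    r = np.array([[R_list[a][s[a]] @ reward_vals for a in range(N)] for s in states])
    # joint transition matrix (all arms move; the action only affects the reward)
    T = np.zeros((n, n))
    for s in states:
        for s2 in states:
            T[idx[s], idx[s2]] = np.prod([P_list[i][s[i], s2[i]] for i in range(N)])
    Q = np.zeros((n, N))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = r + gamma * (T @ V)[:, None]
    def policy(beliefs):                                  # beliefs: list of (S,) vectors
        joint_b = np.array([np.prod([beliefs[i][s[i]] for i in range(N)]) for s in states])
        return int(np.argmax(joint_b @ Q))
    return policy
```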
5 PERFORMANCE ANALYSIS
In section 5.1, we show our theoretical results and some discussions. In section 5.2, we provide a proof sketch and the detailed proof is in Appendix B.
5.1 REGRET BOUND AND DISCUSSIONS
Theorem 1. Suppose Assumptions 1 and 2 hold and the Oracle returns the optimal policy in each episode. The Bayesian regret of our algorithm satisfies
$$ R_T \le 48 C_1 C_2 S \sqrt{NT \log(NT)} + (\tau_1 \Delta R + H + 4 C_1 C_2 S N) \sqrt{T} + C_1 C_2, $$
where $C_1 = L_1 + L_2 N + N^2 + S^2$ and $C_2 = r_{\max} + H$ are constants independent of the time horizon $T$, $L_1 = \frac{4(1-\epsilon_1)^2}{N \epsilon_1^2 \epsilon_2}$, $L_2 = \frac{4(1-\epsilon_1)^2}{\epsilon_1^3}$, and $\epsilon_1$ and $\epsilon_2$ are the minimum elements of $P^*$ and $R^*$, respectively. Here $\tau_1$ is the fixed exploration length in each episode, $\Delta R$ is the largest gap between rewards obtained at any two different times, $H$ is the bound on the span, $r_{\max}$ is the maximum per-step reward, $N$ is the number of arms, and $S$ is the state-space size of each arm.
Remark 2. Theorem 1 shows that the regret of TSEETC is upper bounded by $\tilde{O}(\sqrt{T})$. This is the first bound that matches the lower bound for restless bandit problems (Ortner et al., 2012) in such an unobserved state setting. Although TSEETC looks similar to explore-then-commit (Lattimore & Szepesvári, 2020), a key novelty of TSEETC lies in using posterior sampling to update the posterior distribution of the unknown parameters as a mixture of the corresponding Dirichlet distributions. Our algorithm balances exploration and exploitation in a deterministic-episode manner and ensures the episode length grows at a linear rate, which guarantees that the total number of episodes is bounded by $O(\sqrt{T})$. Therefore the total regret caused by exploration is well controlled by $O(\sqrt{T})$, which improves on the bound $O(T^{2/3})$ in Zhou et al. (2021). Moreover, in the exploitation phase, our regret bound $\tilde{O}(\sqrt{T})$ is also better than the $\tilde{O}(T^{2/3})$ of Zhou et al. (2021). This shows that our posterior-sampling-based method is superior to the UCB-based solution (Osband & Van Roy, 2017). In Jahromi et al. (2022), the pseudo count of state-action pairs is always smaller than the true count with some probability at any time. In our algorithm, however, the sampled parameters concentrate on the true values as the posterior is updated, so our pseudo count (defined in equation 13), which is based on the belief, approximates the true count more closely, which helps us obtain a tighter bound.
5.2 PROOF SKETCH
In our algorithm, the total regret can be decomposed as follows:
$$ R_T = \underbrace{\mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_k+\tau_1} J(\theta^*) - r_t \right]}_{\text{Regret (A)}} + \underbrace{\mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} J(\theta^*) - r_t \right]}_{\text{Regret (B)}}. \tag{8} $$
Bounding Regret (A). Regret (A) is the regret incurred in the exploration phase of each episode. This term can be simply bounded as follows:
$$ \text{Regret (A)} \le \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \tau_1 \Delta R \right] \le \tau_1 \Delta R\, K_T, \tag{9} $$
where $\Delta R = r_{\max} - r_{\min}$ is the largest gap between rewards received at any two different times. The regret in equation 9 scales with the number of episodes $K_T$, which is bounded by $O(\sqrt{T})$ in Lemma 10.
Bounding Regret (B). Next we bound Regret (B), the regret incurred in the exploitation phases. Define $\hat{b}_t$ as the belief updated with parameter $\theta_k$ and $b^*_t$ as the belief updated with $\theta^*$. During episode $k$, using equation 2 for the sampled parameter $\theta_k$ and the fact that $a_t = \pi^*(\hat{b}_t)$, we can write:
$$ J(\theta_k) + v(\hat{b}_t, \theta_k) = r(\hat{b}_t, a_t) + \sum_{r} P(r \mid \hat{b}_t, a_t, \theta_k)\, v(b', \theta_k). \tag{10} $$
With this equation, we proceed by decomposing the regret as:
$$ \text{Regret (B)} = R_1 + R_2 + R_3 + R_4, \tag{11} $$
where each term is defined as follows:
$$ R_1 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ (T_k - \tau_1 - 1)\big(J(\theta^*) - J(\theta_k)\big) \right], $$
$$ R_2 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \big( v(\hat{b}_{t+1}, \theta_k) - v(\hat{b}_t, \theta_k) \big) \right], $$
$$ R_3 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \Big( \sum_{r} P\big(r \mid \hat{b}_t, a_t, \theta_k\big)\, v(b', \theta_k) - v(\hat{b}_{t+1}, \theta_k) \Big) \right], $$
$$ R_4 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \big( r(\hat{b}_t, a_t) - r(b^*_t, a_t) \big) \right]. $$
Bounding $R_1$. One key property of posterior sampling algorithms is that, given the history $\mathcal{H}_{t_k}$, the true parameter $\theta^*$ and the sampled $\theta_k$ are identically distributed at time $t_k$, as stated in Lemma 13. Since the length $T_k$ is deterministic and independent of $\theta_k$, $R_1$ is zero thanks to this key property.
Bounding $R_2$. The regret $R_2$ is a telescoping sum of value functions and can be bounded as $R_2 \le H K_T$. It depends only on the number of episodes and the upper bound $H$ on the span function. As a result, $R_2$ reduces to a bound on the number of episodes $K_T$, which is given in Lemma 10.
Bounding $R_3$ and $R_4$. The regret terms $R_3$ and $R_4$ are related to the estimation error of $\theta$, so we must bound the parameter error, especially in our unobserved state setting. Recalling the definitions of $\phi$ and $\psi$, we define the posterior means $\hat{P}^i(s' \mid s)$ and $\hat{R}^i(r \mid s)$ for arm $i$ at time $t$ as follows:
$$ \hat{P}^i(s' \mid s)(t) = \frac{\epsilon_1 + (1-\epsilon_1)\phi^i_{s,s'}(t)}{S\epsilon_1 + (1-\epsilon_1)\|\phi^i_{s,\cdot}(t)\|_1}, \qquad \hat{R}^i(r \mid s)(t) = \frac{\epsilon_2 + (1-\epsilon_2)\psi^i_{s,r}(t)}{S\epsilon_2 + (1-\epsilon_2)\|\psi^i_{s,\cdot}(t)\|_1}. \tag{12} $$
We also define the pseudo count of the state-action pair $(s,a)$ before episode $k$ as
$$ N^i_{t_k}(s,a) = \|\psi^i_{s,\cdot}(t_k)\|_1 - \|\psi^i_{s,\cdot}(0)\|_1, \tag{13} $$
where $\psi^i_{s,\cdot}(t_k)$ represents the count of the state-action pair $z = (s,a)$ before episode $k$. Let $\mathcal{M}^i_k$ be the set of plausible MDPs in episode $k$ with reward function $R(r \mid z)$ and transition function $P(s' \mid z)$ satisfying
$$ \sum_{s' \in \mathcal{S}} \big| P(s' \mid z) - \hat{P}^i_k(s' \mid z) \big| \le \beta^i_k(z), \qquad \sum_{r \in \mathcal{R}} \big| R(r \mid z) - \hat{R}^i_k(r \mid z) \big| \le \beta^i_k(z), \tag{14} $$
where $\beta^i_k(s,a) := \sqrt{\frac{14 S \log(2 N t_k T)}{\max\{1, N^i_{t_k}(s,a)\}}}$ is chosen conservatively (Auer et al., 2008) so that $\mathcal{M}^i_k$ contains $P^i_*$ and $P^i_k$, $R^i_*$ and $R^i_k$ with high probability. $P^i_*$ and $R^i_*$ are the true parameters as defined in section 4.1. In particular, for the unobserved state setting, the belief error under different parameters is upper bounded by the gap between the estimators, as shown in Proposition 2. The core of the proof then lies in deriving a high-probability confidence set with our pseudo counts and showing that the estimation error accumulated up to time $T$ for each arm is bounded by $\sqrt{T}$. With the error bound for each arm, we derive the final error bound for the MDP aggregated over all arms, as stated in Lemma 2.
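The quantities in equations 12-14 are simple functions of the Dirichlet counts. A minimal sketch is given below; the argument names (in particular `psi0_norm` for the prior mass $\|\psi^i_{s,\cdot}(0)\|_1$) are illustrative assumptions.

```python
import numpy as np

def posterior_mean_and_width(phi_row, psi_row, psi0_norm, eps1, eps2, S, t_k, N, T):
    """Posterior means (equation 12), pseudo count (equation 13) and confidence
    width beta^i_k (below equation 14) for a single state-action pair.

    phi_row, psi_row: current Dirichlet count vectors phi^i_{s,.}(t_k), psi^i_{s,.}(t_k).
    """
    P_hat = (eps1 + (1 - eps1) * phi_row) / (S * eps1 + (1 - eps1) * phi_row.sum())
    R_hat = (eps2 + (1 - eps2) * psi_row) / (S * eps2 + (1 - eps2) * psi_row.sum())
    pseudo_count = psi_row.sum() - psi0_norm               # equation 13
    beta = np.sqrt(14 * S * np.log(2 * N * t_k * T) / max(1.0, pseudo_count))
    return P_hat, R_hat, pseudo_count, beta
```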
Lemma 2. (Estimation errors) The total estimation error of the transition functions accumulated over all exploitation phases satisfies the following bound:
$$ \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \|P^* - P_k\|_1 \right] \le 48 S N \sqrt{NT \log(NT)} + 4 S N^2 \sqrt{T} + N. \tag{15} $$
Lemma 2 shows that the accumulated error is bounded by $O(\sqrt{T})$, which is crucial for obtaining the final bound, as in the observed-state setting (Ortner et al., 2012; Jung & Tewari, 2019). With $C_1 = L_1 + L_2 N + N^2 + S^2$, we state the final bounds on $R_3$ and $R_4$ below; the detailed proofs are in Appendices B.3 and B.4.
Lemma 3. $R_3$ satisfies the following bound:
$$ R_3 \le 48 C_1 S H \sqrt{NT \log(NT)} + 4 C_1 S N H \sqrt{T} + C_1 H. $$
Lemma 4. $R_4$ satisfies the following bound:
$$ R_4 \le 48 C_1 S r_{\max} \sqrt{NT \log(NT)} + 4 C_1 S N r_{\max} \sqrt{T} + C_1 r_{\max}. $$
6 NUMERICAL EXPERIMENTS
In this section, we present proof-of-concept experiments with an approximate implementation of TSEETC. We consider two arms, each with two hidden states, and pull exactly one arm at each time. The learning horizon is T = 50000, and each algorithm is run for 100 iterations. The transition functions and reward functions are the same for all arms. We initialize the algorithm with an uninformative Dirichlet prior on the unknown parameters. We compare our algorithm with the simple heuristic ϵ-greedy (Lattimore & Szepesvári, 2020) with ϵ = 0.01, Sliding-Window UCB (Garivier & Moulines, 2011) with a specified window size, RUCB (Liu et al., 2010), Q-learning (Hu et al., 2020), and SEEU (Zhou et al., 2021). The results are shown in Figure 1: TSEETC achieves the smallest regret among these algorithms.
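A minimal version of the simulated environment can look as follows. The specific transition and reward values are illustrative placeholders, since the paper does not list them in this section; only the structure (two arms, two hidden states, all arms evolving at every step) follows the setup above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arms, two hidden states each; identical dynamics across arms (as in the paper).
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])        # P[s, s'] with all entries > 0 (Assumption 1)
R = np.array([[0.9, 0.1],
              [0.1, 0.9]])        # R[s, r]: P(reward index r | state s) (Assumption 2)
reward_vals = np.array([0.0, 1.0])

class TwoArmEnv:
    def __init__(self):
        self.states = rng.integers(0, 2, size=2)   # hidden states of the two arms
    def pull(self, arm):
        r_idx = rng.choice(2, p=R[self.states[arm]])
        # every arm transitions, whether it is pulled or not
        self.states = np.array([rng.choice(2, p=P[s]) for s in self.states])
        return reward_vals[r_idx]
```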
In Figure 2, we plot the cumulative regret versus T of the six algorithms on a log-log scale. We observe that the slopes of all algorithms except our TSEETC and SEEU are close to one, suggesting that they incur linear regret. Moreover, the slope of TSEETC is close to 0.5, which is better than that of SEEU. This is consistent with our theoretical result.
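The slopes reported for Figure 2 can be estimated by a least-squares fit in log-log space; in the sketch below, `cum_regret` is assumed to be the averaged cumulative-regret curve of one algorithm.

```python
import numpy as np

def loglog_slope(cum_regret, burn_in=1000):
    """Fit log(regret) ~ slope * log(t) + c on the tail of the curve; a slope near
    0.5 is consistent with sqrt(T) regret, a slope near 1 with linear regret."""
    t = np.arange(1, len(cum_regret) + 1)[burn_in:]
    y = np.asarray(cum_regret)[burn_in:]
    slope, _ = np.polyfit(np.log(t), np.log(np.maximum(y, 1e-8)), 1)
    return slope
```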
7 CONCLUSION
In this paper, we considered the restless bandit problem with unobserved states and unknown dynamics. We proposed the TSEETC algorithm to estimate the unknown parameters and derive the optimal policy. We also established a Bayesian regret bound of Õ(√T) for our algorithm, the first bound that matches the lower bound for restless bandit problems with unobserved states. Numerical results validate that TSEETC outperforms other learning algorithms in regret. A related open question is whether our method can be applied to the setting where the transition functions are action dependent; we leave this for future research.
A TABLE OF NOTATIONS
Notation          Description
T                 The length of the horizon
K_T               The number of episodes up to time T
T_k               The length of episode k
τ1                The fixed exploration length in each episode
P^i               The transition function of arm i
R^i               The reward function of arm i
P_k               The sampled transition function of the aggregated MDP
R_k               The sampled reward function of the aggregated MDP
r_t               The reward obtained at time t
b^i_t(s, θ)       The belief of arm i being in state s at time t under parameter θ
b̂_t               The belief of all arms at time t under parameter θ_k
b^*_t             The belief of all arms at time t under parameter θ^*
a_t               The action at time t
J(θ_k)            The optimal long-term average reward under parameter θ_k
r_max             The maximum reward obtained at each time
r_min             The minimum reward obtained at each time
ΔR                The largest gap between obtained rewards
B PROOF OF THEOREM 1
Recall that our goal is to minimize the regret:
$$ R_T := \mathbb{E}_{\theta^*}\left[ \sum_{t=1}^{T} \big(J(\theta^*) - r_t\big) \right]. \tag{16} $$
Since $r_t$ depends on the state $s_t$ and the action $a_t$, it can be written as $r(s_t, a_t)$. Because $\mathbb{E}_{\theta^*}[r(s_t, a_t) \mid \mathcal{H}_{t-1}] = r(b^*_t, a_t)$ for any $t$, we have
$$ R_T := \mathbb{E}_{\theta^*}\left[ \sum_{t=1}^{T} \big(J(\theta^*) - r(b^*_t, a_t)\big) \right]. \tag{17} $$
In our algorithm, each episode is split into an exploration phase and an exploitation phase, so we can rewrite the regret as:
$$ R_T = \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_k+\tau_1} \big(J(\theta^*) - r(b^*_t, a_t)\big) + \sum_{k=1}^{K_T} \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \big(J(\theta^*) - r(b^*_t, a_t)\big) \right], \tag{18} $$
where $\tau_1$ is the (constant) exploration length of each episode and $t_k$ is the start time of episode $k$. We define the first part as Regret (A), which is caused by the exploration phases; the other part is Regret (B):
$$ \text{Regret (A)} = \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k}^{t_k+\tau_1} \big(J(\theta^*) - r(b^*_t, a_t)\big) \right], \qquad \text{Regret (B)} = \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \big(J(\theta^*) - r(b^*_t, a_t)\big) \right]. $$
Recall that the reward set is $\mathcal{R}$ and define the maximum reward gap in $\mathcal{R}$ as $\Delta R = r_{\max} - r_{\min}$. Then $J(\theta^*) - r(b^*_t, a_t) \le \Delta R$, and Regret (A) can be simply upper bounded as follows:
$$ \text{Regret (A)} \le \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \tau_1 \Delta R \right] \le \tau_1 \Delta R\, K_T. $$
Regret (A) clearly scales with the number of episodes $K_T$, which is bounded in Lemma 10. Next we bound the term Regret (B).
During episode $k$, based on equation 2, we get:
$$ J(\theta_k) + v(\hat{b}_t, \theta_k) = r(\hat{b}_t, a_t) + \sum_{r} P(r \mid \hat{b}_t, a_t, \theta_k)\, v(b', \theta_k), \tag{19} $$
where $J(\theta_k)$ is the optimal long-term average reward when the system parameter is $\theta_k$, $\hat{b}_t$ is the belief at time $t$ updated with parameter $\theta_k$, $r(\hat{b}_t, a_t)$ is the expected reward when action $a_t$ is taken under the current belief $\hat{b}_t$, and $b'$ is the belief updated via equation 4 with parameter $\theta_k$ when reward $r$ is received.
Using this equation, we proceed by decomposing the regret as:
$$ \text{Regret (B)} = R_1 + R_2 + R_3 + R_4, \tag{20} $$
where
$$ R_1 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ (T_k - \tau_1 - 1)\big(J(\theta^*) - J(\theta_k)\big) \right], \qquad R_2 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \big( v(\hat{b}_{t+1}, \theta_k) - v(\hat{b}_t, \theta_k) \big) \right], $$
$$ R_3 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \Big( \sum_{r} P(r \mid \hat{b}_t, a_t, \theta_k)\, v(b', \theta_k) - v(\hat{b}_{t+1}, \theta_k) \Big) \right], \qquad R_4 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \big( r(\hat{b}_t, a_t) - r(b^*_t, a_t) \big) \right]. $$
Next we bound the four parts one by one.
B.1 BOUND R1
Lemma 5. R1 satisfies that R1 = 0.
Proof. Recall that:
$$ R_1 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ (T_k - \tau_1 - 1)\big(J(\theta^*) - J(\theta_k)\big) \right]. $$
For each episode, $T_k$ is deterministic and independent of $\theta_k$. Based on Lemma 13, we know that
$$ \mathbb{E}_{\theta^*}[J(\theta^*)] = \mathbb{E}_{\theta^*}[J(\theta_k)]. $$
Therefore, $R_1 = 0$.
B.2 BOUND R2
Lemma 6. R2 satisfies the following bound
R2 ≤ HKT , where KT is the total number of episodes until time T .
Proof. Recall that $R_2$ is the telescoping sum of value functions at times $t+1$ and $t$:
$$ R_2 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \big( v(\hat{b}_{t+1}, \theta_k) - v(\hat{b}_t, \theta_k) \big) \right]. \tag{21} $$
Summing over the whole of episode $k$, $R_2$ can be rewritten as:
$$ R_2 = \mathbb{E}_{\theta^*} \sum_{k=1}^{K_T} \left[ v(\hat{b}_{t_{k+1}}, \theta_k) - v(\hat{b}_{t_k+\tau_1+1}, \theta_k) \right]. $$
Since the span of $v(b, \theta)$ is bounded by $H$ (Proposition 1), we obtain the final bound
$$ R_2 \le H K_T. $$
B.3 BOUND R3
In this section, we first rewrite the R3 in section B.3.1. In section B.3.2, we show the details about how to bound R3.
B.3.1 REWRITE R3
Lemma 7. (Rewriting $R_3$) The regret $R_3$ can be bounded as follows:
$$ R_3 \le H\, \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \|P^* - P_k\|_1 \right] + H\, \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \|b^*_t - \hat{b}_t\|_1 \right] + S^2 H\, \mathbb{E}_{\theta^*}\left[ \sum_{k=1}^{K_T} \sum_{t=t_k+\tau_1+1}^{t_{k+1}-1} \|R^* - R_k\|_1 \right], $$
where $P_k$ and $R_k$ are the sampled transition and reward functions in episode $k$, $b^*_t$ is the belief at time $t$ updated with the true $P^*$ and $R^*$, and $\hat{b}_t$ is the belief at time $t$ updated with the sampled $P_k$ and $R_k$.
Proof. Most of the proof is similar to Jahromi et al. (2022), except that we must additionally handle the unknown reward functions.
Recall that R3 = Eθ∗ ∑kT k=1 [∑tk+1−1 t=tk+τ1+1 (∑ r P (r | b̂t, at, θk)v (b′, θk)− v(b̂t+1, θk) )] .
Recall that Ht is the history of actions and observations prior to action at. Conditioned on Ht, θ∗ and θk, the only random variable in b̂t+1 is rt+1, then we can get,
Eθ∗ [ v(b̂t+1, θk) | Ht, θk ] = ∑ r∈R v (b′, θk)P (r | b∗t , at, θ∗), (22)
where P (r | b∗t , at, θ∗) is the probability of getting reward r given b∗t , at, θ∗. By the law of probability, P (r | b∗t , at, θ∗) can be written as follows,
P (r | b∗t , at, θ∗) = ∑ s′ R∗ (r | s′)P (st+1 = s′ | Ht, θ∗)
= ∑ s′ R∗ (r | s′) ∑ s P ∗ (st+1 = s ′ | st = s,Ht, at, θ∗)P (st = s | Ht, θ∗)
= ∑ s ∑ s′ b∗t (s)P ∗ (s′ | s)R∗ (r | s′) ,
(23) where P ∗ is the transition functions for the MDP aggregated by all arms, R∗ is the reward function for the aggregated MDP. Therefore, we can rewrite the R3 as follows,
R3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r∈R (P (r | b̂t, at, θk)− P (r | b∗t , at, θ∗)v (b′, θk) )] .
Based on equation 23, we get R3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)R ∗ (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )]
= Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] + Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] − Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)R ∗ (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] .
(24)
where Rk is the sampled reward function for aggregated MDP, Pk is the sampled transition function for aggregated MDP.
Define R′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) [∑ s b̂t(s)Pk(s ′ | s)− ∑ s b∗t (s)P ∗(s′ | s) ])] ,
R′′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk) [Rk (r | s′)−R∗ (r | s′)] ∑ s b∗t (s)P ∗(s′ | s) )] .
Bounding R′3. The part R′3 can be bounded as Jahromi et al. (2022). R′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) [∑ s b̂t(s)Pk(s ′ | s)− ∑ s b∗t (s)P ∗(s′ | s)
])] = R′3(0) +R ′ 3(1)
where R′3(0) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
R′3(1) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )]
For R′3(0), because ∑ r Rk (r | s′) = 1, ∑ s′ Pk(s ′ | s) = 1,v (b′, θk) ≤ H , we have
R′3(0) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
= Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s (b̂t(s)− b∗t (s)Pk(s′ | s) )]
≤ Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s |b̂t(s)− b∗t (s)|Pk(s′ | s) )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s |b̂t(s)− b∗t (s)| )]
= HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∥∥∥b̂t(s)− b∗t (s)∥∥∥ 1 )] ,
where the first inequality is due to b̂t(s) − b∗t (s) ≤ |b̂t(s) − b∗t (s)| and the second inequality is because ∑ r Rk (r | s′) = 1, ∑ s′ Pk(s
′ | s) = 1, v (b′, θk) ≤ H . For the first term inR′3(1) , note that conditioned onHt, θ∗, the distribution of st is b∗t . Furthermore, at is measurable with respect to the sigma algebra generated byHt, θk since at = π∗(b̂t, θk). Thus, we have
Eθ∗ [ v (b′, θk)
∑ s P ∗ (s′ | s) b∗(s) | Ht, θk
] = v (b′, θk)Eθ∗ [P ∗ (s′ | s) | Ht, θk] . (25)
Eθ∗ [ v (b′, θk)
∑ s Pk (s ′ | s) b∗(s) | Ht, θk
] = v (b′, θk)Eθ∗ [Pk (s′ | s) | Ht, θk] . (26)
Substitute equation 25, equation 26 into R′3(1), we have
R′3(1) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) (Pk(s′ | s)− P ∗(s′ | s)) )]
≤ Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) |Pk(s′ | s)− P ∗(s′ | s)| )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s′ |Pk(s′ | s)− P ∗(s′ | s)| )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∥Pk − P ∗∥1) ] ,
where the first inequality is because Pk(s′ | s)− P ∗(s′ | s) ≤ |Pk(s′ | s)− P ∗(s′ | s)|, the second inequality is due to v (b′, θk) ≤ H and ∑ r Rk (r | s′) = 1.
Therefore we obtain the final results,
R′3 ≤ HE [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||P ∗ − Pk||1 ] +HE [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||b∗t − b̂t||1 ] .
Bounding R′′3 . For part R′′3 , note that for any fixed s′, ∑ s b ∗ t (s)P
∗(s′ | s) ≤ S, therefore we can bound R′′3 as follows,
R′′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk) [Rk (r | s′)−R∗ (r | s′)] ∑ s b∗t (s)P ∗(s′ | s) )]
≤ SHEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s′ ∑ r [Rk (r | s′)−R∗ (r | s′)] )]
≤ SHEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 S ∥Rk −R∗∥1 ]
≤ S2HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥Rk −R∗∥1 ] ,
(27) where the first inequality is due to v (b′, θk) ≤ H and ∑ s b ∗ t (s)P
∗(s′ | s) ≤ S , the second inequality is due to for any fixed s′, ∑ r [Rk (r | s′)−R∗ (r | s′)] ≤ ∥Rk −R∗∥1.
B.3.2 BOUND R3
Lemma 8. R3 satisfies the following bound R3 ≤ 48(L1 + L2N +N + S2)SH √ NT log(NT ) + (L1 + L2N +N + S 2)H
+ 4(L1 + L2N +N 2 + S2)SNH(T1 +KT − τ1 − 1).
Proof. Recall that the R3 is as follows:
R3 = Eθ∗ kT∑ k=1
[ tk+1−1∑
t=tk+τ1+1
(∑ r P [r | b̂t, at, θk]v (b′, θk)− v(b̂t+1, θk) )] .
These regret terms deal with the model estimation errors; that is, they depend on the on-policy error between the sampled and true transition functions, and between the sampled and true reward functions. Thus we must bound the parameter error, especially in our unobserved state setting. Based on the parameters of our Dirichlet distributions, we define the empirical estimates of the reward and transition functions for arm $i$ as follows:
$$ \hat{P}^i(s' \mid s)(t) = \frac{\epsilon_1 + (1-\epsilon_1)\phi^i_{s,s'}(t)}{S\epsilon_1 + (1-\epsilon_1)\|\phi^i_{s,\cdot}(t)\|_1}, \qquad \hat{R}^i(r \mid s)(t) = \frac{\epsilon_2 + (1-\epsilon_2)\psi^i_{s,r}(t)}{S\epsilon_2 + (1-\epsilon_2)\|\psi^i_{s,\cdot}(t)\|_1}, \tag{28} $$
where $\phi^i_{s,s'}(t)$ are the parameters of the posterior distribution of $P^i$ at time $t$ and $\psi^i_{s,r}(t)$ are the parameters of the posterior distribution of $R^i$ at time $t$. We also define the pseudo count $N^i_{t_k}(s,a)$ of the state-action pair $(s,a)$ before episode $k$ for arm $i$ as
N itk(s, a) = ∥∥ψis,·(tk)∥∥1 − ∥∥ψis,·(0)∥∥1 .
For notational simplicity, we use z = (s, a) ∈ S ×A and zt = (st, at) to denote the corresponding state-action pair. Then based on Lemma 7 we can decompose the R3 as follows,
R3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r P [r | b̂t, at, θk]v (b′, θk)− v(b̂t+1, θk) )]
= Eθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 [∑ r ( P (r | b̂t, at, θk)− P (r | b∗t , at, θ∗) ) v(b′, θk) ]] ≤ R03 +R13 +R23
where
R03 = HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥P ∗ − Pk∥1 ] ,
R13 = HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||b∗t − b̂t||1 ] ,
R23 = S 2HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥R∗ −Rk∥1 ] .
Note that the following results are all focused on one arm. Define P i∗ is the true transition function for arm i, P ik is the sampled transition function for arm i. We can extend the results on a arm to the aggregated large MDP based on Lemma 11.
Bounding $R_3^0$. Since $0 \le v(b', \theta_k) \le H$ by our assumption, each term in the inner summation is bounded by
$$ \sum_{s' \in \mathcal{S}} \big| P^i_*(s' \mid z_t) - P^i_k(s' \mid z_t) \big|\, v(s', \theta_k) \le H \sum_{s' \in \mathcal{S}} \big| P^i_*(s' \mid z_t) - P^i_k(s' \mid z_t) \big| \le H \sum_{s' \in \mathcal{S}} \big| P^i_*(s' \mid z_t) - \hat{P}^i_k(s' \mid z_t) \big| + H \sum_{s' \in \mathcal{S}} \big| P^i_k(s' \mid z_t) - \hat{P}^i_k(s' \mid z_t) \big|, $$
where $P^i_*(s' \mid z_t)$ is the true transition function, $P^i_k(s' \mid z_t)$ is the sampled transition function, and $\hat{P}^i_k(s' \mid z_t)$ is the posterior mean. The second inequality above is due to the triangle inequality. Let $\mathcal{M}^i_k$ be the set of plausible MDPs in episode $k$ with reward function $R(r \mid z)$ and transition function $P(s' \mid z)$ satisfying
$$ \sum_{s' \in \mathcal{S}} \big| P(s' \mid z) - \hat{P}^i_k(s' \mid z) \big| \le \beta^i_k(z), \qquad \sum_{r \in \mathcal{R}} \big| R(r \mid z) - \hat{R}^i_k(r \mid z) \big| \le \beta^i_k(z), $$
where $\beta^i_k(s,a) := \sqrt{\frac{14 S \log(2 N t_k T)}{\max\{1, N^i_{t_k}(s,a)\}}}$ is chosen conservatively (Auer et al., 2008) so that $\mathcal{M}^i_k$ contains $P^i_*$ and $P^i_k$, $R^i_*$ and $R^i_k$ with high probability. $P^i_*$ and $R^i_*$ are the true parameters defined in section 4.1. Note that $\beta^i_k(z)$ corresponds to a confidence set with $\delta = 1/t_k$. Recalling the definition of $\psi$, the pseudo count of the state-action pair $(s,a)$ is $N^i_{t_k}(s,a) = \|\psi^i_{s,\cdot}(t_k)\|_1 - \|\psi^i_{s,\cdot}(0)\|_1$. Then we obtain
$$ \sum_{s' \in \mathcal{S}} \big| P^i_*(s' \mid z_t) - \hat{P}^i_k(s' \mid z_t) \big| + \sum_{s' \in \mathcal{S}} \big| P^i_k(s' \mid z_t) - \hat{P}^i_k(s' \mid z_t) \big| \le 2\beta^i_k(z_t) + 2\big( \mathbb{I}_{\{P^i_* \notin B_k\}} + \mathbb{I}_{\{P^i_k \notin B_k\}} \big). \tag{29} $$
We assume the last episode is the longest. Even if this assumption does not hold, we can enlarge the summands to $T_{K_T - 1} - \tau_1$, which does not affect the order of our regret bound. Under this assumption, since no episode is longer than the last one, i.e., $t_{k+1} - 1 - (t_k + \tau_1) \le T_{K_T} - \tau_1$, we obtain
KT∑ k=1 tk+1−1∑ t=tk+τ1 βik (zt) ≤ KT∑ k=1 TkT −τ1∑ t=1 βik (zt) . (30)
Note that $\sum_{s' \in \mathcal{S}} \big| P^i_*(s' \mid z_t) - \hat{P}^i_k(s' \mid z_t) \big| \le 2$ always holds. With our assumption $\tau_1 \le \frac{T_1 + K_T - 1}{2}$, it is easy to show that $\beta^i_k(z_t) \le 2$ whenever $N^i_{t_k} \ge T_{K_T} - \tau_1$. Then we obtain
KT∑ k=1 TkT −τ1∑ t=1 min{2, βik (zt)} ≤ KT∑ k=1 TkT −τ1∑ t=1 2I(N itk < TkT − τ1)
+ KT∑ k=1 TkT −τ1∑ t=1 I(N itk ≥ TkT − τ1)
√ 14S log (2NtkT )
max ( 1, N itk (zt)
) . (31)
Consider the first part in equation 31. Obviously, the maximum of $N^i_{t_k}$ is $T_{K_T} - \tau_1$. Since there are $SA$ state-action pairs in total, the first part in equation 31 can be bounded as $\sum_{k=1}^{K_T} \sum_{t=1}^{T_{K_T}-\tau_1} 2\,\mathbb{I}(N^i_{t_k} < T_{K_T} - \tau_1) \le 2(T_{K_T} - \tau_1) S A$. Since $T_{K_T} = T_1 + K_T - 1$ and by Lemma 10, we get $2(T_{K_T} - \tau_1) S A = 2(T_1 + K_T - \tau_1 - 1) S A = O(\sqrt{T})$.
Consider the second part in equation 31. Denote by $N^i_t(s,a)$ the count of $(s,a)$ before time $t$ (not including $t$). Since we only count the exploration phase of each episode, $N^i_t(s,a)$ can be calculated as follows:
N it (s, a) = ∣∣{τ < t, τ ∈ [tk, tk + τ1], k ≤ k(t) : (siτ , aiτ) = (s, a)}∣∣ ,
where k(t) is the episode number where the time t is in.
For the second part in equation 31, when $N^i_{t_k} \ge T_{K_T} - \tau_1$, our assumption $\tau_1 \le \frac{T_1 + K_T - 1}{2}$ gives
$$ 2\tau_1 \le T_1 + K_T - 1 = T_{K_T}, $$
and therefore $T_{K_T} - \tau_1 \ge \tau_1$. Because $N^i_{t_k} \ge T_{K_T} - \tau_1$, we have $N^i_{t_k}(s,a) \ge \tau_1$. For any $t \in [t_k, t_k + \tau_1]$, we have $N^i_t(s,a) \le N^i_{t_k}(s,a) + \tau_1 \le 2 N^i_{t_k}(s,a)$.
Therefore $N^i_t(s,a) \le 2 N^i_{t_k}(s,a)$. Next we bound the confidence width when $N_t(s,a) \le 2 N_{t_k}(s,a)$ as follows:
KT∑ k=1 TkT −τ1∑ t=1 βik (zt) ≤ KT∑ k=1 tk+1−1∑ t=tk
√ 14S log (2NtkT )
max ( 1, N itk (zt) ) ≤
KT∑ k=1 tk+1−1∑ t=tk
√ 14S log (2NT 2)
max ( 1, N itk (zt) ) =
T∑ t=1
√ 28S log (2NT 2)
max ( 1, N it (zt) ) ≤ √ 56S log(2NT )
T∑ t=1 1√ max ( 1, N it (zt) ) .
(32)
where the second inequality in equation 32 is due to tk ≤ T for all episodes and the first equality is due to N it (s, a) ≤ 2N itk(s, a).
Then similar to Ouyang et al. (2017), since N it (zt) is the count of visits to zt, we have T∑ t=1 1√ max ( 1, N it (zt) ) =∑ z T∑ t=1 I{zt=z}√ max ( 1, N it (z)
) = ∑ z I{NiT+1(z)>0} + NiT+1(z)−1∑ j=1 1√ j
≤ ∑ z ( I{NiT+1(z)>0} + 2 √ N iT+1(z) ) ≤ 3 ∑ z √ N iT+1(z).
Since ∑ z N i T+1(z) ≤ T , we have
3 ∑ z √ N iT+1(z) ≤ 3 √ SN ∑ z N iT+1(z) = 3 √ SNT . (33)
With equation 32 and equation 33 we get
2H KT∑ k=1 tk+1−1∑ t=tk βik (zt) ≤ 6 √ 56HS √ NT log(NT ) ≤ 48HS √ NT log(NT ).
Then we can bound the equation 30 as follows,
KT∑ k=1 tk+1−1∑ t=tk βik (zt) ≤ 24S √ NT log(NT ) + 2SA(T1 +KT − τ1 − 1). (34)
Choosing $\delta = 1/T$ in Lemma 12 and using Lemma 13, we obtain
P ( P ik /∈ Bk ) = P ( P i∗ /∈ Bk ) ≤ 1
15Tt6k .
Then we can obtain, 2Eθ∗ [ KT∑ k=1 Tk ( I{θ∗ /∈Bk} + I{θk /∈Bk} )] ≤ 4 15 ∞∑ k=1 t−6k ≤ 4 15 ∞∑ k=1 k−6 ≤ 1. (35)
Therefore we obtain
2HEθ∗ [ KT∑ k=1 Tk ( I{θ∗ /∈Bk} + I{θk /∈Bk} )] ≤ H. (36)
Therefore, we can obtain the bound for one arm as follows, Eθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s′∈S ( P i∗ (s ′ | zt)− P ik (s′ | zt) ) v (s′, θk) )] ≤ H + 4SNH(T1 +KT − τ1 − 1) + 48HS √ NT log(NT ).
(37)
Next we consider the state transitions of all arms. Recall that the joint state of all arms at time $t$ is $s_t$. Because every arm evolves independently, the transition probability from state $s_t$ to state $s_{t+1}$ is as follows:
P (st+1 | st, θ∗) = N∏ i=1 P i∗ ( sit+1 | sit ) ,
where $P^i_*$ is the true transition function of arm $i$. Based on Lemma 11 and our assumption that all arms share the same state space $\mathcal{S}$, we obtain
$$ \sum_{s_{t+1}} \big| P(s_{t+1} \mid s_t, \theta^*) - P(s_{t+1} \mid s_t, \theta_k) \big| \le \sum_{i=1}^{N} \big\| P^i_*(s^i_{t+1} \mid s^i_t) - P^i_k(s^i_{t+1} \mid s^i_t) \big\|_1 \le N \big\| P^i_*(s^i_{t+1} \mid s^i_t) - P^i_k(s^i_{t+1} \mid s^i_t) \big\|_1. \tag{38} $$
Therefore, we can bound the R03 as follows: R03 ≤ NH + 4SN2H(T1 +KT − τ1 − 1) + 48SNH √ NT log(NT ). (39)
Bounding $R_3^1$. Based on Proposition 2, we know that $\|b^*_t - \hat{b}_t\|_1 \le L_1 \|R^* - R_k\|_1 + L_2 \max_s \|P^*(s,:) - P_k(s,:)\|_2$.
Note that the elements of the true transition matrix $P^*$ and the sampled matrix $P_k$ lie in the interval $(0,1)$. Then, by standard norm inequalities, we know that
$$ \max_{s} \| P^*(s,:) - P_k(s,:) \|_2 \le \| P^* - P_k \|_1. $$
Therefore, we can bound the belief error at any time as follows: $\|b^*_t - \hat{b}_t\|_1 \le L_1 \|R^* - R_k\|_1 + L_2 \|P^* - P_k\|_1$. (40)
Recall that in the confidence set for $\mathcal{M}_k$, the error bound is the same for $\|R^* - R_k\|_1$ and $\|P^* - P_k\|_1$; based on the bounds in equations 34 and 35, we can bound $R_3^1$ as follows:
R13 ≤ HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk (L1∥R∗ −Rk∥1 + L2∥P ∗ − Pk∥1) ]
≤ (L1 + L2N)HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk ( 2βik (zt) + 2 ( I{P∗ /∈Bk} + I{Pk /∈Bk} ))] ≤ 48(L1 + L2N)SH √ NT log(NT ) + (L1 + L2N)H
4(L1 + L2N)SNH(T1 +KT − τ1 − 1).
(41)
Bounding R23. Based on equation 34 and equation 35, we can bound R23 as follows,
R23 = S 2HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥R∗(· | s)−Rk(· | s)∥1 ]
≤ S2HEθ∗ [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ( 2βik (zt) + 2 ( I{R∗ /∈Bk} + I{Rk /∈Bk} ))] ≤ HS2 + 4S3NH(T1 +KT − τ1 − 1) + 48HS3 √ NT log(NT ).
(42)
Combine the bound in equation 39, equation 41 and equation 42, we bound the term R3 as follows: R3 ≤ 48(L1 + L2N)SH √ NT log(NT ) + 4(L1 + L2N)SNH(T1 +KT − τ1 − 1)
+ (L1 + L2N)H +NH + 4SN 2H(T1 +KT − τ1 − 1) + 48SNH √ NT log(NT )
+HS2 + 4S3NH(T1 +KT − τ1 − 1) + 48HS3 √ NT log(NT )
= 48(L1 + L2N +N + S 2)SH √ NT log(NT ) + (L1 + L2N +N + S 2)H
+ 4(L1 + L2N +N 2 + S2)SNH(T1 +KT − τ1 − 1).
(43)
B.4 BOUND R4
Lemma 9. R4 satisfies the following bound:
R4 ≤ 48(L1 + L2N + N + S^2) S rmax √(NT log(NT)) + (L1 + L2N + N + S^2) rmax + 4(L1 + L2N + N + S^2) S A rmax (T1 + KT −
1. What is the focus and contribution of the paper regarding the restless bandit problem with unobserved states?
2. What are the strengths and weaknesses of the proposed Thompson Sampling with an Episodic Explore-Then-Commit (TSEETC) algorithm?
3. How does the paper's approach compare to other relevant works in the field, such as "the restless hidden markov bandit with linear rewards and side information"?
4. How does the paper tackle the challenge of unobserved states, and how effective is this approach?
5. What are the clarity, quality, novelty, and reproducibility of the paper's content?
6. Are there any subleties or insights to be discussed regarding the achieved T regret bound?
7. Can the authors provide discussion and guarantees for the frequentist regret of their algorithm?
8. Is Lemma 2 worth adding in the main body of the paper, or should it remain in the appendix?
9. How does the computational complexity of colored-UCRL2 of Ortner et al. (2012) compare to Thomson based sampling, and what about the computational demand of the oracle used to compute the optimal policy of a POMDP?
10. What is the significance of using an explore-then-commit or episodes in this setting, and how does it impact the result? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors consider a restless bandit problem where the states are unobserved. The authors propose the algorithm Thompson Sampling with Episodic Explore-Then-Commit (TSEETC). In the exploration phase, the algorithm uses a Bayesian approach, based on mixtures of Dirichlet priors, to update the unknown parameters and beliefs. In the exploitation phase the algorithm simply commits to the estimated model and selects actions greedily. This process is repeated for each episode. The authors provide a Bayesian regret guarantee for TSEETC scaling as √T.
Strengths And Weaknesses
Weaknesses
Assumption 1 seems to be too strong in comparison with a weakly communicating MDP; could the authors motivate why they restrict themselves to this assumption?
(Literature review) I believe the paper "the restless hidden markov bandit with linear rewards and side information" (see the arXiv version https://arxiv.org/pdf/1910.10271.pdf) is extremely relevant. The authors did not cite this work nor discuss it. It seems an instance-dependent regret bound of order log(T) is provided there. How does this result compare with the √T Bayesian regret bound provided by the authors here? A thorough comparison is needed!
The readability of the paper can be improved, and a clear explanation of how challenging the task of tackling unseen states is remains somewhat elusive in the main first 8 pages of the paper. I think the authors should focus on this aspect a bit more.
Strengths
The setting of restless bandits with unobserved states appears to be understudied, to the best of my knowledge.
Although I haven't thoroughly read most of the technical proofs, the results appear sound and technically correct.
Clarity, Quality, Novelty And Reproducibility
It appears that the main challenge tackled in this paper is the fact that the states are unobserved. The authors successfully overcome this challenge via an adequate Bayesian approach. However, the technical novelty of how they tackle this challenge is somewhat elusive in the main paper, and I believe it deserves more explanation. It would also improve the readability of their proofs if the authors added to sections 4.1 and 4.2 theoretical guarantees on the performance of their belief update procedure.
I am struggling to find an insight in achieving √T. In a sense, the provided result confirms that restless bandits with unobserved states are as easy as restless bandits with observed states (since both yield regret of order √T). But perhaps there are some subtleties to be discussed. Can the authors add some comments on this?
The authors didn't provide any discussion on the frequentist regret. Can the authors explain whether they can obtain similar guarantees for the frequentist regret of their algorithm?
Is Lemma 2 worth including in the main body? Shouldn't it be in the appendix?
The authors mention that colored-UCRL2 of Ortner et al. (2012) is computationally demanding in comparison with Thompson-sampling-based methods. What about the oracle used to compute the optimal policy of a POMDP? Can that also be computationally demanding?
The use of explore-then-commit or episodes is somewhat standard in the literature. Is there something fundamentally important about using it in this setting? If so, please motivate.
ICLR | Title
ONLINE RESTLESS BANDITS WITH UNOBSERVED STATES
Abstract
We study the online restless bandit problem, where each arm evolves according to a Markov chain independently, and the reward of pulling an arm depends on both the current state of the corresponding Markov chain and the action. The agent (decision maker) does not know the transition kernels and reward functions, and cannot observe the states of arms even after pulling. The goal is to sequentially choose which arms to pull so as to maximize the expected cumulative rewards collected. In this paper, we propose TSEETC, a learning algorithm based on Thompson Sampling with Episodic Explore-Then-Commit. The algorithm proceeds in episodes of increasing length and each episode is divided into exploration and exploitation phases. In the exploration phase in each episode, action-reward samples are collected in a round-robin way and then used to update the posterior as a mixture of Dirichlet distributions. At the beginning of the exploitation phase, TSEETC generates a sample from the posterior distribution as true parameters. It then follows the optimal policy for the sampled model for the rest of the episode. We establish the Bayesian regret bound Õ( √ T ) for TSEETC, where T is the time horizon. This is the first bound that is close to the lower bound of restless bandits, especially in an unobserved state setting. We show through simulations that TSEETC outperforms existing algorithms in regret.
1 INTRODUCTION
The restless multi-armed problem (RMAB) is a general setup to model many sequential decision making problems ranging from wireless communication (Tekin & Liu, 2011; Sheng et al., 2014), sensor/machine maintenance (Ahmad et al., 2009; Akbarzadeh & Mahajan, 2021) and healthcare (Mate et al., 2020; 2021). This problem considers one agent and N arms. Each arm i is modulated by a Markov chain M i with state transition function P i and reward function Ri. At each time, the agent decides which arm to pull. After the pulling, all arms undergo an action-dependent Markovian state transition. The goal is to decide which arm to pull to maximize the expected reward, i.e., E[ ∑T t=1 rt], where rt is the reward at time t and T is the time horizon.
In this paper, we consider the online restless bandit problem with unknown parameters (transition functions and reward functions) and unobserved states. Many works concentrate on learning unknown parameters (Liu et al., 2010; 2011; Ortner et al., 2012; Wang et al., 2020; Xiong et al., 2022a;b) while ignoring the possibility that the states are also unknown. The unobserved states assumption is common in real-world applications, such as cache access (Paria & Sinha, 2021) and recommendation system (Peng et al., 2020). In the cache access problem, the user can only get the perceived delay but cannot know whether the requested content is stored in the cache before or after the access. Moreover, in the recommender system, we do not know the user’s preference for the items. There are also some studies that consider the unobserved states. However, they often assume the parameters are known (Mate et al., 2020; Meshram et al., 2018; Akbarzadeh & Mahajan, 2021) and there is a lack of theoretical result (Peng et al., 2020; Hu et al., 2020). And the existing algorithms (Zhou et al., 2021; Jahromi et al., 2022) with theoretical guarantee do not match the lower regret bound of RMAB (Ortner et al., 2012).
One common way to handle the unknown parameters but with observed states is to use the optimism in the face of uncertainty (OFU) principle (Liu et al., 2010; Ortner et al., 2012; Wang et al., 2020). The regret bound in these works is too weak sometimes, because the baseline they consider, such
as pulling the fixed arms (Liu et al., 2010), is not optimal in RMAB problem. Ortner et al. (2012) derives the lower bound Õ( √ T ) for RMAB problem. However, it is not clear whether there is an efficient computational method to search out the optimistic model in the confidence region (Lakshmanan et al., 2015). Another way to estimate the unknown parameters is Thompson Sampling (TS) method (Jung & Tewari, 2019; Jung et al., 2019; Jahromi et al., 2022; Hong et al., 2022). TS algorithm does not need to solve all instances that lie within the confident sets as OFU-based algorithms (Ouyang et al., 2017). What’s more, empirical studies suggest that TS algorithms outperform OFUbased algorithms in bandit and Markov decision process (MDP) problems (Scott, 2010; Chapelle & Li, 2011; Osband & Van Roy, 2017).
Some studies assume that only the states of pulled arms are observable (Mate et al., 2020; Liu & Zhao, 2010; Wang et al., 2020; Jung & Tewari, 2019). They translate the partially observable Markov decision process (POMDP) problem into a fully observable MDP by regarding the state last observed and the time elapsed as a meta-state (Mate et al., 2020; Jung & Tewari, 2019), which is much simpler due to more observations about pulled arms. Mate et al. (2020), and Liu & Zhao (2010) derive the optimal index policy but they assume the known parameters. Restless-UCB in Wang et al. (2020) achieves the regret bound of Õ(T 2/3), which does not match the lower bound Õ( √ T ) regret, and also restricted to a specific Markov model. There are also some works that consider that the arm’s state is not visible even after pulling (Meshram et al., 2018; Akbarzadeh & Mahajan, 2021; Peng et al., 2020; Hu et al., 2020; Zhou et al., 2021; Yemini et al., 2021) and the classic POMDP setting (Jahromi et al., 2022). However, there are still some challenges unresolved. Firstly, Meshram et al. (2018) and Akbarzadeh & Mahajan (2021) study the RMAB problem with unobserved states but with known parameters. However, the true value of the parameters are often unavailable in practice. Secondly, the works study RMAB from a learning perspective, e.g., Peng et al. (2020); Hu et al. (2020) but there are no regret analysis. Thirdly, existing policies with regret bound Õ(T 2/3) (Zhou et al., 2021; Jahromi et al., 2022) often do not have a regret guarantee that scales as Õ( √ T ), which is the lower bound in RMAB problem (Ortner et al., 2012). Yemini et al. (2021) considers the arms are modulated by two unobserved states and with linear reward. This linear structure is quite a bit of side information that the decision maker can take advantage of for decision making and problem-dependent log(T ) is given.
To the best of our knowledge, there are no provably optimal policies that perform close to the offline optimum and match the lower bound in restless bandit, especially in unobserved states setting. The unobserved states bring much challenges to us. Firstly, we need to control estimation error about states, which itself is not directly observed. Secondly, the error depends on the model parameters in a complex way via Bayesian updating and the parameters are still unknown. Thirdly, since the state is not fully observable, the decision-maker cannot keep track of the number of visits to state-action pairs, a quantity that is crucial in the theoretical analysis. We design a learning algorithm TSEETC to estimate these unknown parameters, and benchmarked on a stronger oracle, we show that our algorithm achieves a tighter regret bound. In summary, we make the following contributions:
Problem formulation. We consider the online restless bandit problems with unobserved states and unknown parameters. Compared with Jahromi et al. (2022), our reward functions are unknown.
Algorithmic design. We propose TSEETC, a learning algorithm based on Thompson Sampling with Episodic Explore-Then-Commit. The whole learning horizon is divided into episodes of increasing length. Each episode is split into exploration and exploitation phases. In the exploration phase, to estimate the unknown parameters, we update the posterior distributions about unknown parameters as a mixture of Dirichlet distributions. For the unobserved states, we use the belief state to encode the historical information. In the exploitation phases, we sample the parameters from the posterior distribution and derive an optimal policy based on the sampled parameter. What’s more, we design the determined episode length in an increasing manner to control the total episode number, which is crucial to bound the regret caused by exploration.
Regret analysis. We consider a stronger oracle which solves POMDP based on our belief state. And we define the pseudo-count to store the state-action pairs. Under a Bayesian framework, we show that the expected regret of TSEETC accumulated up to time T is bounded by Õ( √ T ) , where
Õ hides logarithmic factors. This bound improves the existing results (Zhou et al., 2021; Jahromi et al., 2022).
Experiment results. We conduct the proof-of-concept experiments, and compare our policy with existing baseline algorithms. Our results show that TSEETC outperforms existing algorithms and achieve a near-optimal regret bound.
2 RELATED WORK
We review the related works in two main domains: learning algorithm for unknown parameters, and methods to identify unknown states.
Unknown parameters. Since the system parameters are unknown in advance, it is essential to study RMAB problems from a learning perspective. Generally speaking, these works can be divided into two categories: OFU (Ortner et al., 2012; Wang et al., 2020; Xiong et al., 2022a; Zhou et al., 2021; Xiong et al., 2022b) or TS based (Jung et al., 2019; Jung & Tewari, 2019; Jahromi et al., 2022; Hong et al., 2022). The algorithms based on OFU often construct confidence sets for the system parameters at each time, find the optimistic estimator that is associated with the maximum reward, and then select an action based on the optimistic estimator. However, these methods may not perform close to the offline optimum because the baseline policy they consider, such as pulling only one arm, is often a heuristic policy and not optimal. In this case, the regret bound O(log T ) (Liu et al., 2010) is less meaningful. Apart from these works, posterior sampling (Jung & Tewari, 2019; Jung et al., 2019) were used to solve this problem. A TS algorithm generally samples a set of MDP parameters randomly from the posterior distribution, then actions are selected based on the sampled model. Jung & Tewari (2019) and Jung et al. (2019) provide theoretical guarantee Õ( √ T ) in the Bayesian setting. TS algorithms are confirmed to outperform optimistic algorithms in bandit and MDP problems (Scott, 2010; Chapelle & Li, 2011; Osband & Van Roy, 2017).
Unknown states. There are some works that consider the states of the pulled arm are observed (Mate et al., 2020; Liu & Zhao, 2010; Wang et al., 2020; Jung & Tewari, 2019). Mate et al. (2020) and Liu & Zhao (2010) assumes the unobserved states but with known parameters. Wang et al. (2020) constructs an offline instance and give the regret bound Õ(T 2/3). Jung & Tewari (2019) considers the episodic RMAB problems and the regret bound Õ( √ T ) is guaranteed in the Bayesian setting. Some studies assume that the states are unobserved even after pulling. Akbarzadeh & Mahajan (2021) and Meshram et al. (2018) consider the RMAB problem with unknown states but known system parameters. And there is no regret guarantee. Peng et al. (2020) and Hu et al. (2020) consider the unknown parameters but there are also no any theoretical results. The most similar to our work is Zhou et al. (2021) and Jahromi et al. (2022). Zhou et al. (2021) considers that all arms are modulated by a common unobserved Markov Chain. They proposed the estimation method based on spectral method (Anandkumar et al., 2012) and learning algorithm based on upper confidence bound (UCB) strategy (Auer et al., 2002). They also give the regret bound Õ(T 2/3) and there is a gap between the lower bound Õ( √ T ) (Ortner et al., 2012). Jahromi et al. (2022) considers the POMDP setting and propose the pseudo counts to store the state-action pairs. Their learning algorithm is based on Ouyang et al. (2017) and the regret bound is also Õ(T 2/3). And their algorithm is not programmable due to the pseudo counts is conditioned on the true counts which is uncountable.
3 PROBLEM SETTING
Consider a restless bandit problem with one agent and N arms. Each arm i ∈ [N ] := {1, 2, . . . , N} is associated with an independent discrete–time Markov chainMi = (Si, P i), where Si is the state space and P i ∈ RSi×Si the transition functions. Let sit denote the state of arm i at time t and st = (s 1 t , s 2 t , . . . , s N t ) the state of all arms. Each arm i is also associated with a reward functions Ri ∈ RSi×R, where Ri (r | s) is the probability that the agent receives a reward r ∈ R when he pulls arm i in state s. We assume the state spaces Si and the reward set R are finite and known to the agent. The parameters P i and Ri, i ∈ [N ] are unknown, and the state st is also unobserved to the agent. For the sake of notational simplicity, we assume that all arms have the same state spaces S with size S. Our result can be generalized in a straightforward way to allow different state spaces. The whole game is divided into T time steps. The initial state si1 for each arm i ∈ [N ] is drawn from a distribution hi independently, which we assume to be known to the agent. At each time t, the agent chooses one arm at ∈ [N ] to pull and receives a reward rt ∈ R with probability Rat(rt | satt ). Note
that only the pulled arm has the reward feedback. His decision on which arm at to pull is based on the observed history Ht = [a1, r1, a2, r2 · · · , at−1, rt−1]. Note that the states of the arms are never observable, even after pulling. Each arm i makes a state transition independently according to the associated P i, whether it is pulled or not. This process continues until the end of the game. The goal of the agent is to maximize the total expected reward.
We use θ to denote the unknown P i and Ri for i ∈ [N ] collectively. Since the true states are unobservable, the agent maintains a belief state bit = [b i t(s, θ), s ∈ Si] ∈ ∆Si for each arm i, where
bit(s, θ) := P ( sit = s | Ht, θ ) ,
and ∆Si := { b ∈ RSi+ : ∑ s∈Si b(s) = 1 } is the probability simplex in RSi . Note that bit(s, θ) depends on the unknown model parameter θ, which itself has to be learned by the agent. We aggregate all arms as a whole Markov chainM and denote its transition matrix and reward function as P and R, respectively. For a given θ, the overall belief state bt = (b1t , b 2 t , · · · , bNt ) is a sufficient statistic for Ht−1 (Smallwood & Sondik, 1973), so the agent can base his decision at time t on bt only. Let ∆b := ∆S1 × · · · ×∆SN . A deterministic stationary policy π : ∆b → [N ] maps a belief state to an action. The long-term average reward of a policy π is defined as
Jπ(h, θ) := lim sup T→∞
1 T E [ T∑ t=1 rt ∣∣∣ h, θ] . (1) We use J(h, θ) = supπ J
π(h, θ) to denote the optimal long-term average reward. We assume J(h, θ) is independent of the initial distribution h as in Jahromi et al. (2022) and denoted it by J(θ). We make the following assumption. Assumption 1. The smallest element ϵ1 in the transition functions P i, i ∈ N is bigger than zero. Assumption 2. The smallest element ϵ2 in the reward functions Ri, i ∈ N is bigger than zero.
Assumption 1 and Assumption 2 are strong in general, but they help us bound the error of belief estimation (De Castro et al., 2017). Assumption 1 also makes the MDP weakly communicating (Bertsekas et al., 2011). For weakly communicating MDP, it is known that there exists a bounded function v(·, θ) : ∆b → R such that for all b ∈ ∆b (Bertsekas et al., 2011),
J(θ) + v(b, θ) = max a
{ r(b, a) +
∑ r P (r | b, a, θ)v (b′, θ)
} , (2)
where v is the relative value function, r(b, a) = ∑ s ∑ r b a(s, θ)Ra(r | s)r is the expected reward, b′ is the updated belief after obtaining the reward r, and P (r | b, a, θ) is the probability of observing r in the next step, conditioned on the current belief b and action a. The corresponding optimal policy is the maximizer of the right part in equation 2. Since the value function v(, θ) is finite, we can bound the span function sp(θ) := maxb v(b, θ) −minb v(b, θ) as Zhou et al. (2021). We show the details about this bound in Proposition 1 and denote the bound as H .
We consider the Bayesian regret. The parameters θ∗ is randomly generated from a known prior distribution Q at the beginning and then fixed but unknown to the agent. We measure the efficiency of a policy π by its regret, defined as the expected gap between the cumulative reward of an offline oracle and that of π, where the oracle is the optimal policy with the full knowledge of θ∗, but unknown states. The offline oracle is similar to Zhou et al. (2021), which is stronger than those considered in Azizzadenesheli et al. (2016) and Fiez et al. (2018). We focus on the Bayesian regret of policy π (Ouyang et al., 2017; Jung & Tewari, 2019) as follows,
RT := Eθ∗∼Q [ T∑ t=1 (J(θ∗)− rt) ] . (3)
The above expectation is with respect to the prior distribution about θ∗, the randomness in state transitions and the random reward.
4 THE TSEETC ALGORITHM
In section 4.1, we define the belief state and show how to update it with new observation. In section 4.2, we show how to update the posterior distributions under unknown states. In section 4.3, we show the details about our learning algorithm TSEETC.
4.1 BELIEF ENCODER FOR UNOBSERVED STATE
Here we focus on the belief update for arm i with true parameters θ∗. At time t, the belief for arm i in state s is bit(s, θ
∗). Then after the pulling of arm i, we obtain the observation rt. The belief bit(s ′, θ∗) can be update as follows:
bit+1(s ′, θ∗) =
∑ s b i t(s, θ
∗)Ri∗ (rt | s)P i∗(s′ | s)∑ s b i t(s, θ ∗)Ri∗ (rt | s) , (4)
where the P i∗(s ′ | s) is the probability of transitioning from state s at time t to state s′ andRi∗ (rt | s) is the probability of obtain reward rt under state s.
If arm i is not pulled, we update its belief as follows:
b_{t+1}^i(s′, θ∗) = ∑_s b_t^i(s, θ∗) P_*^i(s′ | s). (5)
At each time step, we aggregate the beliefs of all arms into b_t. Based on equation 2, we can then derive the optimal action a_t for the current belief b_t.
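A minimal NumPy sketch of the two belief updates in equations 4 and 5 for a single arm is given below; the array names (`P`, `R`, `belief`) and the toy numbers are ours, not from the paper.

```python
import numpy as np

def update_belief_pulled(belief, P, R, r_idx):
    """Belief update for a pulled arm (equation 4).

    belief : (S,) current belief b_t^i over hidden states
    P      : (S, S) transition matrix, P[s, s'] = P^i(s' | s)
    R      : (S, K) reward likelihoods, R[s, k] = R^i(r_k | s)
    r_idx  : index of the observed reward r_t in the reward set
    """
    weighted = belief * R[:, r_idx]         # b_t^i(s) * R^i(r_t | s)
    new_belief = weighted @ P               # sum_s weighted(s) * P^i(s' | s)
    return new_belief / weighted.sum()      # normalize by sum_s b_t^i(s) R^i(r_t | s)

def update_belief_unpulled(belief, P):
    """Belief update for an arm that is not pulled (equation 5): pure prediction."""
    return belief @ P

# toy example with S = 2 hidden states and 2 possible rewards
P = np.array([[0.9, 0.1], [0.2, 0.8]])
R = np.array([[0.7, 0.3], [0.4, 0.6]])
b = np.array([0.5, 0.5])
b = update_belief_pulled(b, P, R, r_idx=0)  # arm pulled, reward r_0 observed
b = update_belief_unpulled(b, P)            # arm idle at the next step
print(b, b.sum())                           # the belief stays a probability vector
```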
4.2 MIXTURE OF DIRICHLET DISTRIBUTION
In this section, we estimate the unknown P^i and R^i based on the Dirichlet distribution. The Dirichlet distribution is parameterized by a count vector ϕ = (ϕ_1, . . . , ϕ_k) with ϕ_i ≥ 0, such that the density of the probability distribution is f(p | ϕ) ∝ ∏_{i=1}^k p_i^{ϕ_i − 1} (Ghavamzadeh et al., 2015).
Since the true states are unobserved, all state sequences should be considered, each weighted in proportion to its likelihood (Ross et al., 2011). Denote the reward history collected from time t_1 to t_2 for arm i as r_{t_1:t_2}^i; the state history s_{t_1:t_2}^i and the belief history b_{t_1:t_2}^i are denoted similarly. With this history information, the posterior distributions g_t(P^i) and g_t(R^i) at time t can be updated as in Lemma 1.
Lemma 1. Under the unobserved-state setting, assume the transition function P^i has prior g_0(P^i) = f((P^i − ϵ_1 1)/(1 − ϵ_1) | ϕ^i) and the reward function R^i has prior g_0(R^i) = f((R^i − ϵ_2 1)/(1 − ϵ_2) | ψ^i). Given the information r_{0:t}^i and b_{0:t}^i, the posterior distributions are
g_t(P^i) ∝ ∑_{s̄_t^i ∈ S_i^t} g_0(P^i) w(s_{0:t}^i) ∏_{s,s′} ( (P^i(s′ | s) − ϵ_1)/(1 − ϵ_1) )^{N_{s,s′}^i(s̄_t^i) + ϕ_{s,s′}^i − 1}, (6)
g_t(R^i) ∝ ∑_{s̄_t^i ∈ S_i^t} g_0(R^i) w(s_{0:t}^i) ∏_{s,r} ( (R^i(r | s) − ϵ_2)/(1 − ϵ_2) )^{N_{s,r}^i(s̄_t^i) + ψ_{s,r}^i − 1}, (7)
where w(s_{0:t}^i) is the likelihood of the state sequence s_{0:t}^i and 1 is the all-ones vector, whose length matches the dimension of P or R accordingly. This procedure is summarized in Algorithm 1.
Algorithm 1 Posterior Update for R^i(s, ·) and P^i(s, ·)
1: Input: the history length τ_1, the state space S_i, the belief history b_{0:τ_1}^i, the reward history r_{0:τ_1}^i, the initial parameters ϕ_{s,s′}^i, ψ_{s,r}^i for s, s′ ∈ S_i, r ∈ R
2: generate the S_i^{τ_1} possible state sequences
3: calculate the weight w(j) = ∏_{t=1}^{τ_1} b_t^i(s, θ), j ∈ S_i^{τ_1}
4: for j in 1, . . . , S_i^{τ_1} do
5:   count the occurrence times of the events (s, s′) and (s, r) in sequence j as N_{s,s′}^i, N_{s,r}^i
6:   update ϕ_{s,s′}^i ← ϕ_{s,s′}^i + N_{s,s′}^i, ψ_{s,r}^i ← ψ_{s,r}^i + N_{s,r}^i
7:   aggregate the ϕ_{s,s′}^i as ϕ(j) and the ψ_{s,r}^i as ψ(j) for all s, s′ ∈ S_i, r ∈ R
8: end for
9: update the mixture Dirichlet distributions g_{τ_1}(P^i) ∝ ∑_{j=1}^{S_i^{τ_1}} w(j) f((P^i − ϵ_1 1)/(1 − ϵ_1) | ϕ(j)) and g_{τ_1}(R^i) ∝ ∑_{j=1}^{S_i^{τ_1}} w(j) f((R^i − ϵ_2 1)/(1 − ϵ_2) | ψ(j))
With Algorithm 1, we can update the posterior distributions of the unknown parameters and sample from them as if the samples were the true parameters. The belief estimation error can be bounded by the distance between the sampled parameters and the true values (Proposition 2). A theoretical guarantee on the estimation errors of the unknown parameters is provided in Lemma 2.
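The following is a simplified sketch of the posterior update in Algorithm 1. It is an assumption-laden illustration: the function name `mixture_dirichlet_update` is ours, the ϵ-shifted support of the Dirichlet priors in Lemma 1 is omitted, and, as in Algorithm 1, all S^{τ_1} hidden state sequences are enumerated (exponential in τ_1), so this is only practical for short exploration phases.

```python
import numpy as np
from itertools import product

def mixture_dirichlet_update(beliefs, rewards, states, phi0, psi0):
    """Sketch of Algorithm 1: mixture-of-Dirichlet posterior update with unobserved states.

    beliefs : (tau1, S) belief history b_t^i(s) over the exploration phase
    rewards : length-tau1 list of observed reward indices
    states  : iterable of hidden states, e.g. range(S)
    phi0    : (S, S) prior Dirichlet counts for the transition function
    psi0    : (S, K) prior Dirichlet counts for the reward function
    Returns a list of (weight, phi, psi) mixture components.
    """
    tau1, S = beliefs.shape
    components = []
    # enumerate every possible hidden state sequence (S^tau1 of them)
    for seq in product(states, repeat=tau1):
        w = np.prod([beliefs[t, seq[t]] for t in range(tau1)])  # likelihood weight of the sequence
        phi, psi = phi0.copy(), psi0.copy()
        for t in range(tau1):
            psi[seq[t], rewards[t]] += 1          # count the (state, reward) event
            if t + 1 < tau1:
                phi[seq[t], seq[t + 1]] += 1      # count the (s, s') transition
        components.append((w, phi, psi))
    total = sum(w for w, _, _ in components)
    return [(w / total, phi, psi) for w, phi, psi in components]
```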
4.3 OUR ALGORITHM
Our algorithm, TSEETC, operates in episodes of increasing lengths. Each episode is split into an exploration phase and an exploitation phase. Denote the total number of episodes by K_T and the first time step of episode k by t_k. We use T_k to denote the length of episode k, determined as T_k = T_1 + k − 1 with T_1 = ⌈(√T + 1)/2⌉. The length of the exploration phase in each episode is fixed at τ_1, which satisfies τ_1 K_T = O(√T) and τ_1 ≤ (T_1 + K_T − 1)/2. With these notations, our whole algorithm is shown below.
Algorithm 2 Thompson Sampling with Episodic Explore-Then-Commit
1: Input: priors g_0(P), g_0(R), initial belief b_0, exploration length τ_1, the first episode length T_1
2: for episode k = 1, 2, . . . do
3:   start the first time step of episode k, t_k := t
4:   generate R(t_k) ∼ g_{t_{k−1}+τ_1}(R) and P(t_k) ∼ g_{t_{k−1}+τ_1}(P)
5:   for t = t_k, t_k + 1, . . . , t_k + τ_1 do
6:     pull each arm i for τ_1/N times in a round-robin way
7:     receive the reward r_t
8:     update the belief b_t^i using R(t_k), P(t_k) based on equation 4
9:     update the beliefs b_t^j, j ∈ [N] \ {i}, using P(t_k) based on equation 5
10:   end for
11:   for i = 1, 2, . . . , N do
12:     input the obtained r_{t_1:t_1+τ_1}, . . . , r_{t_k:t_k+τ_1} and b_{t_1:t_1+τ_1}, . . . , b_{t_k:t_k+τ_1} to Algorithm 1 to update the posterior distributions g_{t_k+τ_1}(P), g_{t_k+τ_1}(R)
13:   end for
14:   generate R(t_k + τ_1) ∼ g_{t_k+τ_1}(R), P(t_k + τ_1) ∼ g_{t_k+τ_1}(P)
15:   for i = 1, 2, . . . , N do
16:     re-update the belief b_t^i from time 0 to t_k + τ_1 based on R(t_k + τ_1) and P(t_k + τ_1)
17:   end for
18:   compute π_k^∗(·) = Oracle(·, R(t_k + τ_1), P(t_k + τ_1))
19:   for t = t_k + τ_1 + 1, · · · , t_{k+1} − 1 do
20:     apply action a_t = π_k^∗(b_t)
21:     observe the new reward r_{t+1}
22:     update the beliefs b_t of all arms based on equations 4 and 5
23:   end for
24: end for
In episode k, during the exploration phase, we first sample θ_{t_k} from the distributions g_{t_{k−1}+τ_1}(P) and g_{t_{k−1}+τ_1}(R). We pull each arm τ_1/N times in a round-robin way. For the pulled arm, we update its belief based on equation 4 using θ_{t_k}; for the arms that are not pulled, we update their beliefs based on equation 5 using θ_{t_k}. After the exploration phase, the reward and belief histories of each arm are input to Algorithm 1 to update the posterior distributions. We then sample a new θ_{t_k+τ_1} from the posterior distributions and re-calibrate the belief b_t based on this most recent estimate. Next, we enter the exploitation phase: we first derive the optimal policy π_k for the sampled parameter θ_{t_k+τ_1}, and then use π_k for the rest of episode k.
We control the growth of the episode lengths in a deterministic manner. Specifically, the length of episode k is exactly one more than that of episode k − 1. With this deterministic schedule, the number of episodes K_T is bounded by O(√T), as shown in Lemma 10. The regret incurred by the exploration phases can then be bounded by O(√T), which is a crucial part of Theorem 1.
In TSEETC, we use the belief state to handle the unobserved states. Moreover, under the unobserved-state setting, we consider all possible state sequences and update the posterior distribution of the unknown parameters as a mixture of the resulting distributions, where each state sequence is weighted by its likelihood.
Remark 1. We use an Oracle to derive the optimal policy for the sampled parameters in Algorithm 2. The Oracle can be the Bellman equation for the POMDP introduced in equation 2, or approximation methods (Pineau et al., 2003; Silver & Veness, 2010), etc. The approximation error is discussed in Remark 3.
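A small numeric sketch of the deterministic episode schedule is given below; it illustrates why the number of episodes K_T grows like O(√T). The function name and the value T_1 = ⌈(√T + 1)/2⌉ follow our reading of the schedule above and are assumptions, not code from the paper.

```python
import math

def episode_schedule(T):
    """Episode lengths T_k = T_1 + k - 1 until the horizon T is covered."""
    T1 = math.ceil((math.sqrt(T) + 1) / 2)
    lengths, total, k = [], 0, 0
    while total < T:
        k += 1
        Tk = T1 + k - 1
        lengths.append(Tk)
        total += Tk
    return lengths

for T in (10_000, 50_000):
    lengths = episode_schedule(T)
    # the number of episodes stays within a small constant factor of sqrt(T)
    print(T, len(lengths), round(math.sqrt(T), 1))
```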
5 PERFORMANCE ANALYSIS
In Section 5.1, we present our theoretical results and discuss them. In Section 5.2, we provide a proof sketch; the detailed proof is in Appendix B.
5.1 REGRET BOUND AND DISCUSSIONS
Theorem 1. Suppose Assumptions 1 and 2 hold and the Oracle returns the optimal policy in each episode. The Bayesian regret of our algorithm satisfies
R_T ≤ 48 C_1 C_2 S √(N T log(NT)) + (τ_1 ∆R + H + 4 C_1 C_2 S N) √T + C_1 C_2,
where C_1 = L_1 + L_2 N + N² + S² and C_2 = r_max + H are constants independent of the time horizon T, L_1 = 4(1 − ϵ_1)² / (N ϵ_1² ϵ_2), L_2 = 4(1 − ϵ_1)² / ϵ_1³, ϵ_1 and ϵ_2 are the minimum elements of the functions P∗ and R∗, respectively, τ_1 is the fixed exploration length in each episode, ∆R is the largest gap between rewards obtained at any two different times, H is the bounded span, r_max is the maximum reward obtainable at each time, N is the number of arms, and S is the state size of each arm.
Remark 2. Theorem 1 shows that the regret of TSEETC is upper bounded by Õ(√T). This is the first bound that matches the lower bound for restless bandit problems (Ortner et al., 2012) in such an unobserved-state setting. Although TSEETC looks similar to explore-then-commit (Lattimore & Szepesvári, 2020), a key novelty of TSEETC lies in using posterior sampling to update the posterior distribution of the unknown parameters as a mixture of distributions, one per possible state sequence. Our algorithm balances exploration and exploitation in a deterministic-episode manner and ensures the episode length grows at a linear rate, which guarantees that the total number of episodes is bounded by O(√T). Therefore, the total regret caused by exploration is well controlled by O(√T), which is better than the O(T^{2/3}) bound in Zhou et al. (2021). Moreover, in the exploitation phase, our regret bound Õ(√T) is also better than the Õ(T^{2/3}) of Zhou et al. (2021). This shows that our posterior-sampling-based method is superior to the UCB-based solution (Osband & Van Roy, 2017). In Jahromi et al. (2022), the pseudo count of a state-action pair is always smaller than the true count with some probability at any time. In our algorithm, however, the sampled parameters concentrate more tightly around the true values as the posterior is updated; therefore our pseudo count (defined in equation 13), which is based on the belief, approximates the true count more closely, which helps us obtain a tighter bound.
5.2 PROOF SKETCH
In our algorithm, the total regret can be decomposed as follows:
R_T = E_{θ∗}[ ∑_{k=1}^{k_T} ∑_{t=t_k}^{t_k+τ_1} (J(θ∗) − r_t) ] + E_{θ∗}[ ∑_{k=1}^{k_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} (J(θ∗) − r_t) ], (8)
where the first term is Regret (A) and the second term is Regret (B).
Bounding Regret (A). Regret (A) is the regret incurred in the exploration phase of each episode. This term can be simply bounded as follows:
Regret (A) ≤ E_{θ∗}[ ∑_{k=1}^{k_T} τ_1 ∆R ] ≤ τ_1 ∆R k_T, (9)
where ∆R = r_max − r_min is the largest gap between rewards received at any two different times. The regret in equation 9 is related to the number of episodes k_T, which can be bounded by O(√T) as in Lemma 10.
Bounding Regret (B). Next we bound Regret (B), the regret in the exploitation phase. Define b̂_t as the belief updated with parameter θ_k, and b_t^∗ as the belief updated with θ∗. During episode k, based on equation 2 for the sampled parameter θ_k and the fact that a_t = π^∗(b̂_t), we can write
J(θ_k) + v(b̂_t, θ_k) = r(b̂_t, a_t) + ∑_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k). (10)
With this equation, we decompose the regret as
Regret (B) = R_1 + R_2 + R_3 + R_4, (11)
where each term is defined as follows:
R_1 = E_{θ∗} ∑_{k=1}^{k_T} [ (T_k − τ_1 − 1)(J(θ∗) − J(θ_k)) ],
R_2 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( v(b̂_{t+1}, θ_k) − v(b̂_t, θ_k) ) ],
R_3 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( ∑_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ],
R_4 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( r(b̂_t, a_t) − r(b_t^∗, a_t) ) ].
Bounding R1. One key property of posterior sampling algorithms is that, given the history H_{t_k}, the true parameter θ∗ and the sampled θ_k are identically distributed at time t_k, as stated in Lemma 13. Since the length T_k is deterministic and independent of θ_k, R_1 is zero thanks to this key property.
Bounding R2. The regret R_2 is a telescoping sum of value functions and can be bounded as R_2 ≤ H K_T. It depends only on the number of episodes and the upper bound H of the span function. As a result, R_2 reduces to a finite bound over the number of episodes k_T, which is bounded in Lemma 10.
Bounding R3 and R4. The regret terms R_3 and R_4 are related to the estimation error of θ. Thus we need to bound the parameter errors, especially in our unobserved-state setting. Recalling the definitions of ϕ and ψ, we define the posterior means of P̂^i(s′ | s) and R̂^i(r | s) for arm i at time t as
P̂^i(s′ | s)(t) = ( ϵ_1 + (1 − ϵ_1) ϕ_{s,s′}^i(t) ) / ( S ϵ_1 + (1 − ϵ_1) ‖ϕ_{s,·}^i(t)‖_1 ),  R̂^i(r | s)(t) = ( ϵ_2 + (1 − ϵ_2) ψ_{s,r}^i(t) ) / ( S ϵ_2 + (1 − ϵ_2) ‖ψ_{s,·}^i(t)‖_1 ). (12)
We also define the pseudo count of the state-action pair (s, a) before episode k as
N_{t_k}^i(s, a) = ‖ψ_{s,·}^i(t_k)‖_1 − ‖ψ_{s,·}^i(0)‖_1, (13)
where ψ_{s,·}^i(t_k) represents the count of the state-action pair z = (s, a) before episode k. Let M_k^i be the set of plausible MDPs in episode k with reward function R(r | z) and transition function P(s′ | z) satisfying
∑_{s′∈S} |P(s′ | z) − P̂_k^i(s′ | z)| ≤ β_k(z),  ∑_{r∈R} |R(r | z) − R̂_k^i(r | z)| ≤ β_k(z), (14)
where β_k^i(s, a) := √( 14 S log(2 N t_k T) / max{1, N_{t_k}^i(s, a)} ) is chosen conservatively (Auer et al., 2008) so that M_k^i contains both P_*^i and P_k^i, R_*^i and R_k^i with high probability. P_*^i and R_*^i are the true parameters defined in Section 4.1. In the unobserved-state setting, the belief error under different parameters is upper bounded by the gap between the estimators, as in Proposition 2. The core of the proof lies in deriving a high-probability confidence set with our pseudo counts and showing that the estimation error accumulated over T for each arm is bounded by √T. With the error bound for each arm, we can derive the final error bound for the MDP aggregated over all arms, as stated in Lemma 2.
Lemma 2 (estimation errors). The total estimation error of the transition functions accumulated over all exploitation phases satisfies
E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ‖P∗ − P_k‖_1 ] ≤ 48 S N √(N T log(NT)) + 4 S N² √T + N. (15)
Lemma 2 shows that the accumulated error is bounded by O(√T), which is crucial for obtaining the final bound, as in the observed-state setting (Ortner et al., 2012; Jung & Tewari, 2019). With C_1 = L_1 + L_2 N + N² + S², we state the final bounds on R_3 and R_4 below; the detailed proofs are in Appendices B.3 and B.4.
Lemma 3. R_3 satisfies the following bound:
R_3 ≤ 48 C_1 S H √(N T log(NT)) + 4 C_1 S N H √T + C_1 H.
Lemma 4. R_4 satisfies the following bound:
R_4 ≤ 48 C_1 S r_max √(N T log(NT)) + 4 C_1 S N r_max √T + C_1 r_max.
6 NUMERICAL EXPERIMENTS
In this section, we present proof-of-concept experiments with an approximate implementation of TSEETC. We consider two arms, each with two hidden states, and pull exactly one arm at each time. The learning horizon is T = 50000, and each algorithm runs for 100 iterations. The transition functions and reward functions are the same for all arms. We initialize the algorithm with an uninformative Dirichlet prior on the unknown parameters. We compare our algorithm with the simple heuristic ϵ-greedy (Lattimore & Szepesvári, 2020) (ϵ = 0.01), Sliding-Window UCB (Garivier & Moulines, 2011) with a specified window size, RUCB (Liu et al., 2010), Q-learning (Hu et al., 2020), and SEEU (Zhou et al., 2021). The results are shown in Figure 1. We find that TSEETC has the smallest regret among these algorithms.
In Figure 2, we plot the cumulative regret versus T of the six algorithms on a log-log scale. We observe that the slopes of all algorithms except TSEETC and SEEU are close to one, suggesting that they incur linear regret. Moreover, the slope of TSEETC is close to 0.5, which is better than that of SEEU. This is consistent with our theoretical result.
7 CONCLUSION
In this paper, we consider restless bandits with unknown states and unknown dynamics. We propose the TSEETC algorithm to estimate the unknown parameters and derive the optimal policy. We also establish a Bayesian regret bound of Õ(√T) for our algorithm, which is the first bound that matches the lower bound for restless bandit problems with unobserved states. Numerical results validate that TSEETC outperforms other learning algorithms in terms of regret. A related open question is whether our method can be applied to the setting where the transition functions are action dependent. We leave this for future research.
A TABLE OF NOTATIONS
Notation: Description
T: The length of the horizon
K_T: The number of episodes up to time T
T_k: The length of episode k
τ_1: The fixed exploration length in each episode
P^i: The transition function for arm i
R^i: The reward function for arm i
P_k: The sampled transition function for the aggregated MDP
R_k: The sampled reward function for the aggregated MDP
r_t: The reward obtained at time t
b_t^i(s, θ): The belief of being in state s at time t for arm i with parameter θ
b̂_t: The belief of all arms at time t with parameter θ_k
b_t^∗: The belief of all arms at time t with parameter θ∗
a_t: The action at time t
J(θ_k): The optimal long-term average reward with parameter θ_k
r_max: The maximum reward obtainable at each time
r_min: The minimum reward obtainable at each time
∆R: The largest gap between obtained rewards
B PROOF OF THEOREM 1
Recall that our goal is to minimize the regret:
R_T := E_{θ∗}[ ∑_{t=1}^T (J(θ∗) − r_t) ]. (16)
The reward r_t depends on the state s_t and the action a_t, so r_t can be written as r(s_t, a_t). Since E_{θ∗}[r(s_t, a_t) | H_{t−1}] = r(b_t^∗, a_t) for any t, we have
R_T = E_{θ∗}[ ∑_{t=1}^T (J(θ∗) − r(b_t^∗, a_t)) ]. (17)
In our algorithm, each episode is split into an exploration phase and an exploitation phase, so we can rewrite the regret as
R_T = E_{θ∗}[ ∑_{k=1}^{k_T} ∑_{t=t_k}^{t_k+τ_1} (J(θ∗) − r(b_t^∗, a_t)) + ∑_{k=1}^{k_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} (J(θ∗) − r(b_t^∗, a_t)) ], (18)
where τ_1 is the (constant) exploration length of each episode and t_k is the start time of episode k. We call the first part, caused by the exploration operations, Regret (A); the second part is Regret (B):
Regret (A) = E_{θ∗}[ ∑_{k=1}^{k_T} ∑_{t=t_k}^{t_k+τ_1} (J(θ∗) − r(b_t^∗, a_t)) ],
Regret (B) = E_{θ∗}[ ∑_{k=1}^{k_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} (J(θ∗) − r(b_t^∗, a_t)) ].
Recall that the reward set is R and the maximum reward gap in R is ∆R = r_max − r_min. Then J(θ∗) − r(b_t^∗, a_t) ≤ ∆R, and Regret (A) can be simply upper bounded as
Regret (A) ≤ E_{θ∗}[ ∑_{k=1}^{k_T} τ_1 ∆R ] ≤ τ_1 ∆R k_T.
Regret (A) is clearly related to the number of episodes k_T, which is bounded in Lemma 10. Next we bound Regret (B).
During episode k, based on equation 2, we get
J(θ_k) + v(b̂_t, θ_k) = r(b̂_t, a_t) + ∑_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k), (19)
where J(θ_k) is the optimal long-term average reward when the system parameter is θ_k, b̂_t is the belief at time t updated with parameter θ_k, r(b̂_t, a_t) is the expected reward when action a_t is taken at the current belief b̂_t, and b′ is the updated belief (equation 4, with parameter θ_k) after the reward r is received.
Using this equation, we decompose the regret as
Regret (B) = R_1 + R_2 + R_3 + R_4, (20)
where
R_1 = E_{θ∗} ∑_{k=1}^{k_T} [ (T_k − τ_1 − 1)(J(θ∗) − J(θ_k)) ],
R_2 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( v(b̂_{t+1}, θ_k) − v(b̂_t, θ_k) ) ],
R_3 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( ∑_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ],
R_4 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( r(b̂_t, a_t) − r(b_t^∗, a_t) ) ].
Next we bound the four parts one by one.
B.1 BOUND R1
Lemma 5. R1 satisfies that R1 = 0.
Proof. Recall that
R_1 = E_{θ∗} ∑_{k=1}^{k_T} [ (T_k − τ_1 − 1)(J(θ∗) − J(θ_k)) ].
For each episode, T_k is deterministic and independent of θ_k. Based on Lemma 13, we know that
E_{θ∗}[J(θ∗)] = E_{θ∗}[J(θ_k)].
Therefore, R_1 = 0.
B.2 BOUND R2
Lemma 6. R2 satisfies the following bound
R2 ≤ HKT , where KT is the total number of episodes until time T .
Proof. Recall that R_2 is a telescoping sum of value functions at times t + 1 and t:
R_2 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( v(b̂_{t+1}, θ_k) − v(b̂_t, θ_k) ) ]. (21)
Summing over each episode k, R_2 can be rewritten as
R_2 = E_{θ∗} ∑_{k=1}^{k_T} [ v(b̂_{t_{k+1}}, θ_k) − v(b̂_{t_k+τ_1+1}, θ_k) ].
Since the span of v(b, θ) is bounded by H as in Proposition 1, we obtain the final bound
R_2 ≤ H K_T.
B.3 BOUND R3
In this section, we first rewrite R_3 in Section B.3.1. In Section B.3.2, we show the details of how to bound R_3.
B.3.1 REWRITE R3
Lemma 7 (Rewriting R_3). The regret R_3 can be bounded as follows:
R_3 ≤ H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ‖P∗ − P_k‖_1 ] + H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ‖b_t^∗ − b̂_t‖_1 ] + S² H E_{θ∗}[ ∑_{k=1}^{k_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ‖R∗ − R_k‖_1 ],
where P_k and R_k are the sampled transition and reward functions in episode k, b_t^∗ is the belief at time t updated with the true P∗ and R∗, and b̂_t is the belief at time t updated with the sampled P_k and R_k.
Proof. Most of the proof is similar to Jahromi et al. (2022), except that we also need to handle the unknown reward functions. Recall that
R_3 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( ∑_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ],
and that H_t is the history of actions and observations prior to action a_t. Conditioned on H_t, θ∗ and θ_k, the only random variable in b̂_{t+1} is r_{t+1}, so we get
E_{θ∗}[ v(b̂_{t+1}, θ_k) | H_t, θ_k ] = ∑_{r∈R} v(b′, θ_k) P(r | b_t^∗, a_t, θ∗), (22)
where P(r | b_t^∗, a_t, θ∗) is the probability of receiving reward r given b_t^∗, a_t, θ∗. By the law of total probability, P(r | b_t^∗, a_t, θ∗) can be written as
P(r | b_t^∗, a_t, θ∗) = ∑_{s′} R∗(r | s′) P(s_{t+1} = s′ | H_t, θ∗)
 = ∑_{s′} R∗(r | s′) ∑_s P∗(s_{t+1} = s′ | s_t = s, H_t, a_t, θ∗) P(s_t = s | H_t, θ∗)
 = ∑_s ∑_{s′} b_t^∗(s) P∗(s′ | s) R∗(r | s′), (23)
where P∗ is the transition function and R∗ the reward function of the MDP aggregated over all arms. Therefore, we can rewrite R_3 as
R_3 = E_{θ∗}[ ∑_{k=1}^{k_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( ∑_{r∈R} ( P(r | b̂_t, a_t, θ_k) − P(r | b_t^∗, a_t, θ∗) ) v(b′, θ_k) ) ].
Based on equation 23, we get R3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)R ∗ (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )]
= Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] + Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] − Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)R ∗ (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )] .
(24)
where Rk is the sampled reward function for aggregated MDP, Pk is the sampled transition function for aggregated MDP.
Define R′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) [∑ s b̂t(s)Pk(s ′ | s)− ∑ s b∗t (s)P ∗(s′ | s) ])] ,
R′′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk) [Rk (r | s′)−R∗ (r | s′)] ∑ s b∗t (s)P ∗(s′ | s) )] .
Bounding R′3. The part R′3 can be bounded as Jahromi et al. (2022). R′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) [∑ s b̂t(s)Pk(s ′ | s)− ∑ s b∗t (s)P ∗(s′ | s)
])] = R′3(0) +R ′ 3(1)
where R′3(0) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
R′3(1) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)P ∗(s′ | s) )]
For R′3(0), because ∑ r Rk (r | s′) = 1, ∑ s′ Pk(s ′ | s) = 1,v (b′, θk) ≤ H , we have
R′3(0) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b̂t(s)Pk(s ′ | s) )]
− Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s b∗t (s)Pk(s ′ | s) )]
= Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s (b̂t(s)− b∗t (s)Pk(s′ | s) )]
≤ Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) ∑ s |b̂t(s)− b∗t (s)|Pk(s′ | s) )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s |b̂t(s)− b∗t (s)| )]
= HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∥∥∥b̂t(s)− b∗t (s)∥∥∥ 1 )] ,
where the first inequality is due to b̂t(s) − b∗t (s) ≤ |b̂t(s) − b∗t (s)| and the second inequality is because ∑ r Rk (r | s′) = 1, ∑ s′ Pk(s
′ | s) = 1, v (b′, θk) ≤ H . For the first term inR′3(1) , note that conditioned onHt, θ∗, the distribution of st is b∗t . Furthermore, at is measurable with respect to the sigma algebra generated byHt, θk since at = π∗(b̂t, θk). Thus, we have
Eθ∗ [ v (b′, θk)
∑ s P ∗ (s′ | s) b∗(s) | Ht, θk
] = v (b′, θk)Eθ∗ [P ∗ (s′ | s) | Ht, θk] . (25)
Eθ∗ [ v (b′, θk)
∑ s Pk (s ′ | s) b∗(s) | Ht, θk
] = v (b′, θk)Eθ∗ [Pk (s′ | s) | Ht, θk] . (26)
Substitute equation 25, equation 26 into R′3(1), we have
R′3(1) = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) (Pk(s′ | s)− P ∗(s′ | s)) )]
≤ Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk)Rk (r | s′) |Pk(s′ | s)− P ∗(s′ | s)| )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s′ |Pk(s′ | s)− P ∗(s′ | s)| )]
≤ HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∥Pk − P ∗∥1) ] ,
where the first inequality is because Pk(s′ | s)− P ∗(s′ | s) ≤ |Pk(s′ | s)− P ∗(s′ | s)|, the second inequality is due to v (b′, θk) ≤ H and ∑ r Rk (r | s′) = 1.
Therefore we obtain the final results,
R′3 ≤ HE [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||P ∗ − Pk||1 ] +HE [ KT∑ k=1 tk+1−1∑ t=tk+τ1+1 ||b∗t − b̂t||1 ] .
Bounding R′′3 . For part R′′3 , note that for any fixed s′, ∑ s b ∗ t (s)P
∗(s′ | s) ≤ S, therefore we can bound R′′3 as follows,
R′′3 = Eθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ r ∑ s′ v (b′, θk) [Rk (r | s′)−R∗ (r | s′)] ∑ s b∗t (s)P ∗(s′ | s) )]
≤ SHEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 (∑ s′ ∑ r [Rk (r | s′)−R∗ (r | s′)] )]
≤ SHEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 S ∥Rk −R∗∥1 ]
≤ S2HEθ∗ [ kT∑ k=1 tk+1−1∑ t=tk+τ1+1 ∥Rk −R∗∥1 ] ,
(27) where the first inequality is due to v (b′, θk) ≤ H and ∑ s b ∗ t (s)P
∗(s′ | s) ≤ S , the second inequality is due to for any fixed s′, ∑ r [Rk (r | s′)−R∗ (r | s′)] ≤ ∥Rk −R∗∥1.
B.3.2 BOUND R3
Lemma 8. R_3 satisfies the following bound:
R_3 ≤ 48(L_1 + L_2 N + N + S²) S H √(N T log(NT)) + (L_1 + L_2 N + N + S²) H + 4(L_1 + L_2 N + N² + S²) S N H (T_1 + K_T − τ_1 − 1).
Proof. Recall that R_3 is
R_3 = E_{θ∗} ∑_{k=1}^{k_T} [ ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( ∑_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ].
This regret term deals with the model estimation errors: it depends on the on-policy error between the sampled and true transition functions, and between the sampled and true reward functions. Thus we need to bound the parameter errors, especially in our unobserved-state setting. Based on the parameters of our Dirichlet distributions, we define the empirical estimates of the reward and transition functions for arm i as
P̂^i(s′ | s)(t) = ( ϵ_1 + (1 − ϵ_1) ϕ_{s,s′}^i(t) ) / ( S ϵ_1 + (1 − ϵ_1) ‖ϕ_{s,·}^i(t)‖_1 ),  R̂^i(r | s)(t) = ( ϵ_2 + (1 − ϵ_2) ψ_{s,r}^i(t) ) / ( S ϵ_2 + (1 − ϵ_2) ‖ψ_{s,·}^i(t)‖_1 ), (28)
where ϕ_{s,s′}^i(t) and ψ_{s,r}^i(t) are the parameters of the posterior distributions of P^i and R^i at time t, respectively. We also define the pseudo count N_{t_k}^i(s, a) of the state-action pair (s, a) before episode k for arm i as
N_{t_k}^i(s, a) = ‖ψ_{s,·}^i(t_k)‖_1 − ‖ψ_{s,·}^i(0)‖_1.
For notational simplicity, we use z = (s, a) ∈ S × A and z_t = (s_t, a_t) to denote state-action pairs. Based on Lemma 7, we can decompose R_3 as
R_3 = E_{θ∗}[ ∑_{k=1}^{k_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( ∑_r P(r | b̂_t, a_t, θ_k) v(b′, θ_k) − v(b̂_{t+1}, θ_k) ) ]
 = E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ∑_r ( P(r | b̂_t, a_t, θ_k) − P(r | b_t^∗, a_t, θ∗) ) v(b′, θ_k) ] ≤ R_3^0 + R_3^1 + R_3^2,
where
R_3^0 = H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ‖P∗ − P_k‖_1 ],
R_3^1 = H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ‖b_t^∗ − b̂_t‖_1 ],
R_3^2 = S² H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ‖R∗ − R_k‖_1 ].
Note that the following results focus on a single arm. Let P_*^i denote the true transition function of arm i and P_k^i the sampled transition function of arm i. We can extend the results from a single arm to the aggregated MDP based on Lemma 11.
Bounding R03. Since 0 ≤ v (b′, θk) ≤ H from our assumption , each term in the inner summation is bounded by ∑
s′∈S | ( P i∗ (s ′ | zt)− P ik (s′ | zt) ) |v (s′, θk)
≤H ∑ s′∈S ∣∣P i∗ (s′ | zt)− P ik (s′ | zt)∣∣ ≤H
∑ s′∈S ∣∣∣P i∗ (s′ | zt)− P̂ ik (s′ | zt)∣∣∣+H ∑ s′∈S ∣∣∣P ik (s′ | zt)− P̂ ik (s′ | zt)∣∣∣ . where P i∗ (s
′ | zt) is the true transition function, P ik (s′ | zt) is the sampled reward function and P̂ ik (s
′ | zt) is the posterior mean. The second inequality above in due to triangle inequality. Let Mik be the set of plausible MDPs in episode k with reward functionR (r | z) and transition function P (s′ | z) satisfying,∑
s′∈S ∣∣∣P (s′ | z)− P̂ ik (s′ | z)∣∣∣ ≤ βik(z), ∑ r∈R ∣∣∣R (r | z)− R̂ik (r | z)∣∣∣ ≤ βik(z), where βik(s, a) := √ 14S log(2NtkT )
max{1,Nitk (s,a)} is chosen conservatively (Auer et al., 2008) so thatMik con-
tains both P i∗ and P i k, R i ∗ and R i k with high probability. P i ∗ and R i ∗ are the true parameters as we defined in section 4.1. Note that βik(z) is the confidence set with δ = 1/tk. Recall the definition ofψ, we can define the pseudo count of state-action pair (s, a) as N itk(s, a) = ∥∥ψis,·(tk)∥∥1 − ∥∥ψis,·(0)∥∥1. Then we can obtain,∑
s′∈S ∣∣∣P i∗ (s′ | zt)− P̂ ik (s′ | zt)∣∣∣+ ∑ s′∈S ∣∣∣P ik (s′ | zt)− P̂ ik (s′ | zt)∣∣∣ ≤ 2βik (zt) + 2 ( I{P i∗ /∈Bk} + I{P ik /∈Bk} ) .
(29)
We assume the last episode is the longest. Even if this assumption does not hold, we can enlarge the summands to T_{K_T−1} − τ_1, which does not affect the order of our regret bound. Under this assumption, since no episode is longer than the last one, i.e., t_{k+1} − 1 − (t_k + τ_1) ≤ T_{K_T} − τ_1, we obtain
KT∑ k=1 tk+1−1∑ t=tk+τ1 βik (zt) ≤ KT∑ k=1 TkT −τ1∑ t=1 βik (zt) . (30)
Note that ∑_{s′∈S} |P_*^i(s′ | z_t) − P̂_k^i(s′ | z_t)| ≤ 2 always holds. Moreover, with our assumption τ_1 ≤ (T_1 + K_T − 1)/2, it is easy to show that β_k^i(z_t) ≤ 2 when N_{t_k}^i ≥ T_{k_T} − τ_1. Then we can obtain
KT∑ k=1 TkT −τ1∑ t=1 min{2, βik (zt)} ≤ KT∑ k=1 TkT −τ1∑ t=1 2I(N itk < TkT − τ1)
+ KT∑ k=1 TkT −τ1∑ t=1 I(N itk ≥ TkT − τ1)
√ 14S log (2NtkT )
max ( 1, N itk (zt)
) . (31)
Consider the first part of equation 31. Clearly, the maximum of N_{t_k}^i is T_{k_T} − τ_1. Because there are S A state-action pairs in total, the first part of equation 31 can be bounded as ∑_{k=1}^{K_T} ∑_{t=1}^{T_{k_T}−τ_1} 2 I(N_{t_k}^i < T_{k_T} − τ_1) ≤ 2 (T_{k_T} − τ_1) S A. Since T_{k_T} = T_1 + K_T − 1 and by Lemma 10, we get 2 (T_{k_T} − τ_1) S A = 2 (T_1 + K_T − τ_1 − 1) S A = O(√T).
Consider the second part of equation 31. Let N_t^i(s, a) denote the count of (s, a) before time t (not including t). Since we only count within the exploration phase of each episode, N_t^i(s, a) can be calculated as
N_t^i(s, a) = |{ τ < t, τ ∈ [t_k, t_k + τ_1], k ≤ k(t) : (s_τ^i, a_τ^i) = (s, a) }|,
where k(t) is the index of the episode containing time t.
For the second part of equation 31, when N_{t_k}^i ≥ T_{k_T} − τ_1, our assumption τ_1 ≤ (T_1 + K_T − 1)/2 gives
2 τ_1 ≤ T_1 + K_T − 1 = T_{k_T},
and therefore T_{k_T} − τ_1 ≥ τ_1. Because N_{t_k}^i ≥ T_{k_T} − τ_1, we have N_{t_k}^i(s, a) ≥ τ_1. For any t ∈ [t_k, t_k + τ_1], we have N_t^i(s, a) ≤ N_{t_k}^i(s, a) + τ_1 ≤ 2 N_{t_k}^i(s, a).
Therefore N_t^i(s, a) ≤ 2 N_{t_k}^i(s, a). Next we bound the confidence term when N_t(s, a) ≤ 2 N_{t_k}(s, a) as follows:
KT∑ k=1 TkT −τ1∑ t=1 βik (zt) ≤ KT∑ k=1 tk+1−1∑ t=tk
√ 14S log (2NtkT )
max ( 1, N itk (zt) ) ≤
KT∑ k=1 tk+1−1∑ t=tk
√ 14S log (2NT 2)
max ( 1, N itk (zt) ) =
T∑ t=1
√ 28S log (2NT 2)
max ( 1, N it (zt) ) ≤ √ 56S log(2NT )
T∑ t=1 1√ max ( 1, N it (zt) ) .
(32)
where the second inequality in equation 32 is due to tk ≤ T for all episodes and the first equality is due to N it (s, a) ≤ 2N itk(s, a).
Then similar to Ouyang et al. (2017), since N it (zt) is the count of visits to zt, we have T∑ t=1 1√ max ( 1, N it (zt) ) =∑ z T∑ t=1 I{zt=z}√ max ( 1, N it (z)
) = ∑ z I{NiT+1(z)>0} + NiT+1(z)−1∑ j=1 1√ j
≤ ∑ z ( I{NiT+1(z)>0} + 2 √ N iT+1(z) ) ≤ 3 ∑ z √ N iT+1(z).
Since ∑ z N i T+1(z) ≤ T , we have
3 ∑ z √ N iT+1(z) ≤ 3 √ SN ∑ z N iT+1(z) = 3 √ SNT . (33)
With equations 32 and 33 we get
2H ∑_{k=1}^{K_T} ∑_{t=t_k}^{t_{k+1}−1} β_k^i(z_t) ≤ 6√56 H S √(N T log(NT)) ≤ 48 H S √(N T log(NT)).
Then we can bound equation 30 as follows:
∑_{k=1}^{K_T} ∑_{t=t_k}^{t_{k+1}−1} β_k^i(z_t) ≤ 24 S √(N T log(NT)) + 2 S A (T_1 + K_T − τ_1 − 1). (34)
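The visit-count argument behind equations 32 to 34 (that ∑_t 1/√max(1, N_t(z_t)) is at most 3 ∑_z √N_{T+1}(z) ≤ 3√(|Z| T)) can be checked numerically. The sketch below is ours; the horizon, number of state-action pairs, and random visit sequence are arbitrary assumptions used only to illustrate the inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
T, num_pairs = 20_000, 12                       # horizon and number of (s, a) pairs
z = rng.integers(0, num_pairs, size=T)          # an arbitrary visit sequence z_1, ..., z_T

counts = np.zeros(num_pairs)
lhs = 0.0
for zt in z:
    lhs += 1.0 / np.sqrt(max(1.0, counts[zt]))  # 1 / sqrt(max(1, N_t(z_t)))
    counts[zt] += 1

rhs = 3 * np.sum(np.sqrt(counts))               # 3 * sum_z sqrt(N_{T+1}(z))
cap = 3 * np.sqrt(num_pairs * T)                # Cauchy-Schwarz: <= 3 sqrt(|Z| T)
print(lhs, rhs, cap)                            # lhs <= rhs <= cap holds for any visit sequence
```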
Choosing δ = 1/T in Lemma 12 and using Lemma 13, we obtain
P(P_k^i ∉ B_k) = P(P_*^i ∉ B_k) ≤ 1 / (15 T t_k^6).
Then we obtain
2 E_{θ∗}[ ∑_{k=1}^{K_T} T_k ( I{θ∗ ∉ B_k} + I{θ_k ∉ B_k} ) ] ≤ (4/15) ∑_{k=1}^∞ t_k^{−6} ≤ (4/15) ∑_{k=1}^∞ k^{−6} ≤ 1, (35)
and therefore
2 H E_{θ∗}[ ∑_{k=1}^{K_T} T_k ( I{θ∗ ∉ B_k} + I{θ_k ∉ B_k} ) ] ≤ H. (36)
Therefore, we obtain the bound for a single arm:
E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( ∑_{s′∈S} ( P_*^i(s′ | z_t) − P_k^i(s′ | z_t) ) v(s′, θ_k) ) ] ≤ H + 4 S N H (T_1 + K_T − τ_1 − 1) + 48 H S √(N T log(NT)). (37)
Next we consider the state transitions of all arms. Recall that the joint state of all arms at time t is s_t. Because every arm evolves independently, the transition probability from state s_t to state s_{t+1} is
P(s_{t+1} | s_t, θ∗) = ∏_{i=1}^N P_*^i(s_{t+1}^i | s_t^i),
where P_*^i is the true transition function of arm i. Based on Lemma 11 and our assumption that all arms share the same state space S, we obtain
∑_{s_{t+1}} |P(s_{t+1} | s_t, θ∗) − P(s_{t+1} | s_t, θ_k)| ≤ ∑_{i=1}^N ‖P_*^i(s_{t+1}^i | s_t^i) − P_k^i(s_{t+1}^i | s_t^i)‖_1 ≤ N ‖P_*^i(s_{t+1}^i | s_t^i) − P_k^i(s_{t+1}^i | s_t^i)‖_1. (38)
Therefore, we can bound R_3^0 as follows:
R_3^0 ≤ N H + 4 S N² H (T_1 + K_T − τ_1 − 1) + 48 S N H √(N T log(NT)). (39)
Bounding R_3^1. Based on Proposition 2, we know that
‖b_t^∗ − b̂_t‖_1 ≤ L_1 ‖R∗ − R_k‖_1 + L_2 max_s ‖P∗(s, :) − P_k(s, :)‖_2.
Note that the elements of the true transition matrix P∗ and the sampled matrix P_k lie in the interval (0, 1). Then, by standard norm inequalities,
max_s ‖P∗(s, :) − P_k(s, :)‖_2 ≤ ‖P∗ − P_k‖_1.
Therefore, we can bound the belief error at any time as
‖b_t^∗ − b̂_t‖_1 ≤ L_1 ‖R∗ − R_k‖_1 + L_2 ‖P∗ − P_k‖_1. (40)
Recall that in the confidence set for M_k, the error bound is the same for ‖R∗ − R_k‖_1 and ‖P∗ − P_k‖_1. Based on the bounds in equations 34 and 35, we can bound R_3^1 as
R_3^1 ≤ H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k}^{t_{k+1}−1} ( L_1 ‖R∗ − R_k‖_1 + L_2 ‖P∗ − P_k‖_1 ) ]
 ≤ (L_1 + L_2 N) H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k}^{t_{k+1}−1} ( 2 β_k^i(z_t) + 2 ( I{P∗ ∉ B_k} + I{P_k ∉ B_k} ) ) ]
 ≤ 48 (L_1 + L_2 N) S H √(N T log(NT)) + (L_1 + L_2 N) H + 4 (L_1 + L_2 N) S N H (T_1 + K_T − τ_1 − 1). (41)
Bounding R_3^2. Based on equations 34 and 35, we can bound R_3^2 as
R_3^2 = S² H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ‖R∗(· | s) − R_k(· | s)‖_1 ]
 ≤ S² H E_{θ∗}[ ∑_{k=1}^{K_T} ∑_{t=t_k+τ_1+1}^{t_{k+1}−1} ( 2 β_k^i(z_t) + 2 ( I{R∗ ∉ B_k} + I{R_k ∉ B_k} ) ) ]
 ≤ H S² + 4 S³ N H (T_1 + K_T − τ_1 − 1) + 48 H S³ √(N T log(NT)). (42)
Combining the bounds in equations 39, 41, and 42, we bound R_3 as follows:
R_3 ≤ 48 (L_1 + L_2 N) S H √(N T log(NT)) + 4 (L_1 + L_2 N) S N H (T_1 + K_T − τ_1 − 1)
 + (L_1 + L_2 N) H + N H + 4 S N² H (T_1 + K_T − τ_1 − 1) + 48 S N H √(N T log(NT))
 + H S² + 4 S³ N H (T_1 + K_T − τ_1 − 1) + 48 H S³ √(N T log(NT))
 = 48 (L_1 + L_2 N + N + S²) S H √(N T log(NT)) + (L_1 + L_2 N + N + S²) H
 + 4 (L_1 + L_2 N + N² + S²) S N H (T_1 + K_T − τ_1 − 1). (43)
B.4 BOUND R4
Lemma 9. R_4 satisfies the following bound:
R_4 ≤ 48 (L_1 + L_2 N + N + S²) S r_max √(N T log(NT)) + (L_1 + L_2 N + N + S²) r_max
+ 4(L1 + L2N +N + S 2)SArmax(T1 +KT − | 1. What is the focus of the paper regarding partially observable multi-armed bandits?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in terms of regret bound and Assumption 1?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper's contributions and relevance to the field? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an algorithm, called TSEETC that aims at providing a low-regret learning algorithm for partially observable multi-armed bandits. In this paper, the decision maker faces N Markov reward processes and chooses which arm to activate at each time instant. The state of a given arm is revealed to the decision maker only when this arm is activated. As is classically done in the literature, by changing the state space, this Markovian bandit problem with non-observable states is transformed in a restless Markovian bandit problem with observable state.
The authors assume that the decision maker knows neither the transition probabilities nor the rewards, and design a learning algorithm for this setting. Under a quite strong condition that all states can be attained with probability \varepsilon, they prove that this algorithm has an O(\sqrt{T}) regret.
Strengths And Weaknesses
I like the problem studied in this paper. This particular restless bandit setting has been studied in the literature and can represent many problems, e.g., sending probes to estimate communication channels' quality. The regret bound is not particularly new but improves over previous work.
Weaknesses:
My main problem with this paper is that the contributions are hidden: the authors explain their results but there is little comment about their relevance or their originality. For instance: the main claim of the authors is that the algorithm has a O(\sqrt{t}) Bayesian regret. To me, it seems that any classical learning algorithm will have a linear regret for this problem (the only difficulty seems to be the unbounded state space if an arm is not activated for a long time). Hence: what makes this algorithm interesting, and what is this specific form of explore-and-commit with multiple episodes? Is this specific to this particular restless bandit setting, or could/should it be used elsewhere?
Also, Assumption 1 seems quite strong. In particular, it is not satisfied by the "Dirichlet prior" studied in the paper. It seems that this diminishes the value of the proposed algorithm because the theoretical properties are not applicable to the algorithm studied.
Minor comments:
The paper needs an oracle to compute the policy.
page 4: in general, for weakly communicating MDPs, there are many functions satisfying (2) (not just up to an additive constant). Hence the span is not clearly defined.
On Figures 6: the authors plot the log of the performance as a function of the log of time -> using a log-log plot with proper scaling would be easier to read.
Is not Lemma 1 obvious?
Clarity, Quality, Novelty And Reproducibility
The paper has a few English typos in different places, which sometimes makes the mathematics difficult to parse. The setting studied in the paper is quite classical. The novelty is harder for me to judge (see my comment in the "weaknesses" above), but the method and algorithm proposed seem quite classical.
I could not find the code to check reproducibility. |
ICLR | Title
Optimization Variance: Exploring Generalization Properties of DNNs
Abstract
Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent. Through bias-variance decomposition, recent studies revealed that the bell-shaped variance is the major cause of model-wise double descent (when the DNN is widened gradually). This paper investigates epoch-wise double descent, i.e., the test error of a DNN also shows double descent as the number of training epoches increases. Specifically, we extend the bias-variance analysis to epoch-wise double descent, and reveal that the variance also contributes the most to the zero-one loss, as in model-wise double descent. Inspired by this result, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. OV can be estimated using samples from the training set only but correlates well with the (unknown) test error. It can be used to predict the generalization ability of a DNN when the zero-one loss is used in test, and hence early stopping may be achieved without using a validation set.
1 INTRODUCTION
Deep Neural Networks (DNNs) usually have large model capacity, but also generalize well. This violates the conventional VC dimension (Vapnik, 1999) or Rademacher complexity theory (ShalevShwartz & Ben-David, 2014), inspiring new designs of network architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016; Zagoruyko & Komodakis, 2016) and reconsideration of their optimization and generalization (Zhang et al., 2017; Arpit et al., 2017; Wang et al., 2018; Kalimeris et al., 2019; Rahaman et al., 2019; Allen-Zhu et al., 2019).
Model-wise double descent, i.e., as a DNN’s model complexity increases, its test error first shows a classical U-shaped curve and then enters a second descent, has been observed on many machine learning models (Advani & Saxe, 2017; Belkin et al., 2019a; Geiger et al., 2019; Maddox et al., 2020; Nakkiran et al., 2020). Multiple studies provided theoretical evidence of this phenomenon in some tractable settings (Mitra, 2019; Hastie et al., 2019; Belkin et al., 2019b; Yang et al., 2020; Bartlett et al., 2020; Muthukumar et al., 2020). Specifically, Neal et al. (2018) and Yang et al. (2020) performed bias-variance decomposition for mean squared error (MSE) and the cross-entropy (CE) loss, and empirically revealed that the bell-shaped curve of the variance is the major cause of modelwise double descent. Maddox et al. (2020) proposed to measure the effective dimensionality of the parameter space, which can be further used to explain model-wise double descent.
Recently, a new double descent phenomenon, epoch-wise double descent, was observed, when increasing the number of training epochs instead of the model complexity1 (Nakkiran et al., 2020). Compared with model-wise double descent, epoch-wise double descent is relatively less explored. Heckel & Yilmaz (2020) showed that epoch-wise double descent occurs in the situation where different parts of DNNs are learned at different epochs, which can be eliminated by proper scaling of step sizes. Zhang & Wu (2020) discovered that the energy ratio of the high-frequency components of a DNN’s prediction landscape, which can reflect the model capacity, switches from increase to
1In practice, some label noise is often added to the training set to make the epoch-wise double descent more conspicuous.
decrease at a certain training epoch, leading to the second descent of the test error. However, this metric fails to provide further information on generalization, such as the early stopping point, or how the size of a DNN influences its performance.
This paper utilizes bias-variance decomposition of the zero-one (ZO) loss (CE loss is still used in training) to further investigate epoch-wise double descent. By monitoring the behaviors of the bias and the variance, we find that the variance plays an important role in epoch-wise double descent, which dominates and highly correlates with the variation of the test error.
Though the variance correlates well with the test error, estimating its value requires training models on multiple different training sets drawn from the same data distribution, whereas in practice usually only one training set is available2. Inspired by the fact that the source of variance comes from the random-sampled training sets, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. This metric can be estimated from a single model using samples drawn from the training set only. More importantly, it correlates well with the test error, and thus can be used to determine the early stopping point in DNN training, without using any validation set.
Some complexity measures have been proposed to illustrate the generalization ability of DNNs, such as sharpness (Keskar et al., 2017) and norm-based measures (Neyshabur et al., 2015). However, their values rely heavily on the model parameters, making comparisons across different models very difficult. Dinh et al. (2017) shows that by re-parameterizing a DNN, one can alter the sharpness of its searched local minima without affecting the function it represents; Neyshabur et al. (2018) shows that these measures cannot explain the generalization behaviors when the size of a DNN increases. Our proposed metric, which only requires the logit outputs of a DNN, is less dependent on model parameters, and hence can explain many generalization behaviors, e.g., the test error decreases as the network size increases. Chatterji et al. (2020) proposed a metric called Model Criticality that can explain the superior generalization performance of some architectures over others, yet it is unexplored that whether this metric can be used to indicate generalization in the entire training procedure, especially for some relatively complex generalization behaviors, such as epoch-wise double descent.
To summarize, our contributions are:
• We perform bias-variance decomposition on the test error to explore epoch-wise double descent. We show that the variance dominates the variation of the test classification error.
• We propose a novel metric, OV, which is calculated from the training set only and correlates well with the test classification error.
• Based on the OV, we propose an approach to search for the early stopping point without using a validation set, when the zero-one loss is used in test. Experiments verified its effectiveness.
The remainder of this paper is organized as follows: Section 2 introduces the details of tracing bias and variance over training epochs. Section 3 proposes the OV and demonstrates its ability to indicate the test behaviors. Section 4 draws conclusions and points out some future research directions.
2 BIAS AND VARIANCE IN EPOCH-WISE DOUBLE DESCENT
This section presents the details of tracing the bias and the variance during training. We show that the variance dominates the epoch-wise double descent of the test error.
2.1 A UNIFIED BIAS-VARIANCE DECOMPOSITION
Bias-variance decomposition is widely used to analyze the generalization properties of machine learning algorithms (Geman et al., 1992; Friedman et al., 2001). It was originally proposed for
2Assume the training set has n samples. We can partition it into multiple smaller training sets, each with m samples (m < n), and then train multiple models. However, the variance estimated from this case would be different from the one estimated from training sets with n samples. We can also bootstrap the original training set into multiple ones, each with n samples. However, the data distribution of each bootstrap replica is different from the original training set, and hence the estimated variance would also be different.
the MSE loss and later extended to other loss functions, e.g., CE and ZO losses (Kong & Dietterich, 1995; Tibshirani, 1996; Kohavi et al., 1996; Heskes, 1998). Our study utilizes a unified bias-variance decomposition that was proposed by Domingos (2000) and applicable to arbitrary loss functions.
Let (x, t) be a sample drawn from the data distribution D, where x ∈ ℝ^d denotes the d-dimensional input and t ∈ ℝ^c the one-hot encoding of the label over c classes. The training set T = {(x_i, t_i)}_{i=1}^n ∼ D^n is utilized to train the model f : ℝ^d → ℝ^c. Let y = f(x; T) ∈ ℝ^c be the probability output of the model f trained on T, and L(t, y) the loss function. The expected loss E_T[L(t, y)] should be small to ensure that the model both accurately captures the regularities in its training data and generalizes well to unseen data.
According to Domingos (2000), a unified bias-variance decomposition3 of ET [L(y, t)] is:
E_T[L(t, y)] = L(t, ȳ) + β E_T[L(ȳ, y)], (1)
where the first term is the bias, the second term is the variance, β takes different values for different loss functions, and ȳ is the expected output:
ȳ = argmin_{y∗ ∈ ℝ^c : ∑_{k=1}^c y_k^∗ = 1, y_k^∗ ≥ 0} E_T[L(y∗, y)]. (2)
ȳ minimizes the variance term in (1), which can be regarded as the “center" or “ensemble" of y w.r.t. different T . Table 1 shows specific forms of L, ȳ, and β for different loss functions (the detailed derivations can be found in Appendix A). This paper focuses on the bias-variance decomposition of the ZO loss, because epoch-wise double descent of the test error is more obvious when the ZO loss is used (see Appendix C). To capture the overall bias and variance, we analyzed Ex,tET [L(t,y)], i.e., the expectation of ET [L(t,y)] over the distribution D.
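As a quick numeric illustration of equation 1 in the familiar MSE case (where ȳ reduces to the mean prediction and β = 1 in the standard decomposition), the sketch below checks that bias plus variance equals the expected loss; the simulated predictions are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([1.0, 0.0, 0.0])                  # one-hot target
preds = rng.dirichlet(np.ones(3), size=1000)   # predictions y from 1000 hypothetical training sets

y_bar = preds.mean(axis=0)                     # for MSE, the "ensemble" output is the mean
expected_loss = np.mean(np.sum((preds - t) ** 2, axis=1))
bias = np.sum((y_bar - t) ** 2)
variance = np.mean(np.sum((preds - y_bar) ** 2, axis=1))
print(expected_loss, bias + variance)          # the two sides agree (beta = 1 for MSE)
```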
2.2 TRACE THE BIAS AND VARIANCE TERMS OVER TRAINING EPOCHS
To trace the bias term Ex,t[L(t, ȳ)] and the variance term Ex,tET [L(ȳ,y)] w.r.t. the training epoch, we need to sample several training sets and train models on them respectively, so that the bias and variance terms can be estimated from them.
Concretely, let T ∗ denote the test set, f(x; Tj , q) the model f trained on Tj ∼ Dn (j = 1, 2, ...,K) for q epochs. Then, the estimated bias and variance terms at the q-th epoch, denoted as B(q) and V (q), respectively, can be written as:
B(q) = E_{(x,t)∈T∗}[ L(t, f̄(x; q)) ], (3)
V(q) = E_{(x,t)∈T∗}[ (1/K) ∑_{j=1}^K L(f̄(x; q), f(x; T_j, q)) ], (4)
where
f̄(x; q) = H( ∑_{j=1}^K H(f(x; T_j, q)) ) (5)
is the voting result of {f(x; T_j, q)}_{j=1}^K.
3 In real-world situations, the expected loss consists of three terms: bias, variance, and noise. Similar to Yang et al. (2020), we view t as the ground truth and ignore the noise term.
We should emphasize that, in real-world situations, D cannot be obtained; hence each T_j in our experiments was randomly sampled from the training set (we sampled 50% of the training data for each T_j). As a result, although this reveals the cause of epoch-wise double descent, the behaviors of the bias and the variance may differ when the whole training set is used.
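Below is a small sketch of how the ZO-loss bias and variance in equations 3 to 5 could be estimated from the probability outputs of K models; the function name, array layout, and the toy inputs are our own assumptions rather than the paper's code.

```python
import numpy as np

def zero_one_bias_variance(probs, labels):
    """Estimate the ZO-loss bias (eq. 3) and variance (eq. 4) from K models.

    probs  : (K, M, C) predicted class probabilities of K models on M test samples
    labels : (M,) integer class labels
    """
    K, M, C = probs.shape
    hard = probs.argmax(axis=2)                 # (K, M) predicted labels H(f(x; T_j, q))
    votes = np.zeros((C, M), dtype=int)
    for k in range(K):
        votes[hard[k], np.arange(M)] += 1       # accumulate one-hot votes per sample
    ensemble = votes.argmax(axis=0)             # (M,) voting result f_bar(x; q) of eq. 5
    bias = np.mean(ensemble != labels)          # eq. 3 with the ZO loss
    variance = np.mean(hard != ensemble[None, :])  # eq. 4: average disagreement with f_bar
    return bias, variance

# toy usage with K = 5 models, M = 200 samples, C = 10 classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(5, 200))
labels = rng.integers(0, 10, size=200)
print(zero_one_bias_variance(probs, labels))
```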
We considered ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) models4 trained on SVHN (Netzer et al., 2011), CIFAR10 (Krizhevsky, 2009), and CIFAR100 (Krizhevsky, 2009). SGD and Adam (Kingma & Ba, 2014) optimizers with different learning rates were used. The batchsize was set to 128, and all models were trained for 250 epochs with data augmentation. Prior to sampling {Tj}Kj=1 (K = 5) from the training set, 20% labels of the training data were randomly shuffled to introduce epoch-wise double descent.
Figure 1 shows the expected ZO loss and its bias and variance. The bias descends rapidly at first and then generally converges to a low value, whereas the variance behaves almost exactly the same as the test error, mimicking even small fluctuations of the test error. To further verify this, we performed additional experiments with different optimizers, learning rates, and levels of label noise (see Appendices E and G). All experimental results demonstrated that it is mainly the variance that contributes to epoch-wise double descent.
2.3 DISCUSSION
Contradicting the traditional view that the variance keeps increasing because of overfitting, our experimental results show a more complex behavior: the variance starts high and then decreases rapidly, followed by a bell curve. The difference at the beginning (when the number of epochs is small) is mainly due to the choice of loss functions (see experimental results of bias-variance decomposition for MSE and CE losses in Appendix F). CE and MSE losses, analyzed in the traditional
4Adapted from https://github.com/kuangliu/pytorch-cifar
learning theory, can reflect the degree of difference of probabilities, whereas the ZO loss only the labels. At the early stage of training, the output probabilities are close to random guesses, and hence a small difference in probabilities may lead to completely different labels, resulting in the distinct variance for different loss functions. However, the reason why the variance begins to diminish at the late phase of training is still unclear. We will explore this problem in our future research.
3 OPTIMIZATION VARIANCE (OV)
This section proposes a new metric, OV, to measure the diversity of model updates introduced by random training batches during optimization. This metric can indicate test behaviors without any validation set.
3.1 NOTATION AND DEFINITION
Section 2 verified the synchronization between the test error and the variance, but its application is limited because estimating the variance requires: 1) a test set, and, 2) models trained on different training sets drawn from the same data distribution. It’d be desirable to capture the test behavior of a DNN using a single training set only, without a test set.
According to the definition in (1), the variance measures the model diversity caused by different training samples drawn from the same distribution, i.e., the outputs of DNN change according to the sampled training set. As the gradients are usually the only information transferred from training sets to models during the optimization of DNN, we need to measure the variance of a DNN introduced by the gradients calculated from different training batches. More specifically, we’d like to develop a metric to reflect the function robustness of DNNs to sampling noises. If the function captured by a DNN drastically varies w.r.t. different training batches, its generalization error is very likely to be poor due to a large variance introduced by the optimization procedure. A similar metric is the sharpness of local minima proposed by Keskar et al. (2017), which measures the robustness of local minima as an indicator of the generalization error. However, this metric is only meaningful for local minima and hence cannot be applied in the entire optimization process.
Mathematically, for a sample (x, t) ∼ D, let f(x;θ) be the logit output of a DNN with parameter θ. Let TB ∼ Dm be a training batch with m samples, g : TB → R|θ| the optimizer outputting the update of θ based on TB . Then, we can get the function distribution Fx(TB) over a training batch TB , i.e., f(x;θ + g(TB)) ∼ Fx(TB). The variance of Fx(TB) reflects the model diversity caused by different training batches. The formal definition of OV is given below. Definition 1 (Optimization Variance (OV)) Given an input x and model parameters θq at the q-th training epoch, the OV on x at the q-th epoch is defined as
OV_q(x) := E_{T_B}[ ‖f(x; θ_q + g(T_B)) − E_{T_B} f(x; θ_q + g(T_B))‖_2^2 ] / E_{T_B}[ ‖f(x; θ_q + g(T_B))‖_2^2 ]. (6)
Note that OV_q(x) measures the relative variance, because the denominator in (6) eliminates the influence of the logits' norm. In this way, OV_q(x) at different training phases can be compared. The motivation here comes from the definition of the coefficient of variation5 (CV) in probability theory and statistics, which is also known as the relative standard deviation. CV is defined as the ratio between the standard deviation and the mean, and is independent of the unit in which the measurement is taken. Therefore, CV enables comparing the relative diversity between two different measurements.
In terms of the OV, the variance of the logits, i.e., the numerator of the OV, is not comparable across epochs due to the influence of their norm. In fact, even if the variance of the logits remains the same during the whole optimization process, its influence on the decision boundary is limited when the logits are large. Consequently, by treating the norm of the logits as the measurement unit, following CV we set the OV to ∑_i σ_i² / ∑_i µ_i², where σ_i and µ_i represent the standard deviation and the mean of the i-th logit, respectively. If we removed the denominator, the OV would no longer be able to indicate the generalization error, especially at the early stage of the optimization process.
5https://en.wikipedia.org/wiki/Coefficient_of_variation
Intuitively, the OV represents the inconsistency of gradients’ influence on the model. If OVq(x) is very large, then the models trained with different sampled TB may have distinct outputs for the same input, leading to high model diversity and hence large variance. Note that here we emphasize the inconsistency of model updates rather than the gradients themselves. The latter can be measured by the gradient variance. The gradient variance and the OV are different, because sometimes diverse gradients may lead to similar changes of the function represented by DNN, and hence small OV. More on the relationship between the two variances can be found in Appendix B.
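A minimal PyTorch sketch of estimating E_x[OV_q(x)] from equation 6 is shown below. It is an illustration under assumptions: the plain SGD step stands in for the optimizer update g(T_B), and the function and variable names (`optimization_variance`, `probe_x`, `train_batches`) are ours.

```python
import copy
import torch

def optimization_variance(model, loss_fn, probe_x, train_batches, lr=0.001):
    """Estimate E_x[OV_q(x)] at the current parameters theta_q (equation 6).

    For each training batch, apply one hypothetical SGD step to a copy of the model,
    record the logits on the probe inputs, then compute the relative variance.
    """
    logits = []
    for xb, yb in train_batches:
        m = copy.deepcopy(model)                 # keep theta_q intact
        loss = loss_fn(m(xb), yb)
        m.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in m.parameters():
                if p.grad is not None:
                    p -= lr * p.grad             # theta_q + g(T_B) for this batch
            logits.append(m(probe_x))            # f(x; theta_q + g(T_B)) on probe samples
    z = torch.stack(logits)                      # (B, M, C): batches x probe samples x classes
    num = ((z - z.mean(dim=0, keepdim=True)) ** 2).sum(dim=2).mean(dim=0)  # variance per x
    den = (z ** 2).sum(dim=2).mean(dim=0)        # expected squared logit norm per x
    return (num / den).mean().item()             # average OV_q(x) over the probe samples
```

As noted above, a small number of training batches (around 10) already gives a stable estimate, so `train_batches` does not need to cover the whole training set.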
3.2 EXPERIMENTAL RESULTS
We calculated the expectation of the OV over x, i.e., Ex[OVq(x)], which was estimated from 1,000 random training samples. The test set was not involved at all.
Figure 2 shows how the test accuracy (solid curves) and E_x[OV_q(x)] (dashed curves) change with the number of training epochs. Though sometimes the OV may not exhibit clear epoch-wise double descent, e.g., VGG16 in Figure 2(c), the symmetry between the solid and dashed curves generally exists, suggesting that the OV, which is calculated from the training set only, is capable of predicting the variation of the test accuracy. Similar results can also be observed using different optimizers and learning rates (see Appendix I).
Note that epoch-wise double descent is not a necessary condition for applying the OV. As shown in Figure 2 (blue curves), we compared the values of the OV and the generalization errors of DNNs with 0% label noise; the generalization error curves show no epoch-wise double descent, yet the proposed OV still works well.
OVq(x) in Figure 2 was estimated on all training batches; however, this may not be necessary: a small number of training batches are usually enough. To demonstrate this, we trained ResNet and VGG on several datasets using Adam optimizer with learning rate 0.0001, and estimated OVq(x) from different number of training batches. The results in Figure 3 show that we can well estimate the OV using as few as 10 training batches.
Another intriguing finding is that even unstable variations of the test accuracy can be reflected by the OV. This correspondence is clearer on simpler datasets, e.g., MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017). Figure 4 shows the test accuracy and OV for LeNet-5 (LeCun et al., 1998) trained on MNIST and FashionMNIST without label noise. Spikes of the OV and the test accuracy happen simultaneously at the same epoch.
Our experimental results demonstrate that the generalization ability of a DNN can be indicated by the OV during stochastic training, without using a validation set. This phenomenon can be used to determine the early stopping point, and beyond.
3.3 EARLY STOPPING WITHOUT A VALIDATION SET
The common process to train a DNN involves three steps: 1) partition the dataset into a training set and a validation set; 2) use the training set to optimize the DNN parameters, and the validation set to determine when to stop training, i.e., early stopping, and record the early stopping point; 3) train the DNN on the entire dataset (combination of training and validation sets) for the same number of epochs. However, there is no guarantee that the early stopping point on the training set is the same as the one on the entire dataset. So, an interesting question is: is it possible to directly perform early stopping on the entire dataset, without a validation set?
The OV can be used for this purpose. For more robust performance, instead of using the OV directly, we may need to smooth it to alleviate random fluctuations.
As an example, we smoothed the OV by a moving average filter of 10 epochs, and then performed early stopping on the smoothed OV with a patience of 10 epochs. As a reference, early stopping with the same patience was also performed directly on the test accuracy to get the groundtruth. However, it should be noted that the latter is unknown in real-world applications. It is provided for verification purpose only.
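A minimal sketch of this smoothing-plus-patience rule is given below; the helper name and the exact mapping from smoothed indices back to epochs are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption: `ov_history` is the per-epoch list of E_x[OV_q(x)] values).
# The OV is smoothed with a 10-epoch moving average, then a patience-based rule picks
# the early stopping epoch, mirroring the procedure described above.
import numpy as np

def early_stop_epoch(ov_history, window=10, patience=10):
    ov = np.asarray(ov_history, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(ov, kernel, mode="valid")    # moving-average filter
    best, best_epoch = np.inf, 0
    for epoch, value in enumerate(smoothed):
        if value < best:
            best, best_epoch = value, epoch
        elif epoch - best_epoch >= patience:            # no improvement for `patience` epochs
            break
    return best_epoch + window - 1                      # map back to the original epoch index
```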
We trained different DNN models on several datasets (SVHN: VGG11 and ResNet18; CIFAR10: VGG13 and ResNet18; CIFAR100: VGG16 and ResNet34) with different levels of label noise (10% and 20%) and optimizers (Adam with learning rate 0.001 and 0.0001, SGD with momentum 0.9 and learning rate 0.01 and 0.001). Then, we compared the groundtruth early stopping point and the test accuracy with those found by performing early stopping on the OV6. The results are shown in Figure 5. The true early stopping points and those found from the OV curve were generally close, though there were some exceptions, e.g., the point near (40, 100) in Figure 5(a). However, the test errors, which are what a model designer really cares about, were always close.
3.4 SMALL SIZE OF THE TRAINING SET
For large datasets, a validation set can easily be partitioned from the training set without hurting the generalization performance. Therefore, the OV is more useful for small datasets. To this end, we performed experiments with small numbers (2000, 4000, 6000) of training samples from CIFAR10 to verify the effectiveness of the OV in this situation. Considering the limited number of training samples, we trained a small Convolutional Neural Network (CNN) using the Adam optimizer with learning rate 0.0001; its detailed architecture can be found in Appendix H.
The experimental results are shown in Figure 6. It can be observed that: a) when the size of the training set is small, the OV still correlates well with the generalization performance as a function of the training epochs, which verifies the validity of our results on small datasets; b) as expected, more training samples usually lead to better generalization performance, which can also be reflected by comparing the values of the OV.
6Training VGG11 on SVHN with Adam optimizer and learning rate 0.001 was unstable (see Appendix D), so we did not include its results in Figure 5.
3.5 NETWORK SIZE
In addition to indicating the early stopping point, the OV can also explain some other generalization behaviors, such as the influence of the network size. To verify that, we trained ResNet18 with different network sizes on CIFAR10 for 100 epochs with no label noise, using Adam optimizer with learning rate 0.0001. For each convolutional layer, we set the number of filters k/4 (k = 1, 2, ..., 8) times the number of filters in the original model. We then examined the OV of ResNet18 with different network sizes to validate its correlation with the test accuracy. Note that we used SGD optimizer with learning rate 0.001 and no momentum to calculate the OV, so that the cumulative influence during training can be removed to make the comparison more fair.
The results are shown in Figure 7. As k increases, the OV gradually decreases, i.e., the diversity of model updates introduced by different training batches decreases when widening ResNet18, suggesting that increasing the network size can improve the model’s resilience to sampling noise, which leads to better generalization performance. The Pearson correlation coefficient between the OV and the test accuracy reached −0.94 (p = 0.0006). Lastly, we need to point out that we did not observe a strong cross-model correlation between the OV and the test accuracy when comparing the generalization ability of significantly different model architectures, e.g., VGG
and ResNet. Our future research will look for a more universal cross-model metric to illustrate the generalization performance.
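As a simple illustration of the kind of correlation check reported above, a cross-width comparison could be computed as follows (the numbers are placeholders, not the paper's data):

```python
# Minimal sketch (assumption: `ov_values` and `test_acc` hold one entry per width factor k).
from scipy.stats import pearsonr

ov_values = [0.012, 0.010, 0.009, 0.008, 0.007, 0.006, 0.006, 0.005]   # placeholder numbers
test_acc  = [0.86, 0.88, 0.89, 0.90, 0.91, 0.91, 0.92, 0.92]           # placeholder numbers
r, p = pearsonr(ov_values, test_acc)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")   # a strongly negative r is expected
```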
4 CONCLUSIONS
This paper has shown that the variance dominates the epoch-wise double descent, and highly correlates with the test error. Inspired by this finding, we proposed a novel metric called optimization variance, which is calculated from the training set only but powerful enough to predict how the test error changes during training. Based on this metric, we further proposed an approach to perform early stopping without any validation set. Remarkably, we demonstrated that the training set itself may be enough to predict the generalization ability of a DNN, without a dedicated validation set.
Our future work will: 1) apply the OV to other tasks, such as regression problems, unsupervised learning, and so on; 2) figure out the cause of the second descent of the OV; and, 3) design regularization approaches to penalize the OV for better generalization performance.
A BIAS-VARIANCE DECOMPOSITION FOR DIFFERENT LOSS FUNCTIONS
This section presents the detailed derivations of the bias-variance decomposition for different loss functions.
A.1 THE MEAN SQUARED ERROR (MSE) LOSS
For the MSE loss, we have $L(t,y) = \|t - y\|_2^2$, and need to calculate $\bar{y}$ based on (2). We first ignore the constraints and solve the following problem:
$$\tilde{y} = \arg\min_{y^*} E_{\mathcal{T}}\left[\|y^* - y\|_2^2\right], \qquad (7)$$
whose solution is $\tilde{y} = E_{\mathcal{T}}\, y$. It can be easily verified that $\tilde{y}$ satisfies the constraints in (2), and hence $\bar{y} = \tilde{y} = E_{\mathcal{T}}\, y$.
Then, we can decompose the MSE loss as:
$$E_{\mathcal{T}}\left[\|t - y\|_2^2\right] = E_{\mathcal{T}}\left[\|t - \bar{y} + \bar{y} - y\|_2^2\right] = E_{\mathcal{T}}\left[\|t - \bar{y}\|_2^2 + \|\bar{y} - y\|_2^2 + 2(t - \bar{y})^T(\bar{y} - y)\right] = \|t - \bar{y}\|_2^2 + E_{\mathcal{T}}\left[\|\bar{y} - y\|_2^2\right] + 0, \qquad (8)$$
where the first term denotes the bias, and the second denotes the variance. We can also see that $\beta = 1$.
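A quick numerical check of this identity (a minimal sketch assuming K outputs stacked in `ys`; the values are placeholders):

```python
# Numerically checks Eq. (8): expected MSE loss = bias + variance, i.e. beta = 1.
import numpy as np

ys = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]])   # placeholder outputs
t = np.array([1.0, 0.0, 0.0])

y_bar = ys.mean(axis=0)                              # Eq. (7): the expected output
expected_loss = np.mean(np.sum((t - ys) ** 2, axis=1))
bias = np.sum((t - y_bar) ** 2)
variance = np.mean(np.sum((y_bar - ys) ** 2, axis=1))
assert np.isclose(expected_loss, bias + variance)    # decomposition holds with beta = 1
```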
A.2 CROSS-ENTROPY (CE) LOSS

For $L(t,y) = \sum_{k=1}^{c} t_k \log\frac{t_k}{y_k}$, $\bar{y}$ can be obtained by applying the Lagrange multiplier method (Boyd & Vandenberghe, 2004) to (2):
$$l(y^*, \lambda) = E_{\mathcal{T}}\left[\sum_{k=1}^{c} y_k^* \log\frac{y_k^*}{y_k}\right] + \lambda\left(1 - \sum_{k=1}^{c} y_k^*\right). \qquad (9)$$
To minimize $l(y^*, \lambda)$, we compute its partial derivatives w.r.t. $y^*$ and $\lambda$:
$$\frac{\partial l}{\partial y_k^*} = E_{\mathcal{T}}\left[\log\frac{y_k^*}{y_k} + 1\right] - \lambda, \quad k = 1, 2, \ldots, c, \qquad \frac{\partial l}{\partial \lambda} = 1 - \sum_{k=1}^{c} y_k^*.$$
Setting these derivatives to 0, we get
$$\bar{y}_k = \frac{1}{Z}\exp\{E_{\mathcal{T}}[\log y_k]\}, \quad k = 1, 2, \ldots, c, \qquad (10)$$
where $Z = \sum_{k=1}^{c}\exp\{E_{\mathcal{T}}[\log y_k]\}$ is a normalization constant independent of $k$.
Because
$$E_{\mathcal{T}}\left[\sum_{k=1}^{c}\alpha_k\log\frac{\bar{y}_k}{y_k}\right] = -\log Z, \quad \forall\, \alpha_k \text{ with } \sum_{k=1}^{c}\alpha_k = 1, \qquad (11)$$
we have
$$E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\log\frac{t_k}{y_k}\right] = E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\left(\log\frac{t_k}{\bar{y}_k} + \log\frac{\bar{y}_k}{y_k}\right)\right] = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} + E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\log\frac{\bar{y}_k}{y_k}\right] = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} - \log Z = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} + E_{\mathcal{T}}\left[\sum_{k=1}^{c}\bar{y}_k\log\frac{\bar{y}_k}{y_k}\right], \qquad (12)$$
from which we obtain $\beta = 1$.
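The decomposition can likewise be checked numerically; the sketch below computes ȳ of Eq. (10) as the normalized geometric mean and verifies Eq. (12) with β = 1 (the outputs are placeholders):

```python
# Minimal sketch (assumption: `ys` is an array of predicted distributions of shape (K, c)
# from K training sets).
import numpy as np

ys = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]])   # placeholder outputs
t = np.array([1.0, 0.0, 0.0])

log_mean = np.mean(np.log(ys), axis=0)
y_bar = np.exp(log_mean) / np.exp(log_mean).sum()                    # Eq. (10)

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

expected_loss = np.mean([kl(t, y) for y in ys])
bias = kl(t, y_bar)
variance = np.mean([kl(y_bar, y) for y in ys])
assert np.isclose(expected_loss, bias + variance)                    # beta = 1
```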
A.3 THE ZERO-ONE (ZO) LOSS
For the ZO loss, i.e., L(t,y) = 1con{H(t) 6= H(y)}, ȳ is the voting result, i.e., H(ET [H(y)]), so that the variance can be minimized. However, the value of β depends on the relationship between ȳ and t.
When ȳ = t, we have:
ET [1con{H(t) 6= H(y)}] = 0 + ET [1con{H(ȳ) 6= H(y)}] = 1con{H(t) 6= H(ȳ)}+ ET [1con{H(ȳ) 6= H(y)}] , (13)
clearly, β = 1.
When ȳ 6= t, we have:
ET [1con{H(t) 6= H(y)}] = PT (H(y) 6= t) = 1− PT (H(y) = t) = 1con{H(t) 6= H(ȳ)} − PT ( H(y) = t
∣∣H(y) = ȳ)PT (H(y) = ȳ) − PT ( H(y) = t
∣∣H(y) 6= ȳ)PT (H(y) 6= ȳ) . (14) Since ȳ 6= H(t), it follows that
PT ( H(y) = t ∣∣H(y) = ȳ) = 0. (15) Then, (14) becomes:
ET [1con{H(t) 6= H(y)}] = 1con{H(t) 6= H(ȳ)} − PT ( H(y) = t ∣∣H(y) 6= ȳ)PT (H(y) 6= ȳ) = 1con{H(t) 6= H(ȳ)} − PT ( H(y) = t
∣∣H(y) 6= ȳ)ET [1con{H(ȳ) 6= H(y)}] , (16) hence, β = −PT ( H(y) = t
∣∣H(y) 6= ȳ).
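For the ZO loss, the voting-based bias and variance can be computed as in the sketch below (the predictions are placeholders):

```python
# Minimal sketch (assumption: `preds` holds the predicted class labels of K models for one
# input, and `label` is its true class). Computes the ZO-loss bias and variance via
# majority voting, as used for B(q) and V(q) in the main text.
import numpy as np

preds = np.array([2, 2, 0, 2, 1])        # placeholder predictions of K = 5 models
label = 2

vote = np.bincount(preds).argmax()       # y_bar: the voting result
bias = float(vote != label)              # ZO bias term for this sample
variance = np.mean(preds != vote)        # ZO variance term for this sample
expected_loss = np.mean(preds != label)
# when vote == label, expected_loss == bias + variance (beta = 1)
```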
B CONNECTIONS BETWEEN THE OPTIMIZATION VARIANCE AND THE GRADIENT VARIANCE

This section shows the connection between the gradient variance and the optimization variance in Definition 1.

For simplicity, we ignore $q$ in $\mathrm{OV}_q(x)$ and denote $g(\mathcal{T}_B) - E_{\mathcal{T}_B} g(\mathcal{T}_B)$ by $\tilde{g}(\mathcal{T}_B)$. Then, the gradient variance $V_g$ can be written as:
$$V_g = E_{\mathcal{T}_B}\left[\|g(\mathcal{T}_B) - E_{\mathcal{T}_B} g(\mathcal{T}_B)\|_2^2\right] = E_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T \tilde{g}(\mathcal{T}_B)\right]. \qquad (17)$$
Denote the Jacobian matrix of the logits $f(x;\theta)$ w.r.t. $\theta$ by $J_{\theta}(x)$, i.e.,
$$J_{\theta}(x) = \left[\nabla_{\theta} f_1(x;\theta), \nabla_{\theta} f_2(x;\theta), \ldots, \nabla_{\theta} f_c(x;\theta)\right], \qquad (18)$$
where $f_j(x;\theta)$ is the $j$-th entry of $f(x;\theta)$, and $c$ is the number of classes.

Using a first-order approximation, we have:
$$f(x;\theta + g(\mathcal{T}_B)) \approx f(x;\theta) + J_{\theta}(x)^T g(\mathcal{T}_B), \qquad (19)$$
and $\mathrm{OV}(x)$ can be written as:
$$\mathrm{OV}(x) \approx \frac{E_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T J_{\theta}(x) J_{\theta}(x)^T \tilde{g}(\mathcal{T}_B)\right]}{f(x;\theta)^T f(x;\theta)} + E_{\mathcal{T}_B}\left[O\big(\|g(\mathcal{T}_B)\|^2\big)\right] \approx \frac{E_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T J_{\theta}(x) J_{\theta}(x)^T \tilde{g}(\mathcal{T}_B)\right]}{f(x;\theta)^T f(x;\theta)}. \qquad (20)$$
The only difference between $E_{x}[\mathrm{OV}(x)]$ and $V_g$ is the middle weight matrix $E_{x}\left[\frac{J_{\theta}(x) J_{\theta}(x)^T}{f(x;\theta)^T f(x;\theta)}\right]$.
This suggests that penalizing the gradient variance can also reduce the optimization variance.
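A sketch of how this first-order approximation could be evaluated is given below; it assumes a small PyTorch model, a single probe input, and a list of flattened parameter updates, all hypothetical:

```python
# Minimal sketch (assumptions: a small PyTorch `model`, a probe input `x` of shape (1, d),
# and `updates`, a list of flattened parameter-update vectors g(T_B) from different batches).
# Evaluates the first-order approximation of OV(x) in Eq. (20).
import torch

def ov_first_order(model, x, updates):
    logits = model(x).squeeze(0)                      # f(x; theta), shape (c,)
    rows = []
    for k in range(logits.numel()):                   # build J^T, one row per logit
        grads = torch.autograd.grad(logits[k], list(model.parameters()), retain_graph=True)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    JT = torch.stack(rows)                            # (c, |theta|)
    g = torch.stack(updates)                          # (num_batches, |theta|)
    g_tilde = g - g.mean(dim=0, keepdim=True)
    quad = (g_tilde @ JT.t()).pow(2).sum(dim=1).mean()   # E[ g~^T J J^T g~ ]
    return (quad / logits.detach().pow(2).sum()).item()
```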
Figure 8 presents the curves of $V_g$ during training. It can be observed that $V_g$ also shows some ability to indicate the generalization performance. However, compared with the results in Figure 2, the OV demonstrates a stronger ability to indicate the generalization error than $V_g$. More importantly, $V_g$ loses its comparability when the network size increases, whereas the OV remains reliable under architectural changes because the middle weight matrix $E_{x}\left[\frac{J_{\theta}(x) J_{\theta}(x)^T}{f(x;\theta)^T f(x;\theta)}\right]$ normalizes $V_g$, as illustrated in Figure 7.

We also notice that $\|E_{\mathcal{T}_B} g(\mathcal{T}_B)\|_2^2$ is usually far less than $E_{\mathcal{T}_B}\|g(\mathcal{T}_B)\|_2^2$; hence, $V_g$ and the gradient norm $E_{\mathcal{T}_B}\|g(\mathcal{T}_B)\|_2^2$ present almost the same curves during training.
[Figure 8 panels: VGG11, VGG13, VGG16.]
C BEHAVIORS OF DIFFERENT LOSS FUNCTIONS
Many different loss functions can be used to evaluate the test performance of a model. They may have very different behaviors w.r.t. the training epochs. As shown in Figure 9, the epoch-wise double descent can be very conspicuous on test error, i.e., the ZO loss, but barely observable on CE and MSE losses, which increase after the early stopping point. This is because at the late stage of training, model outputs approach 0 or 1, resulting in the increase of the CE and MSE losses on the misclassified test samples, though the decision boundary may be barely changed. When rescaling the weights of the last layer by a positive real number, the ZO loss remains the same because of the untouched decision boundary, whereas the CE and MSE losses are changed. Thus, we perform bias-variance decomposition on the ZO loss to study epoch-wise double descent.
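The scale-invariance argument can be illustrated directly (a minimal sketch; the logits are placeholders):

```python
# Rescaling the logits (equivalent to rescaling the last layer's weights by a positive
# constant) leaves the ZO loss unchanged but alters the CE loss, as argued above.
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, -1.0, 0.5]])
label = torch.tensor([0])

for scale in (1.0, 5.0):
    scaled = scale * logits
    zo = float(scaled.argmax(dim=1) != label)            # decision boundary untouched
    ce = F.cross_entropy(scaled, label).item()           # changes with the scale
    print(f"scale={scale}: ZO={zo}, CE={ce:.4f}")
```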
D VGG11 ON SVHN BY ADAM OPTIMIZER WITH LEARNING RATE 0.001
Training VGG11 on SVHN by Adam optimizer with learning rate 0.001 is unstable, as shown in Figure 14(a). Figure 10 shows the test error and optimization variance. For 0% and 10% label noise, the test error stays large (the test accuracy is low) for a long period in the early phase of training. The optimization variance is also abnormal.
E LOSS, BIAS AND VARIANCE W.R.T. DIFFERENT LEVELS OF LABEL NOISE
Label noise makes epoch-wise double descent more conspicuous to observe (Nakkiran et al., 2020). If the variance is the major cause of double descent, it should match the variation of the test error
when adding different levels of label noise. Figure 11 shows an example to compare the loss, variance, and bias w.r.t. different levels of label noise. Though label noise impacts both the bias and the variance, the latter appears to be more sensitive and shows better synchronization with the loss. For instance, when we randomly shuffle a small percentage of labels, say 10%, a valley clearly occurs between 20 and 50 epochs for the variance, whereas it is less obvious for the bias. In addition, it seems that the level of label noise does not affect the epoch at which the loss reaches its first minimum. This is surprising, because label noise is considered to be highly related to the complexity of the dataset. Our future work will explore the role label noise plays in the generalization of DNNs.
[Figure 11 legend: Loss, Var, Bias.]
F BIAS AND VARIANCE TERMS W.R.T. DIFFERENT LOSS FUNCTIONS
G BIAS AND VARIANCE TERMS W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES
H DETAILED INFORMATION OF THE SMALL CNN MODEL
We present the detailed information of the architecture trained with small numbers of training samples. It consists of two convolutional layers and two fully-connected layers, as shown in Table 2.
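A minimal sketch of such an architecture is given below; the exact channel and unit counts of Table 2 are not reproduced here, so the sizes are assumptions.

```python
# Minimal sketch of a small CNN with two convolutional and two fully-connected layers,
# matching the high-level description above (layer sizes are assumptions, not Table 2).
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),      # assumes 32x32 CIFAR10 inputs
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```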
I OPTIMIZATION VARIANCE AND TEST ACCURACY W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES | 1. What is the focus of the paper regarding trajectory studies?
2. What are the strengths and weaknesses of the proposed optimization variance?
3. How does the paper relate to prior works, particularly Yang et al.'s study on bias-variance trade-off?
4. What are the reviewer's main concerns regarding the paper's contributions and novelty?
5. Can the authors provide theoretical justifications or empirical evidence to support their claims on optimization variance? | Review | Review
Summary:
The paper studies the trajectory of the test error as a function of training time, focusing on Epoch-Wise Double Descent. Similar to "Rethinking Bias-Variance Trade-off for Generalization of Neural Networks" by Yang et al., the paper shows that if one decomposes the test error into bias and variance terms, Double Descent occurs as a function of training time as a result of the unimodality of the variance term (while the bias term decreases monotonically). The paper also introduces a quantity they name optimization variance (OV) that correlates with the test error (while being only a function of the train set) and can be useful for early stopping.
=====================================================
Main Concerns and improvement points: Regarding the variance as the main culprit behind Double Descent: While it is interesting to see that the analysis of Yang et al. carries over to epoch-wise double descent, by itself this experiment is not too surprising. Citing the discussion section: "Contradicting to the traditional view that the variance keeps increasing because of overfitting, our experimental results show a more complex behavior: the variance starts high and then decreases rapid[ly]". This is exactly the claim made in Yang et al. for model-wise double descent, which makes the novelty of the empirical result more limited. As the authors mention (and promise to study in the future), understanding why the variance diminishes with time after the interpolation threshold is an important question that could substantially strengthen the paper.
Regarding Optimization Variance: While it is cool that the optimization variance is not too hard to compute and correlates well with the test set (for a fixed architecture), I'm not sure what the fundamental contribution is here. Just defining a quantity and showing some properties it has, without motivation for why and when we should expect this quantity to arise, is not a persuasive result. A couple of specific concerns for me are:
Correlation does not mean much by itself. For example, the train accuracy is very correlated with the test accuracy. There are other empirical quantities that correlate with the test (and provably so at times, see for example https://arxiv.org/pdf/1912.00528.pdf).
Usually, training for long enough is sufficient to have high correlation with the test. For example, in Fig. 5 we could ask where a model trained for a very long time (say 100 epochs) would lie; my bet is it would be right there on the line of best test error.
If the argument here is early stopping I can just use the test set.
The quantity does not correlate through different model architectures.
To alleviate my concerns I would love to see some theoretical justification for why this is an important quantity and when it arises naturally. Alternatively, if we could predict the test error across different model architectures, that would give more motivation as to why we might want to look at this quantity more deeply.
To summarize, the first part of the paper is lacking in novelty, and in the second part the OV is not deeply enough motivated. I do believe that the paper is on the right track and with some work should be publishable at a future conference.
Minor Comments: It might be a personal preference, but I find the notation confusing and in disconnect with the standard notation in the field. For example, using t as labels and y as predictions is highly non-standard and would confuse many readers. (why not y for label, \hat y or p for prediction, t for training time?) "The expected loss should be small to ensure good generalization performance" - this is a very informal sentence that I found confusing rather than helpful. |
ICLR | Title
Optimization Variance: Exploring Generalization Properties of DNNs
Abstract
Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent. Through bias-variance decomposition, recent studies revealed that the bell-shaped variance is the major cause of model-wise double descent (when the DNN is widened gradually). This paper investigates epoch-wise double descent, i.e., the test error of a DNN also shows double descent as the number of training epochs increases. Specifically, we extend the bias-variance analysis to epoch-wise double descent, and reveal that the variance also contributes the most to the zero-one loss, as in model-wise double descent. Inspired by this result, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. OV can be estimated using samples from the training set only but correlates well with the (unknown) test error. It can be used to predict the generalization ability of a DNN when the zero-one loss is used at test time, and hence early stopping may be achieved without using a validation set.
1 INTRODUCTION
Deep Neural Networks (DNNs) usually have large model capacity, but also generalize well. This violates the conventional VC dimension (Vapnik, 1999) or Rademacher complexity theory (ShalevShwartz & Ben-David, 2014), inspiring new designs of network architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016; Zagoruyko & Komodakis, 2016) and reconsideration of their optimization and generalization (Zhang et al., 2017; Arpit et al., 2017; Wang et al., 2018; Kalimeris et al., 2019; Rahaman et al., 2019; Allen-Zhu et al., 2019).
Model-wise double descent, i.e., as a DNN’s model complexity increases, its test error first shows a classical U-shaped curve and then enters a second descent, has been observed on many machine learning models (Advani & Saxe, 2017; Belkin et al., 2019a; Geiger et al., 2019; Maddox et al., 2020; Nakkiran et al., 2020). Multiple studies provided theoretical evidence of this phenomenon in some tractable settings (Mitra, 2019; Hastie et al., 2019; Belkin et al., 2019b; Yang et al., 2020; Bartlett et al., 2020; Muthukumar et al., 2020). Specifically, Neal et al. (2018) and Yang et al. (2020) performed bias-variance decomposition for mean squared error (MSE) and the cross-entropy (CE) loss, and empirically revealed that the bell-shaped curve of the variance is the major cause of model-wise double descent. Maddox et al. (2020) proposed to measure the effective dimensionality of the parameter space, which can be further used to explain model-wise double descent.
Recently, a new double descent phenomenon, epoch-wise double descent, was observed when increasing the number of training epochs instead of the model complexity¹ (Nakkiran et al., 2020). Compared with model-wise double descent, epoch-wise double descent is relatively less explored. Heckel & Yilmaz (2020) showed that epoch-wise double descent occurs in the situation where different parts of DNNs are learned at different epochs, which can be eliminated by proper scaling of step sizes. Zhang & Wu (2020) discovered that the energy ratio of the high-frequency components of a DNN’s prediction landscape, which can reflect the model capacity, switches from increase to decrease at a certain training epoch, leading to the second descent of the test error. However, this metric fails to provide further information on generalization, such as the early stopping point, or how the size of a DNN influences its performance.

¹In practice, some label noise is often added to the training set to make the epoch-wise double descent more conspicuous.
This paper utilizes bias-variance decomposition of the zero-one (ZO) loss (CE loss is still used in training) to further investigate epoch-wise double descent. By monitoring the behaviors of the bias and the variance, we find that the variance plays an important role in epoch-wise double descent, which dominates and highly correlates with the variation of the test error.
Though the variance correlates well with the test error, estimating its value requires training models on multiple different training sets drawn from the same data distribution, whereas in practice usually only one training set is available2. Inspired by the fact that the source of variance comes from the random-sampled training sets, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. This metric can be estimated from a single model using samples drawn from the training set only. More importantly, it correlates well with the test error, and thus can be used to determine the early stopping point in DNN training, without using any validation set.
Some complexity measures have been proposed to illustrate the generalization ability of DNNs, such as sharpness (Keskar et al., 2017) and norm-based measures (Neyshabur et al., 2015). However, their values rely heavily on the model parameters, making comparisons across different models very difficult. Dinh et al. (2017) show that by re-parameterizing a DNN, one can alter the sharpness of its searched local minima without affecting the function it represents; Neyshabur et al. (2018) show that these measures cannot explain the generalization behaviors when the size of a DNN increases. Our proposed metric, which only requires the logit outputs of a DNN, is less dependent on model parameters, and hence can explain many generalization behaviors, e.g., the test error decreases as the network size increases. Chatterji et al. (2020) proposed a metric called Model Criticality that can explain the superior generalization performance of some architectures over others, yet it remains unexplored whether this metric can be used to indicate generalization throughout the entire training procedure, especially for relatively complex generalization behaviors such as epoch-wise double descent.
To summarize, our contributions are:
• We perform bias-variance decomposition on the test error to explore epoch-wise double descent. We show that the variance dominates the variation of the test classification error.
• We propose a novel metric, OV, which is calculated from the training set only and correlates well with the test classification error.
• Based on the OV, we propose an approach to search for the early stopping point without using a validation set, when the zero-one loss is used in test. Experiments verified its effectiveness.
The remainder of this paper is organized as follows: Section 2 introduces the details of tracing bias and variance over training epochs. Section 3 proposes the OV and demonstrates its ability to indicate the test behaviors. Section 4 draws conclusions and points out some future research directions.
2 BIAS AND VARIANCE IN EPOCH-WISE DOUBLE DESCENT
This section presents the details of tracing the bias and the variance during training. We show that the variance dominates the epoch-wise double descent of the test error.
2.1 A UNIFIED BIAS-VARIANCE DECOMPOSITION
Bias-variance decomposition is widely used to analyze the generalization properties of machine learning algorithms (Geman et al., 1992; Friedman et al., 2001). It was originally proposed for the MSE loss and later extended to other loss functions, e.g., the CE and ZO losses (Kong & Dietterich, 1995; Tibshirani, 1996; Kohavi et al., 1996; Heskes, 1998). Our study utilizes a unified bias-variance decomposition that was proposed by Domingos (2000) and is applicable to arbitrary loss functions.

²Assume the training set has n samples. We can partition it into multiple smaller training sets, each with m samples (m < n), and then train multiple models. However, the variance estimated from this case would be different from the one estimated from training sets with n samples. We can also bootstrap the original training set into multiple ones, each with n samples. However, the data distribution of each bootstrap replica is different from the original training set, and hence the estimated variance would also be different.
Let (x, t) be a sample drawn from the data distribution D, where x ∈ Rd denotes the d-dimensional input, and t ∈ Rc the one-hot encoding of the label in c classes. The training set T = {(xi, ti)}ni=1 ∼ Dn is utilized to train the model f : Rd → Rc. Let y = f(x; T ) ∈ Rc be the probability output of the model f trained on T , and L(t,y) the loss function. The expected loss ET [L(t,y)] should be small to ensure that the model both accurately captures the regularities in its training data, and also generalizes well to unseen data.
According to Domingos (2000), a unified bias-variance decomposition3 of ET [L(y, t)] is:
$$E_{\mathcal{T}}[L(t,y)] = \underbrace{L(t,\bar{y})}_{\text{Bias}} + \beta\,\underbrace{E_{\mathcal{T}}[L(\bar{y},y)]}_{\text{Variance}}, \qquad (1)$$
where β takes different values for different loss functions, and ȳ is the expected output:
$$\bar{y} = \arg\min_{y^*\in\mathbb{R}^c,\ \sum_{k=1}^{c} y_k^*=1,\ y_k^*\geq 0} E_{\mathcal{T}}[L(y^*,y)]. \qquad (2)$$
ȳ minimizes the variance term in (1), which can be regarded as the “center" or “ensemble" of y w.r.t. different T . Table 1 shows specific forms of L, ȳ, and β for different loss functions (the detailed derivations can be found in Appendix A). This paper focuses on the bias-variance decomposition of the ZO loss, because epoch-wise double descent of the test error is more obvious when the ZO loss is used (see Appendix C). To capture the overall bias and variance, we analyzed Ex,tET [L(t,y)], i.e., the expectation of ET [L(t,y)] over the distribution D.
2.2 TRACE THE BIAS AND VARIANCE TERMS OVER TRAINING EPOCHS
To trace the bias term Ex,t[L(t, ȳ)] and the variance term Ex,tET [L(ȳ,y)] w.r.t. the training epoch, we need to sample several training sets and train models on them respectively, so that the bias and variance terms can be estimated from them.
Concretely, let T ∗ denote the test set, f(x; Tj , q) the model f trained on Tj ∼ Dn (j = 1, 2, ...,K) for q epochs. Then, the estimated bias and variance terms at the q-th epoch, denoted as B(q) and V (q), respectively, can be written as:
$$B(q) = E_{(x,t)\in\mathcal{T}^*}\left[L\big(t, \bar{f}(x; q)\big)\right], \qquad (3)$$
$$V(q) = E_{(x,t)\in\mathcal{T}^*}\left[\frac{1}{K}\sum_{j=1}^{K} L\big(\bar{f}(x; q), f(x; \mathcal{T}_j, q)\big)\right], \qquad (4)$$
where
$$\bar{f}(x; q) = H\left(\sum_{j=1}^{K} H\big(f(x; \mathcal{T}_j, q)\big)\right) \qquad (5)$$
is the voting result of $\{f(x; \mathcal{T}_j, q)\}_{j=1}^{K}$.

³In real-world situations, the expected loss consists of three terms: bias, variance, and noise. Similar to Yang et al. (2020), we view t as the groundtruth and ignore the noise term.
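A minimal sketch of how B(q) and V(q) could be estimated from the predictions of the K models (array shapes and names are assumptions):

```python
# Minimal sketch (assumption: `all_preds` is an array of shape (K, N) with the predicted
# class labels of K models — trained on K sampled training sets — on N test samples, and
# `labels` holds the true classes). Estimates B(q) and V(q) of Eqs. (3)-(5) for the ZO loss.
import numpy as np

def zo_bias_variance(all_preds, labels):
    K, N = all_preds.shape
    # Eq. (5): per-sample majority vote across the K models
    votes = np.array([np.bincount(all_preds[:, i]).argmax() for i in range(N)])
    bias = np.mean(votes != labels)                       # Eq. (3)
    variance = np.mean(all_preds != votes[None, :])       # Eq. (4)
    return bias, variance
```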
We should emphasize that, in real-world situations, D cannot be obtained; hence, each Tj in our experiments was randomly sampled from the training set (we sampled 50% of the training data for each Tj). As a result, although this reveals the cause of epoch-wise double descent, the behaviors of the bias and the variance may be different when the whole training set is used.
We considered ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) models4 trained on SVHN (Netzer et al., 2011), CIFAR10 (Krizhevsky, 2009), and CIFAR100 (Krizhevsky, 2009). SGD and Adam (Kingma & Ba, 2014) optimizers with different learning rates were used. The batchsize was set to 128, and all models were trained for 250 epochs with data augmentation. Prior to sampling {Tj}Kj=1 (K = 5) from the training set, 20% labels of the training data were randomly shuffled to introduce epoch-wise double descent.
Figure 1 shows the expected ZO loss and its bias and variance. The bias descends rapidly at first and then generally converges to a low value, whereas the variance behaves almost exactly the same as the test error, mimicking even small fluctuations of the test error. To further verify this, we performed additional experiments with different optimizers, learning rates, and levels of label noise (see Appendices E and G). All experimental results demonstrated that it is mainly the variance that contributes to epoch-wise double descent.
2.3 DISCUSSION
Contradicting the traditional view that the variance keeps increasing because of overfitting, our experimental results show a more complex behavior: the variance starts high and then decreases rapidly, followed by a bell curve. The difference at the beginning (when the number of epochs is small) is mainly due to the choice of loss functions (see experimental results of bias-variance decomposition for MSE and CE losses in Appendix F). CE and MSE losses, analyzed in traditional learning theory, can reflect the degree of difference between probabilities, whereas the ZO loss reflects only the labels. At the early stage of training, the output probabilities are close to random guesses, and hence a small difference in probabilities may lead to completely different labels, resulting in distinct variance for different loss functions. However, the reason why the variance begins to diminish at the late phase of training is still unclear. We will explore this problem in our future research.

⁴Adapted from https://github.com/kuangliu/pytorch-cifar
3 OPTIMIZATION VARIANCE (OV)
This section proposes a new metric, OV, to measure the diversity of model updates introduced by random training batches during optimization. This metric can indicate test behaviors without any validation set.
3.1 NOTATION AND DEFINITION
Section 2 verified the synchronization between the test error and the variance, but its application is limited because estimating the variance requires: 1) a test set, and, 2) models trained on different training sets drawn from the same data distribution. It’d be desirable to capture the test behavior of a DNN using a single training set only, without a test set.
According to the definition in (1), the variance measures the model diversity caused by different training samples drawn from the same distribution, i.e., the outputs of DNN change according to the sampled training set. As the gradients are usually the only information transferred from training sets to models during the optimization of DNN, we need to measure the variance of a DNN introduced by the gradients calculated from different training batches. More specifically, we’d like to develop a metric to reflect the function robustness of DNNs to sampling noises. If the function captured by a DNN drastically varies w.r.t. different training batches, its generalization error is very likely to be poor due to a large variance introduced by the optimization procedure. A similar metric is the sharpness of local minima proposed by Keskar et al. (2017), which measures the robustness of local minima as an indicator of the generalization error. However, this metric is only meaningful for local minima and hence cannot be applied in the entire optimization process.
Mathematically, for a sample (x, t) ∼ D, let f(x;θ) be the logit output of a DNN with parameter θ. Let TB ∼ Dm be a training batch with m samples, g : TB → R|θ| the optimizer outputting the update of θ based on TB . Then, we can get the function distribution Fx(TB) over a training batch TB , i.e., f(x;θ + g(TB)) ∼ Fx(TB). The variance of Fx(TB) reflects the model diversity caused by different training batches. The formal definition of OV is given below. Definition 1 (Optimization Variance (OV)) Given an input x and model parameters θq at the q-th training epoch, the OV on x at the q-th epoch is defined as
$$\mathrm{OV}_q(x) \triangleq \frac{E_{\mathcal{T}_B}\left[\|f(x;\theta_q + g(\mathcal{T}_B)) - E_{\mathcal{T}_B} f(x;\theta_q + g(\mathcal{T}_B))\|_2^2\right]}{E_{\mathcal{T}_B}\left[\|f(x;\theta_q + g(\mathcal{T}_B))\|_2^2\right]}. \qquad (6)$$
Note that OVq(x) measures the relative variance, because the denominator in (6) eliminates the influence of the logits’ norm. In this way, OVq(x) at different training phases can be compared. The motivation here comes from the definition of the coefficient of variation⁵ (CV) in probability theory and statistics, which is also known as the relative standard deviation. CV is defined as the ratio between the standard deviation and the mean, and is independent of the unit in which the measurement is taken. Therefore, CV enables comparing the relative diversity between two different measurements.
In terms of the OV, the variance of the logits, i.e., the numerator of the OV, is not comparable across epochs because it is influenced by their norm. In fact, even if the variance of the logits stays the same during the whole optimization process, its influence on the decision boundary is limited when the logits are large. Consequently, treating the norm of the logits as the measurement unit, we follow the CV and set the OV to $\sum_i \sigma_i^2 / \sum_i \mu_i^2$, where $\sigma_i$ and $\mu_i$ represent the standard deviation and the mean of the $i$-th logit, respectively. If we removed the denominator, the OV would no longer be indicative of the generalization error, especially at the early stage of the optimization process.
5https://en.wikipedia.org/wiki/Coefficient_of_variation
Intuitively, the OV represents the inconsistency of gradients’ influence on the model. If OVq(x) is very large, then the models trained with different sampled TB may have distinct outputs for the same input, leading to high model diversity and hence large variance. Note that here we emphasize the inconsistency of model updates rather than the gradients themselves. The latter can be measured by the gradient variance. The gradient variance and the OV are different, because sometimes diverse gradients may lead to similar changes of the function represented by DNN, and hence small OV. More on the relationship between the two variances can be found in Appendix B.
3.2 EXPERIMENTAL RESULTS
We calculated the expectation of the OV over x, i.e., Ex[OVq(x)], which was estimated from 1,000 random training samples. The test set was not involved at all.
Figure 2 shows how the test accuracy (solid curves) and Ex[OVq(x)] (dashed curves) change with the number of training epochs. Though the OV may sometimes not exhibit a clear epoch-wise double descent, e.g., VGG16 in Figure 2(c), the symmetry between the solid and dashed curves generally exists, suggesting that the OV, which is calculated from the training set only, is capable of predicting the variation of the test accuracy. Similar results can also be observed with different optimizers and learning rates (see Appendix I).
Note that epoch-wise double descent is not a necessary condition for applying the OV. As shown in Figure 2 (blue curves), we also compared the values of the OV and the generalization errors of DNNs trained with 0% label noise; the generalization error curves show no epoch-wise double descent, yet the proposed OV still works well.
OVq(x) in Figure 2 was estimated on all training batches; however, this may not be necessary: a small number of training batches is usually enough. To demonstrate this, we trained ResNet and VGG on several datasets using the Adam optimizer with learning rate 0.0001, and estimated OVq(x) from different numbers of training batches. The results in Figure 3 show that the OV can be well estimated using as few as 10 training batches.
Another intriguing finding is that even unstable variations of the test accuracy can be reflected by the OV. This correspondence is clearer on simpler datasets, e.g., MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017). Figure 4 shows the test accuracy and OV for LeNet-5 (LeCun et al., 1998) trained on MNIST and FashionMNIST without label noise. Spikes of the OV and the test accuracy happen simultaneously at the same epoch.
Our experimental results demonstrate that the generalization ability of a DNN can be indicated by the OV during stochastic training, without using a validation set. This phenomenon can be used to determine the early stopping point, and beyond.
3.3 EARLY STOPPING WITHOUT A VALIDATION SET
The common process to train a DNN involves three steps: 1) partition the dataset into a training set and a validation set; 2) use the training set to optimize the DNN parameters, and the validation set to determine when to stop training, i.e., early stopping, and record the early stopping point; 3) train the DNN on the entire dataset (combination of training and validation sets) for the same number of epochs. However, there is no guarantee that the early stopping point on the training set is the same as the one on the entire dataset. So, an interesting question is: is it possible to directly perform early stopping on the entire dataset, without a validation set?
The OV can be used for this purpose. For more robust performance, instead of using the OV directly, we may need to smooth it to alleviate random fluctuations.
As an example, we smoothed the OV by a moving average filter of 10 epochs, and then performed early stopping on the smoothed OV with a patience of 10 epochs. As a reference, early stopping with the same patience was also performed directly on the test accuracy to get the groundtruth. However, it should be noted that the latter is unknown in real-world applications. It is provided for verification purpose only.
We trained different DNN models on several datasets (SVHN: VGG11 and ResNet18; CIFAR10: VGG13 and ResNet18; CIFAR100: VGG16 and ResNet34) with different levels of label noise (10% and 20%) and optimizers (Adam with learning rate 0.001 and 0.0001, SGD with momentum 0.9 and learning rate 0.01 and 0.001). Then, we compared the groundtruth early stopping point and the test accuracy with those found by performing early stopping on the OV6. The results are shown in Figure 5. The true early stopping points and those found from the OV curve were generally close, though there were some exceptions, e.g., the point near (40, 100) in Figure 5(a). However, the test errors, which are what a model designer really cares about, were always close.
3.4 SMALL SIZE OF THE TRAINING SET
For large datasets, a validation set can easily be partitioned from the training set without hurting the generalization performance. Therefore, the OV is more useful for small datasets. To this end, we performed experiments with small numbers (2000, 4000, 6000) of training samples from CIFAR10 to verify the effectiveness of the OV in this situation. Considering the limited number of training samples, we trained a small Convolutional Neural Network (CNN) using the Adam optimizer with learning rate 0.0001; its detailed architecture can be found in Appendix H.
The experimental results are shown in Figure 6. It can be observed that: a) when the size of the training set is small, the OV still correlates well with the generalization performance as a function of the training epochs, which verifies the validity of our results on small datasets; b) as expected, more training samples usually lead to better generalization performance, which can also be reflected by comparing the values of the OV.
6Training VGG11 on SVHN with Adam optimizer and learning rate 0.001 was unstable (see Appendix D), so we did not include its results in Figure 5.
3.5 NETWORK SIZE
In addition to indicating the early stopping point, the OV can also explain some other generalization behaviors, such as the influence of the network size. To verify that, we trained ResNet18 with different network sizes on CIFAR10 for 100 epochs with no label noise, using Adam optimizer with learning rate 0.0001. For each convolutional layer, we set the number of filters k/4 (k = 1, 2, ..., 8) times the number of filters in the original model. We then examined the OV of ResNet18 with different network sizes to validate its correlation with the test accuracy. Note that we used SGD optimizer with learning rate 0.001 and no momentum to calculate the OV, so that the cumulative influence during training can be removed to make the comparison more fair.
The results are shown in Figure 7. As k increases, the OV gradually decreases, i.e., the diversity of model updates introduced by different training batches decreases when widening ResNet18, suggesting that increasing the network size can improve the model’s resilience to sampling noise, which leads to better generalization performance. The Pearson correlation coefficient between the OV and the test accuracy reached −0.94 (p = 0.0006). Lastly, we need to point out that we did not observe a strong cross-model correlation between the OV and the test accuracy when comparing the generalization ability of significantly different model architectures, e.g., VGG
and ResNet. Our future research will look for a more universal cross-model metric to illustrate the generalization performance.
4 CONCLUSIONS
This paper has shown that the variance dominates the epoch-wise double descent, and highly correlates with the test error. Inspired by this finding, we proposed a novel metric called optimization variance, which is calculated from the training set only but powerful enough to predict how the test error changes during training. Based on this metric, we further proposed an approach to perform early stopping without any validation set. Remarkably, we demonstrated that the training set itself may be enough to predict the generalization ability of a DNN, without a dedicated validation set.
Our future work will: 1) apply the OV to other tasks, such as regression problems, unsupervised learning, and so on; 2) figure out the cause of the second descent of the OV; and, 3) design regularization approaches to penalize the OV for better generalization performance.
A BIAS-VARIANCE DECOMPOSITION FOR DIFFERENT LOSS FUNCTIONS
This section presents the detailed derivations of the bias-variance decomposition for different loss functions.
A.1 THE MEAN SQUARED ERROR (MSE) LOSS
For the MSE loss, we have $L(t,y) = \|t - y\|_2^2$, and need to calculate $\bar{y}$ based on (2). We first ignore the constraints and solve the following problem:
$$\tilde{y} = \arg\min_{y^*} E_{\mathcal{T}}\left[\|y^* - y\|_2^2\right], \qquad (7)$$
whose solution is $\tilde{y} = E_{\mathcal{T}}\, y$. It can be easily verified that $\tilde{y}$ satisfies the constraints in (2), and hence $\bar{y} = \tilde{y} = E_{\mathcal{T}}\, y$.
Then, we can decompose the MSE loss as:
$$E_{\mathcal{T}}\left[\|t - y\|_2^2\right] = E_{\mathcal{T}}\left[\|t - \bar{y} + \bar{y} - y\|_2^2\right] = E_{\mathcal{T}}\left[\|t - \bar{y}\|_2^2 + \|\bar{y} - y\|_2^2 + 2(t - \bar{y})^T(\bar{y} - y)\right] = \|t - \bar{y}\|_2^2 + E_{\mathcal{T}}\left[\|\bar{y} - y\|_2^2\right] + 0, \qquad (8)$$
where the first term denotes the bias, and the second denotes the variance. We can also see that $\beta = 1$.
A.2 CROSS-ENTROPY (CE) LOSS

For $L(t,y) = \sum_{k=1}^{c} t_k \log\frac{t_k}{y_k}$, $\bar{y}$ can be obtained by applying the Lagrange multiplier method (Boyd & Vandenberghe, 2004) to (2):
$$l(y^*, \lambda) = E_{\mathcal{T}}\left[\sum_{k=1}^{c} y_k^* \log\frac{y_k^*}{y_k}\right] + \lambda\left(1 - \sum_{k=1}^{c} y_k^*\right). \qquad (9)$$
To minimize $l(y^*, \lambda)$, we compute its partial derivatives w.r.t. $y^*$ and $\lambda$:
$$\frac{\partial l}{\partial y_k^*} = E_{\mathcal{T}}\left[\log\frac{y_k^*}{y_k} + 1\right] - \lambda, \quad k = 1, 2, \ldots, c, \qquad \frac{\partial l}{\partial \lambda} = 1 - \sum_{k=1}^{c} y_k^*.$$
Setting these derivatives to 0, we get
$$\bar{y}_k = \frac{1}{Z}\exp\{E_{\mathcal{T}}[\log y_k]\}, \quad k = 1, 2, \ldots, c, \qquad (10)$$
where $Z = \sum_{k=1}^{c}\exp\{E_{\mathcal{T}}[\log y_k]\}$ is a normalization constant independent of $k$.
Because
$$E_{\mathcal{T}}\left[\sum_{k=1}^{c}\alpha_k\log\frac{\bar{y}_k}{y_k}\right] = -\log Z, \quad \forall\, \alpha_k \text{ with } \sum_{k=1}^{c}\alpha_k = 1, \qquad (11)$$
we have
$$E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\log\frac{t_k}{y_k}\right] = E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\left(\log\frac{t_k}{\bar{y}_k} + \log\frac{\bar{y}_k}{y_k}\right)\right] = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} + E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\log\frac{\bar{y}_k}{y_k}\right] = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} - \log Z = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} + E_{\mathcal{T}}\left[\sum_{k=1}^{c}\bar{y}_k\log\frac{\bar{y}_k}{y_k}\right], \qquad (12)$$
from which we obtain $\beta = 1$.
A.3 THE ZERO-ONE (ZO) LOSS

For the ZO loss, i.e., $L(t,y) = \mathbb{1}\{H(t) \neq H(y)\}$, $\bar{y}$ is the voting result, i.e., $H(E_{\mathcal{T}}[H(y)])$, so that the variance is minimized. However, the value of $\beta$ depends on the relationship between $\bar{y}$ and $t$.

When $\bar{y} = t$, we have
$$E_{\mathcal{T}}\left[\mathbb{1}\{H(t) \neq H(y)\}\right] = 0 + E_{\mathcal{T}}\left[\mathbb{1}\{H(\bar{y}) \neq H(y)\}\right] = \mathbb{1}\{H(t) \neq H(\bar{y})\} + E_{\mathcal{T}}\left[\mathbb{1}\{H(\bar{y}) \neq H(y)\}\right], \qquad (13)$$
so clearly $\beta = 1$.

When $\bar{y} \neq t$, we have
$$E_{\mathcal{T}}\left[\mathbb{1}\{H(t) \neq H(y)\}\right] = P_{\mathcal{T}}(H(y) \neq t) = 1 - P_{\mathcal{T}}(H(y) = t) = \mathbb{1}\{H(t) \neq H(\bar{y})\} - P_{\mathcal{T}}\big(H(y) = t \mid H(y) = \bar{y}\big)P_{\mathcal{T}}(H(y) = \bar{y}) - P_{\mathcal{T}}\big(H(y) = t \mid H(y) \neq \bar{y}\big)P_{\mathcal{T}}(H(y) \neq \bar{y}). \qquad (14)$$
Since $\bar{y} \neq t$, it follows that
$$P_{\mathcal{T}}\big(H(y) = t \mid H(y) = \bar{y}\big) = 0. \qquad (15)$$
Then, (14) becomes
$$E_{\mathcal{T}}\left[\mathbb{1}\{H(t) \neq H(y)\}\right] = \mathbb{1}\{H(t) \neq H(\bar{y})\} - P_{\mathcal{T}}\big(H(y) = t \mid H(y) \neq \bar{y}\big)\, E_{\mathcal{T}}\left[\mathbb{1}\{H(\bar{y}) \neq H(y)\}\right], \qquad (16)$$
hence $\beta = -P_{\mathcal{T}}\big(H(y) = t \mid H(y) \neq \bar{y}\big)$.
B CONNECTIONS BETWEEN THE OPTIMIZATION VARIANCE AND THE GRADIENT VARIANCE

This section shows the connection between the gradient variance and the optimization variance in Definition 1.

For simplicity, we ignore $q$ in $\mathrm{OV}_q(x)$ and denote $g(\mathcal{T}_B) - E_{\mathcal{T}_B} g(\mathcal{T}_B)$ by $\tilde{g}(\mathcal{T}_B)$. Then, the gradient variance $V_g$ can be written as:
$$V_g = E_{\mathcal{T}_B}\left[\|g(\mathcal{T}_B) - E_{\mathcal{T}_B} g(\mathcal{T}_B)\|_2^2\right] = E_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T \tilde{g}(\mathcal{T}_B)\right]. \qquad (17)$$
Denote the Jacobian matrix of the logits $f(x;\theta)$ w.r.t. $\theta$ by $J_{\theta}(x)$, i.e.,
$$J_{\theta}(x) = \left[\nabla_{\theta} f_1(x;\theta), \nabla_{\theta} f_2(x;\theta), \ldots, \nabla_{\theta} f_c(x;\theta)\right], \qquad (18)$$
where $f_j(x;\theta)$ is the $j$-th entry of $f(x;\theta)$, and $c$ is the number of classes.

Using a first-order approximation, we have:
$$f(x;\theta + g(\mathcal{T}_B)) \approx f(x;\theta) + J_{\theta}(x)^T g(\mathcal{T}_B), \qquad (19)$$
and $\mathrm{OV}(x)$ can be written as:
$$\mathrm{OV}(x) \approx \frac{E_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T J_{\theta}(x) J_{\theta}(x)^T \tilde{g}(\mathcal{T}_B)\right]}{f(x;\theta)^T f(x;\theta)} + E_{\mathcal{T}_B}\left[O\big(\|g(\mathcal{T}_B)\|^2\big)\right] \approx \frac{E_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T J_{\theta}(x) J_{\theta}(x)^T \tilde{g}(\mathcal{T}_B)\right]}{f(x;\theta)^T f(x;\theta)}. \qquad (20)$$
The only difference between $E_{x}[\mathrm{OV}(x)]$ and $V_g$ is the middle weight matrix $E_{x}\left[\frac{J_{\theta}(x) J_{\theta}(x)^T}{f(x;\theta)^T f(x;\theta)}\right]$.
This suggests that penalizing the gradient variance can also reduce the optimization variance.
Figure 8 presents the curves of $V_g$ during training. It can be observed that $V_g$ also shows some ability to indicate the generalization performance. However, compared with the results in Figure 2, the OV demonstrates a stronger ability to indicate the generalization error than $V_g$. More importantly, $V_g$ loses its comparability when the network size increases, whereas the OV remains reliable under architectural changes because the middle weight matrix $E_{x}\left[\frac{J_{\theta}(x) J_{\theta}(x)^T}{f(x;\theta)^T f(x;\theta)}\right]$ normalizes $V_g$, as illustrated in Figure 7.

We also notice that $\|E_{\mathcal{T}_B} g(\mathcal{T}_B)\|_2^2$ is usually far less than $E_{\mathcal{T}_B}\|g(\mathcal{T}_B)\|_2^2$; hence, $V_g$ and the gradient norm $E_{\mathcal{T}_B}\|g(\mathcal{T}_B)\|_2^2$ present almost the same curves during training.
[Figure 8 panels: VGG11, VGG13, VGG16.]
C BEHAVIORS OF DIFFERENT LOSS FUNCTIONS
Many different loss functions can be used to evaluate the test performance of a model. They may have very different behaviors w.r.t. the training epochs. As shown in Figure 9, the epoch-wise double descent can be very conspicuous on test error, i.e., the ZO loss, but barely observable on CE and MSE losses, which increase after the early stopping point. This is because at the late stage of training, model outputs approach 0 or 1, resulting in the increase of the CE and MSE losses on the misclassified test samples, though the decision boundary may be barely changed. When rescaling the weights of the last layer by a positive real number, the ZO loss remains the same because of the untouched decision boundary, whereas the CE and MSE losses are changed. Thus, we perform bias-variance decomposition on the ZO loss to study epoch-wise double descent.
D VGG11 ON SVHN BY ADAM OPTIMIZER WITH LEARNING RATE 0.001
Training VGG11 on SVHN by Adam optimizer with learning rate 0.001 is unstable, as shown in Figure 14(a). Figure 10 shows the test error and optimization variance. For 0% and 10% label noise, the test error stays large (the test accuracy is low) for a long period in the early phase of training. The optimization variance is also abnormal.
E LOSS, BIAS AND VARIANCE W.R.T. DIFFERENT LEVELS OF LABEL NOISE
Label noise makes epoch-wise double descent more conspicuous to observe (Nakkiran et al., 2020). If the variance is the major cause of double descent, it should match the variation of the test error
when adding different levels of label noise. Figure 11 shows an example to compare the loss, variance, and bias w.r.t. different levels of label noise. Though label noise impacts both the bias and the variance, the latter appears to be more sensitive and shows better synchronization with the loss. For instance, when we randomly shuffle a small percentage of labels, say 10%, a valley clearly occurs between 20 and 50 epochs for the variance, whereas it is less obvious for the bias. In addition, it seems that the level of label noise does not affect the epoch at which the loss reaches its first minimum. This is surprising, because label noise is considered to be highly related to the complexity of the dataset. Our future work will explore the role label noise plays in the generalization of DNNs.
[Figure 11 legend: Loss, Var, Bias.]
F BIAS AND VARIANCE TERMS W.R.T. DIFFERENT LOSS FUNCTIONS
G BIAS AND VARIANCE TERMS W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES
H DETAILED INFORMATION OF THE SMALL CNN MODEL
We present the detailed information of the architecture trained with small numbers of training samples. It consists of two convolutional layers and two fully-connected layers, as shown in Table 2.
I OPTIMIZATION VARIANCE AND TEST ACCURACY W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES | 1. What is the focus of the paper, and what does it aim to achieve?
2. What is the proposed metric in the paper, and how does it relate to the generalization error?
3. What are the concerns regarding the methodology used in the paper?
4. How does the proposed metric differ from other simpler metrics, such as the gradient magnitude?
5. What are the limitations of the experimental analysis in the paper?
6. Are there any issues with the presentation of the results, such as the captions and legends in Figure 5? | Review | Review
This paper proposes the metric of estimating the variance of gradients during optimization of deep networks and empirically found that this metric correlates with test errors, which may indicate a good point for performing early stopping during training.
My first thought is that the metric is indeed interesting and informative if it could be applied to indicate the generalization on the test set, of course. Then my concern arises from the methodology used in this paper, which uses 'a correlation' between optimization variance and generalization error to suggest a 'causal relation' between these two quantities. Granted, it might be difficult to actually prove it. From a big-picture view, it would still be nice to see a comparative analysis with other simpler metrics, such as the gradient magnitude itself.
The definition of "optimization variance" seems odd: in equation 6, the numerator scales with gradient magnitudes, and the denominator tends to change comparatively little after some iterations. So why is this OV so different from the gradient norms as to constitute a novel metric? At least from the experiments, the changes in the denominator do not matter that much.
The datasets in the empirical analysis are kind of simple. It would be better if ImageNet-scale datasets or regression tasks were also included.
Small questions:
Figure 3, what is 'different number of training batches'? is that batch size?
Figure 4 shows some strong upheavals in terms of accuracy; are those caused by a large gradient that somehow explodes during training?
Figure 5 has really unreadable captions and legend notations. |
ICLR | Title
Optimization Variance: Exploring Generalization Properties of DNNs
Abstract
Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent. Through bias-variance decomposition, recent studies revealed that the bell-shaped variance is the major cause of model-wise double descent (when the DNN is widened gradually). This paper investigates epoch-wise double descent, i.e., the test error of a DNN also shows double descent as the number of training epochs increases. Specifically, we extend the bias-variance analysis to epoch-wise double descent, and reveal that the variance also contributes the most to the zero-one loss, as in model-wise double descent. Inspired by this result, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. OV can be estimated using samples from the training set only but correlates well with the (unknown) test error. It can be used to predict the generalization ability of a DNN when the zero-one loss is used at test time, and hence early stopping may be achieved without using a validation set.
1 INTRODUCTION
Deep Neural Networks (DNNs) usually have large model capacity, but also generalize well. This violates the conventional VC dimension (Vapnik, 1999) or Rademacher complexity theory (ShalevShwartz & Ben-David, 2014), inspiring new designs of network architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016; Zagoruyko & Komodakis, 2016) and reconsideration of their optimization and generalization (Zhang et al., 2017; Arpit et al., 2017; Wang et al., 2018; Kalimeris et al., 2019; Rahaman et al., 2019; Allen-Zhu et al., 2019).
Model-wise double descent, i.e., as a DNN’s model complexity increases, its test error first shows a classical U-shaped curve and then enters a second descent, has been observed on many machine learning models (Advani & Saxe, 2017; Belkin et al., 2019a; Geiger et al., 2019; Maddox et al., 2020; Nakkiran et al., 2020). Multiple studies provided theoretical evidence of this phenomenon in some tractable settings (Mitra, 2019; Hastie et al., 2019; Belkin et al., 2019b; Yang et al., 2020; Bartlett et al., 2020; Muthukumar et al., 2020). Specifically, Neal et al. (2018) and Yang et al. (2020) performed bias-variance decomposition for mean squared error (MSE) and the cross-entropy (CE) loss, and empirically revealed that the bell-shaped curve of the variance is the major cause of model-wise double descent. Maddox et al. (2020) proposed to measure the effective dimensionality of the parameter space, which can be further used to explain model-wise double descent.
Recently, a new double descent phenomenon, epoch-wise double descent, was observed when increasing the number of training epochs instead of the model complexity¹ (Nakkiran et al., 2020). Compared with model-wise double descent, epoch-wise double descent is relatively less explored. Heckel & Yilmaz (2020) showed that epoch-wise double descent occurs in the situation where different parts of DNNs are learned at different epochs, which can be eliminated by proper scaling of step sizes. Zhang & Wu (2020) discovered that the energy ratio of the high-frequency components of a DNN’s prediction landscape, which can reflect the model capacity, switches from increase to decrease at a certain training epoch, leading to the second descent of the test error. However, this metric fails to provide further information on generalization, such as the early stopping point, or how the size of a DNN influences its performance.

¹In practice, some label noise is often added to the training set to make the epoch-wise double descent more conspicuous.
This paper utilizes bias-variance decomposition of the zero-one (ZO) loss (CE loss is still used in training) to further investigate epoch-wise double descent. By monitoring the behaviors of the bias and the variance, we find that the variance plays an important role in epoch-wise double descent, which dominates and highly correlates with the variation of the test error.
Though the variance correlates well with the test error, estimating its value requires training models on multiple different training sets drawn from the same data distribution, whereas in practice usually only one training set is available2. Inspired by the fact that the source of variance comes from the random-sampled training sets, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. This metric can be estimated from a single model using samples drawn from the training set only. More importantly, it correlates well with the test error, and thus can be used to determine the early stopping point in DNN training, without using any validation set.
Some complexity measures have been proposed to illustrate the generalization ability of DNNs, such as sharpness (Keskar et al., 2017) and norm-based measures (Neyshabur et al., 2015). However, their values rely heavily on the model parameters, making comparisons across different models very difficult. Dinh et al. (2017) show that by re-parameterizing a DNN, one can alter the sharpness of its searched local minima without affecting the function it represents; Neyshabur et al. (2018) show that these measures cannot explain the generalization behaviors when the size of a DNN increases. Our proposed metric, which only requires the logit outputs of a DNN, is less dependent on model parameters, and hence can explain many generalization behaviors, e.g., the test error decreases as the network size increases. Chatterji et al. (2020) proposed a metric called Model Criticality that can explain the superior generalization performance of some architectures over others, yet it remains unexplored whether this metric can be used to indicate generalization throughout the entire training procedure, especially for relatively complex generalization behaviors such as epoch-wise double descent.
To summarize, our contributions are:
• We perform bias-variance decomposition on the test error to explore epoch-wise double descent. We show that the variance dominates the variation of the test classification error.
• We propose a novel metric, OV, which is calculated from the training set only and correlates well with the test classification error.
• Based on the OV, we propose an approach to search for the early stopping point without using a validation set, when the zero-one loss is used in test. Experiments verified its effectiveness.
The remainder of this paper is organized as follows: Section 2 introduces the details of tracing bias and variance over training epochs. Section 3 proposes the OV and demonstrates its ability to indicate the test behaviors. Section 4 draws conclusions and points out some future research directions.
2 BIAS AND VARIANCE IN EPOCH-WISE DOUBLE DESCENT
This section presents the details of tracing the bias and the variance during training. We show that the variance dominates the epoch-wise double descent of the test error.
2.1 A UNIFIED BIAS-VARIANCE DECOMPOSITION
Bias-variance decomposition is widely used to analyze the generalization properties of machine learning algorithms (Geman et al., 1992; Friedman et al., 2001). It was originally proposed for the MSE loss and later extended to other loss functions, e.g., CE and ZO losses (Kong & Dietterich, 1995; Tibshirani, 1996; Kohavi et al., 1996; Heskes, 1998). Our study utilizes a unified bias-variance decomposition that was proposed by Domingos (2000) and is applicable to arbitrary loss functions.
²Assume the training set has n samples. We can partition it into multiple smaller training sets, each with m samples (m < n), and then train multiple models. However, the variance estimated from this case would be different from the one estimated from training sets with n samples. We can also bootstrap the original training set into multiple ones, each with n samples. However, the data distribution of each bootstrap replica is different from the original training set, and hence the estimated variance would also be different.
Let (x, t) be a sample drawn from the data distribution D, where x ∈ Rd denotes the ddimensional input, and t ∈ Rc the one-hot encoding of the label in c classes. The training set T = {(xi, ti)}ni=1 ∼ Dn is utilized to train the model f : Rd → Rc. Let y = f(x; T ) ∈ Rc be the probability output of the model f trained on T , and L(t,y) the loss function. The expected loss ET [L(t,y)] should be small to ensure that the model both accurately captures the regularities in its training data, and also generalizes well to unseen data.
According to Domingos (2000), a unified bias-variance decomposition3 of ET [L(y, t)] is:
$$E_{\mathcal{T}}[L(t, y)] = \underbrace{L(t, \bar{y})}_{\text{Bias}} + \beta\,\underbrace{E_{\mathcal{T}}[L(\bar{y}, y)]}_{\text{Variance}}, \quad (1)$$
where β takes different values for different loss functions, and ȳ is the expected output:
$$\bar{y} = \operatorname*{arg\,min}_{y^* \in \mathbb{R}^c \;\mid\; \sum_{k=1}^{c} y^*_k = 1,\; y^*_k \ge 0} \; E_{\mathcal{T}}[L(y^*, y)]. \quad (2)$$
ȳ minimizes the variance term in (1), and can be regarded as the “center” or “ensemble” of y w.r.t. different T. Table 1 shows the specific forms of L, ȳ, and β for different loss functions (the detailed derivations can be found in Appendix A). This paper focuses on the bias-variance decomposition of the ZO loss, because epoch-wise double descent of the test error is more obvious when the ZO loss is used (see Appendix C). To capture the overall bias and variance, we analyzed $E_{x,t}E_{\mathcal{T}}[L(t,y)]$, i.e., the expectation of $E_{\mathcal{T}}[L(t,y)]$ over the distribution D.
2.2 TRACE THE BIAS AND VARIANCE TERMS OVER TRAINING EPOCHS
To trace the bias term Ex,t[L(t, ȳ)] and the variance term Ex,tET [L(ȳ,y)] w.r.t. the training epoch, we need to sample several training sets and train models on them respectively, so that the bias and variance terms can be estimated from them.
Concretely, let T ∗ denote the test set, f(x; Tj , q) the model f trained on Tj ∼ Dn (j = 1, 2, ...,K) for q epochs. Then, the estimated bias and variance terms at the q-th epoch, denoted as B(q) and V (q), respectively, can be written as:
$$B(q) = E_{(x,t)\in\mathcal{T}^*}\big[L\big(t, \bar{f}(x; q)\big)\big], \quad (3)$$

$$V(q) = E_{(x,t)\in\mathcal{T}^*}\Big[\frac{1}{K}\sum_{j=1}^{K} L\big(\bar{f}(x; q), f(x; \mathcal{T}_j, q)\big)\Big], \quad (4)$$

where

$$\bar{f}(x; q) = H\Big(\sum_{j=1}^{K} H\big(f(x; \mathcal{T}_j, q)\big)\Big), \quad (5)$$

is the voting result of $\{f(x; \mathcal{T}_j, q)\}_{j=1}^{K}$.

³In real-world situations, the expected loss consists of three terms: bias, variance, and noise. Similar to Yang et al. (2020), we view t as the groundtruth and ignore the noise term.
We should emphasize that, in real-world situations, D cannot be obtained, hence Tj in our experiments was randomly sampled from the training set (we sampled 50% of the training data for each Tj). As a result, although these experiments reveal the cause of epoch-wise double descent, the behaviors of the bias and variance may differ when the whole training set is used.
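To make the estimation in (3)–(5) concrete for the ZO loss, the NumPy sketch below computes B(q) and V(q) from the predicted labels of the K models. The function name, the array layout, and the assumption that predictions are already collected as integer class labels are our own illustrative choices, not code from the paper.

```python
import numpy as np

def zo_bias_variance(test_labels, model_preds):
    """Estimate B(q) and V(q) in Eqs. (3)-(5) for the zero-one loss.

    test_labels: (N,) integer class labels of the test set
    model_preds: (K, N) integer labels predicted by the K models trained on
                 T_1, ..., T_K for q epochs, i.e., H(f(x; T_j, q))
    """
    num_classes = int(max(test_labels.max(), model_preds.max())) + 1
    # f_bar(x; q): majority vote of the K models (Eq. 5)
    votes = np.apply_along_axis(np.bincount, 0, model_preds,
                                minlength=num_classes)          # (num_classes, N)
    vote_pred = votes.argmax(axis=0)                            # (N,)
    bias = np.mean(vote_pred != test_labels)                    # Eq. (3)
    variance = np.mean(model_preds != vote_pred[None, :])       # Eq. (4)
    return bias, variance
```

With the ZO loss and H(·) taken as the predicted class label, the bias in (3) is simply the error rate of the majority vote, and the variance in (4) is the average disagreement between each individual model and that vote.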
We considered ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) models4 trained on SVHN (Netzer et al., 2011), CIFAR10 (Krizhevsky, 2009), and CIFAR100 (Krizhevsky, 2009). SGD and Adam (Kingma & Ba, 2014) optimizers with different learning rates were used. The batchsize was set to 128, and all models were trained for 250 epochs with data augmentation. Prior to sampling {Tj}Kj=1 (K = 5) from the training set, 20% labels of the training data were randomly shuffled to introduce epoch-wise double descent.
Figure 1 shows the expected ZO loss and its bias and variance. The bias descends rapidly at first and then generally converges to a low value, whereas the variance behaves almost exactly the same as the test error, mimicking even small fluctuations of the test error. To verify this observation, we performed additional experiments with different optimizers, learning rates, and levels of label noise (see Appendices E and G). All experimental results demonstrated that it is mainly the variance that contributes to epoch-wise double descent.
2.3 DISCUSSION
Contradicting the traditional view that the variance keeps increasing because of overfitting, our experimental results show a more complex behavior: the variance starts high and then decreases rapidly, followed by a bell curve. The difference at the beginning (when the number of epochs is small) is mainly due to the choice of loss functions (see experimental results of bias-variance decomposition for MSE and CE losses in Appendix F). CE and MSE losses, analyzed in traditional learning theory, reflect the degree of difference between probabilities, whereas the ZO loss reflects only the labels. At the early stage of training, the output probabilities are close to random guesses, and hence a small difference in probabilities may lead to completely different labels, resulting in the distinct variance for different loss functions. However, the reason why the variance begins to diminish at the late phase of training is still unclear. We will explore this problem in our future research.
⁴Adapted from https://github.com/kuangliu/pytorch-cifar
3 OPTIMIZATION VARIANCE (OV)
This section proposes a new metric, OV, to measure the diversity of model updates introduced by random training batches during optimization. This metric can indicate test behaviors without any validation set.
3.1 NOTATION AND DEFINITION
Section 2 verified the synchronization between the test error and the variance, but its application is limited because estimating the variance requires: 1) a test set, and 2) models trained on different training sets drawn from the same data distribution. It would be desirable to capture the test behavior of a DNN using a single training set only, without a test set.
According to the definition in (1), the variance measures the model diversity caused by different training samples drawn from the same distribution, i.e., the outputs of a DNN change according to the sampled training set. As the gradients are usually the only information transferred from the training set to the model during the optimization of a DNN, we need to measure the variance of a DNN introduced by the gradients calculated from different training batches. More specifically, we would like to develop a metric that reflects the functional robustness of DNNs to sampling noise. If the function captured by a DNN varies drastically w.r.t. different training batches, its generalization error is very likely to be poor due to a large variance introduced by the optimization procedure. A similar metric is the sharpness of local minima proposed by Keskar et al. (2017), which measures the robustness of local minima as an indicator of the generalization error. However, this metric is only meaningful for local minima and hence cannot be applied in the entire optimization process.
Mathematically, for a sample (x, t) ∼ D, let f(x;θ) be the logit output of a DNN with parameter θ. Let TB ∼ Dm be a training batch with m samples, g : TB → R|θ| the optimizer outputting the update of θ based on TB . Then, we can get the function distribution Fx(TB) over a training batch TB , i.e., f(x;θ + g(TB)) ∼ Fx(TB). The variance of Fx(TB) reflects the model diversity caused by different training batches. The formal definition of OV is given below. Definition 1 (Optimization Variance (OV)) Given an input x and model parameters θq at the q-th training epoch, the OV on x at the q-th epoch is defined as
$$OV_q(x) \triangleq \frac{E_{\mathcal{T}_B}\big[\|f(x;\theta_q + g(\mathcal{T}_B)) - E_{\mathcal{T}_B}f(x;\theta_q + g(\mathcal{T}_B))\|_2^2\big]}{E_{\mathcal{T}_B}\big[\|f(x;\theta_q + g(\mathcal{T}_B))\|_2^2\big]}. \quad (6)$$

Note that OVq(x) measures the relative variance, because the denominator in (6) eliminates the influence of the logit’s norm. In this way, OVq(x) at different training phases can be compared. The motivation here comes from the definition of coefficient of variation⁵ (CV) in probability theory and statistics, which is also known as the relative standard deviation. CV is defined as the ratio between the standard deviation and the mean, and is independent of the unit in which the measurement is taken. Therefore, CV enables comparing the relative diversity between two different measurements.
In terms of OV, the variance of the logits, i.e., the numerator of OV, is not comparable across epochs due to the influence of their norm. In fact, even if the variance of the logits remains the same during the whole optimization process, its influence on the decision boundary is limited when the logits are large. Consequently, by treating the norm of the logits as the measurement unit, following CV we set OV to $\sum_i \sigma_i^2 / \sum_i \mu_i^2$, where $\sigma_i$ and $\mu_i$ represent the standard deviation and the mean of the i-th logit, respectively. If we removed the denominator, the OV would no longer be able to indicate the generalization error, especially at the early stage of the optimization process.
5https://en.wikipedia.org/wiki/Coefficient_of_variation
Intuitively, the OV represents the inconsistency of gradients’ influence on the model. If OVq(x) is very large, then the models trained with different sampled TB may have distinct outputs for the same input, leading to high model diversity and hence large variance. Note that here we emphasize the inconsistency of model updates rather than the gradients themselves. The latter can be measured by the gradient variance. The gradient variance and the OV are different, because sometimes diverse gradients may lead to similar changes of the function represented by DNN, and hence small OV. More on the relationship between the two variances can be found in Appendix B.
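As a concrete illustration of Definition 1, the PyTorch sketch below estimates $E_x[OV_q(x)]$ by applying one candidate update per training batch and measuring the relative variance of the resulting logits on a fixed set of probe inputs. The function name, the single plain-SGD probe step, the learning rate, and the omission of batch-norm bookkeeping are all our own simplifying assumptions, not the paper's exact implementation.

```python
import copy
import torch
import torch.nn.functional as F

def optimization_variance(model, probe_x, train_batches, lr=0.01):
    """Estimate E_x[OV_q(x)] (Eq. 6) at the current parameters theta_q.

    model:         DNN at epoch q whose forward pass returns logits
    probe_x:       tensor of training inputs x on which the logits are evaluated
    train_batches: iterable of (x_b, t_b) training batches; each yields one
                   candidate update g(T_B) (here: a single plain-SGD step)
    """
    logits_per_batch = []
    for x_b, t_b in train_batches:
        candidate = copy.deepcopy(model)             # restart from theta_q for every batch
        loss = F.cross_entropy(candidate(x_b), t_b)  # CE loss, as used in training
        grads = torch.autograd.grad(loss, list(candidate.parameters()))
        with torch.no_grad():
            for p, g in zip(candidate.parameters(), grads):
                p -= lr * g                          # theta_q + g(T_B)
            logits_per_batch.append(candidate(probe_x))  # f(x; theta_q + g(T_B))
    logits = torch.stack(logits_per_batch)           # (num_batches, num_probe, num_classes)
    numerator = logits.var(dim=0, unbiased=False).sum(dim=-1)   # E||f - E f||^2 per x
    denominator = logits.pow(2).mean(dim=0).sum(dim=-1)         # E||f||^2 per x
    return (numerator / denominator).mean().item()               # average over probe x
```

Since Figure 3 (discussed below) suggests that around 10 batches already give a good estimate, `train_batches` can be a small random subset of the epoch's batches, and `probe_x` can be, e.g., 1,000 training samples as in Section 3.2.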
3.2 EXPERIMENTAL RESULTS
We calculated the expectation of the OV over x, i.e., Ex[OVq(x)], which was estimated from 1,000 random training samples. The test set was not involved at all.
Figure 2 shows how the test accuracy (solid curves) and Ex[OVq(x)] (dashed curves) change with the number of training epochs. Though sometimes the OV may not exhibit clear epoch-wise double descent, e.g., VGG16 in Figure 2(c), the symmetry between the solid and dashed curves generally exists, suggesting that the OV, which is calculated from the training set only, is capable of predicting the variation of the test accuracy. Similar results can also be observed using different optimizers and learning rates (see Appendix I).
Note that epoch-wise double descent is not a necessary condition for applying the OV. As shown in Figure 2 (blue curves), we also compared the OV and the generalization error of DNNs trained with 0% label noise: the generalization error curves show no epoch-wise double descent, yet the proposed OV still works well.
OVq(x) in Figure 2 was estimated on all training batches; however, this may not be necessary: a small number of training batches is usually enough. To demonstrate this, we trained ResNet and VGG on several datasets using the Adam optimizer with learning rate 0.0001, and estimated OVq(x) from different numbers of training batches. The results in Figure 3 show that the OV can be estimated well using as few as 10 training batches.
Another intriguing finding is that even unstable variations of the test accuracy can be reflected by the OV. This correspondence is clearer on simpler datasets, e.g., MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017). Figure 4 shows the test accuracy and OV for LeNet-5 (LeCun et al., 1998) trained on MNIST and FashionMNIST without label noise. Spikes in the OV and in the test accuracy occur at the same epochs.
Our experimental results demonstrate that the generalization ability of a DNN can be indicated by the OV during stochastic training, without using a validation set. This phenomenon can be used to determine the early stopping point, and beyond.
3.3 EARLY STOPPING WITHOUT A VALIDATION SET
The common process to train a DNN involves three steps: 1) partition the dataset into a training set and a validation set; 2) use the training set to optimize the DNN parameters, and the validation set to determine when to stop training, i.e., early stopping, and record the early stopping point; 3) train the DNN on the entire dataset (the combination of the training and validation sets) for the same number of epochs. However, there is no guarantee that the early stopping point on the training set is the same as the one on the entire dataset. So, an interesting question is: is it possible to perform early stopping directly on the entire dataset, without a validation set?
The OV can be used for this purpose. For more robust performance, instead of using the OV directly, we may need to smooth it to alleviate random fluctuations.
As an example, we smoothed the OV by a moving average filter of 10 epochs, and then performed early stopping on the smoothed OV with a patience of 10 epochs. As a reference, early stopping with the same patience was also performed directly on the test accuracy to get the groundtruth. However, it should be noted that the latter is unknown in real-world applications. It is provided for verification purpose only.
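A minimal sketch of this procedure is given below, assuming the per-epoch OV values have already been recorded. The 10-epoch moving-average window and 10-epoch patience follow the settings stated above, while the function name and the exact index alignment after smoothing are our own choices.

```python
import numpy as np

def ov_early_stopping(ov_per_epoch, window=10, patience=10):
    """Pick an early stopping epoch from the OV curve alone (no validation set)."""
    ov = np.asarray(ov_per_epoch, dtype=float)
    smoothed = np.convolve(ov, np.ones(window) / window, mode="valid")  # moving average
    best, best_idx, wait = np.inf, 0, 0
    for idx, value in enumerate(smoothed):
        if value < best:                 # smoothed OV improved
            best, best_idx, wait = value, idx, 0
        else:
            wait += 1
            if wait >= patience:         # no improvement for `patience` epochs
                break
    return best_idx + window - 1         # map back to an epoch index of the raw OV curve
```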
We trained different DNN models on several datasets (SVHN: VGG11 and ResNet18; CIFAR10: VGG13 and ResNet18; CIFAR100: VGG16 and ResNet34) with different levels of label noise (10% and 20%) and optimizers (Adam with learning rate 0.001 and 0.0001, SGD with momentum 0.9 and learning rate 0.01 and 0.001). Then, we compared the groundtruth early stopping point and the test accuracy with those found by performing early stopping on the OV6. The results are shown in Figure 5. The true early stopping points and those found from the OV curve were generally close, though there were some exceptions, e.g., the point near (40, 100) in Figure 5(a). However, the test errors, which are what a model designer really cares about, were always close.
3.4 SMALL SIZE OF THE TRAINING SET
For large datasets, a validation set can be easily partitioned from the training set without hurting the generalization performance. Therefore, the OV is more useful for small datasets. To this end, we performed experiments with small numbers (2000, 4000, 6000) of training samples from CIFAR10 to verify the effectiveness of the OV in this situation. Considering the limited number of training samples, we trained a small Convolutional Neural Network (CNN) using the Adam optimizer with learning rate 0.0001; its detailed architecture can be found in Appendix H.
The experimental results are shown in Figure 6. It can be observed that: a) When the size of the training set is small, the OV still correlates well with the generalization performance as a function of the training epochs, which verifies the validity of our results on small datasets; b) As expected, more training samples usually lead to better generalization performance, which can also be reflected by comparing the values of the OV.
6Training VGG11 on SVHN with Adam optimizer and learning rate 0.001 was unstable (see Appendix D), so we did not include its results in Figure 5.
3.5 NETWORK SIZE
In addition to indicating the early stopping point, the OV can also explain some other generalization behaviors, such as the influence of the network size. To verify that, we trained ResNet18 with different network sizes on CIFAR10 for 100 epochs with no label noise, using the Adam optimizer with learning rate 0.0001. For each convolutional layer, we set the number of filters to k/4 (k = 1, 2, ..., 8) times the number of filters in the original model. We then examined the OV of ResNet18 with different network sizes to validate its correlation with the test accuracy. Note that we used the SGD optimizer with learning rate 0.001 and no momentum to calculate the OV, so that the cumulative influence during training can be removed to make the comparison fairer.
The results are shown in Figure 7. As k increases, the OV gradually decreases, i.e., the diversity of model updates introduced by different training batches decreases when widening ResNet18, suggesting that increasing the network size can improve the model’s resilience to sampling noise, which leads to better generalization performance. The Pearson correlation coefficient between the OV and the test accuracy reached −0.94 (p = 0.0006). Lastly, we need to point out that we did not observe a strong cross-model correlation between the OV and the test accuracy when comparing the generalization ability of significantly different model architectures, e.g., VGG
and ResNet. Our future research will look for a more universal cross-model metric to illustrate the generalization performance.
4 CONCLUSIONS
This paper has shown that the variance dominates the epoch-wise double descent, and highly correlates with the test error. Inspired by this finding, we proposed a novel metric called optimization variance, which is calculated from the training set only but powerful enough to predict how the test error changes during training. Based on this metric, we further proposed an approach to perform early stopping without any validation set. Remarkably, we demonstrated that the training set itself may be enough to predict the generalization ability of a DNN, without a dedicated validation set.
Our future work will: 1) apply the OV to other tasks, such as regression problems, unsupervised learning, and so on; 2) figure out the cause of the second descent of the OV; and, 3) design regularization approaches to penalize the OV for better generalization performance.
A BIAS-VARIANCE DECOMPOSITION FOR DIFFERENT LOSS FUNCTIONS
This section presents the detailed derivations of the bias-variance decomposition for different loss functions.
A.1 THE MEAN SQUARED ERROR (MSE) LOSS
For the MSE loss, we have $L(t, y) = \|t - y\|_2^2$, and need to calculate ȳ based on (2). We first ignore the constraints and solve the following problem:

$$\tilde{y} = \operatorname*{arg\,min}_{y^*} \; E_{\mathcal{T}}\big[\|y^* - y\|_2^2\big], \quad (7)$$

whose solution is $\tilde{y} = E_{\mathcal{T}}[y]$. It can be easily verified that $\tilde{y}$ satisfies the constraints in (2), and hence $\bar{y} = \tilde{y} = E_{\mathcal{T}}[y]$.
Then, we can decompose the MSE loss as:

$$E_{\mathcal{T}}\big[\|t - y\|_2^2\big] = E_{\mathcal{T}}\big[\|t - \bar{y} + \bar{y} - y\|_2^2\big] = E_{\mathcal{T}}\big[\|t - \bar{y}\|_2^2 + \|\bar{y} - y\|_2^2 + 2(t - \bar{y})^T(\bar{y} - y)\big] = \|t - \bar{y}\|_2^2 + E_{\mathcal{T}}\big[\|\bar{y} - y\|_2^2\big] + 0, \quad (8)$$

where the first term denotes the bias, and the second denotes the variance. We can also get β = 1.
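As a quick numerical sanity check of (8), the snippet below simulates the outputs y of models trained on different T as noisy versions of a fixed target and verifies that the expected MSE equals the bias plus the variance. The Gaussian perturbation is purely illustrative (it ignores the probability-simplex constraint, which does not affect the MSE identity).

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.0, 1.0])                               # fixed target t
y = t + rng.normal(scale=0.3, size=(10000, 2))         # outputs y over different training sets T
y_bar = y.mean(axis=0)                                 # empirical E_T[y]

expected_loss = np.mean(np.sum((t - y) ** 2, axis=1))  # E_T[||t - y||^2]
bias = np.sum((t - y_bar) ** 2)                        # ||t - y_bar||^2
variance = np.mean(np.sum((y_bar - y) ** 2, axis=1))   # E_T[||y_bar - y||^2]

# The cross term vanishes exactly with the sample mean, so this is ~0 up to float error.
print(expected_loss - (bias + variance))
```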
A.2 CROSS-ENTROPY (CE) LOSS

For $L(t, y) = \sum_{k=1}^{c} t_k \log\frac{t_k}{y_k}$, ȳ can be obtained by applying the Lagrange multiplier method (Boyd & Vandenberghe, 2004) to (2):

$$l(y^*, \lambda) = E_{\mathcal{T}}\left[\sum_{k=1}^{c} y^*_k \log\frac{y^*_k}{y_k}\right] + \lambda\cdot\left(1 - \sum_{k=1}^{c} y^*_k\right). \quad (9)$$

To minimize $l(y^*, \lambda)$, we need to compute its partial derivatives w.r.t. $y^*$ and $\lambda$:

$$\frac{\partial l}{\partial y^*_k} = E_{\mathcal{T}}\left[\log\frac{y^*_k}{y_k} + 1\right] - \lambda, \quad k = 1, 2, ..., c,$$

$$\frac{\partial l}{\partial \lambda} = 1 - \sum_{k=1}^{c} y^*_k.$$

By setting these derivatives to 0, we get:

$$\bar{y}_k = \frac{1}{Z}\exp\{E_{\mathcal{T}}[\log y_k]\}, \quad k = 1, 2, ..., c, \quad (10)$$

where $Z = \sum_{k=1}^{c}\exp\{E_{\mathcal{T}}[\log y_k]\}$ is a normalization constant independent of $k$.

Because

$$E_{\mathcal{T}}\left[\sum_{k=1}^{c}\alpha_k\log\frac{\bar{y}_k}{y_k}\right] = -\log Z, \quad \forall\, \alpha_k \text{ with } \sum_{k=1}^{c}\alpha_k = 1, \quad (11)$$

we have:

$$E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\log\frac{t_k}{y_k}\right] = E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\left(\log\frac{t_k}{\bar{y}_k} + \log\frac{\bar{y}_k}{y_k}\right)\right] = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} + E_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\log\frac{\bar{y}_k}{y_k}\right] = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} - \log Z = \sum_{k=1}^{c} t_k\log\frac{t_k}{\bar{y}_k} + E_{\mathcal{T}}\left[\sum_{k=1}^{c}\bar{y}_k\log\frac{\bar{y}_k}{y_k}\right], \quad (12)$$

from which we obtain β = 1.
A.3 THE ZERO-ONE (ZO) LOSS
For the ZO loss, i.e., $L(t, y) = \mathbb{1}\{H(t) \neq H(y)\}$, ȳ is the voting result, i.e., $H(E_{\mathcal{T}}[H(y)])$, so that the variance can be minimized. However, the value of β depends on the relationship between ȳ and t.

When $\bar{y} = t$, we have:

$$E_{\mathcal{T}}[\mathbb{1}\{H(t) \neq H(y)\}] = 0 + E_{\mathcal{T}}[\mathbb{1}\{H(\bar{y}) \neq H(y)\}] = \mathbb{1}\{H(t) \neq H(\bar{y})\} + E_{\mathcal{T}}[\mathbb{1}\{H(\bar{y}) \neq H(y)\}], \quad (13)$$

clearly, β = 1.

When $\bar{y} \neq t$, we have:

$$E_{\mathcal{T}}[\mathbb{1}\{H(t) \neq H(y)\}] = P_{\mathcal{T}}(H(y) \neq t) = 1 - P_{\mathcal{T}}(H(y) = t) = \mathbb{1}\{H(t) \neq H(\bar{y})\} - P_{\mathcal{T}}\big(H(y) = t \mid H(y) = \bar{y}\big)P_{\mathcal{T}}(H(y) = \bar{y}) - P_{\mathcal{T}}\big(H(y) = t \mid H(y) \neq \bar{y}\big)P_{\mathcal{T}}(H(y) \neq \bar{y}). \quad (14)$$

Since $\bar{y} \neq t$, it follows that

$$P_{\mathcal{T}}\big(H(y) = t \mid H(y) = \bar{y}\big) = 0. \quad (15)$$

Then, (14) becomes:

$$E_{\mathcal{T}}[\mathbb{1}\{H(t) \neq H(y)\}] = \mathbb{1}\{H(t) \neq H(\bar{y})\} - P_{\mathcal{T}}\big(H(y) = t \mid H(y) \neq \bar{y}\big)P_{\mathcal{T}}(H(y) \neq \bar{y}) = \mathbb{1}\{H(t) \neq H(\bar{y})\} - P_{\mathcal{T}}\big(H(y) = t \mid H(y) \neq \bar{y}\big)\,E_{\mathcal{T}}[\mathbb{1}\{H(\bar{y}) \neq H(y)\}], \quad (16)$$

hence, $\beta = -P_{\mathcal{T}}\big(H(y) = t \mid H(y) \neq \bar{y}\big)$.
B CONNECTIONS BETWEEN THE OPTIMIZATION VARIANCE AND THE GRADIENT VARIANCE
This section shows the connection between the gradient variance and the optimization variance in Definition 1.
For simplicity, we ignore q in OVq(x) and denote $g(\mathcal{T}_B) - E_{\mathcal{T}_B}g(\mathcal{T}_B)$ by $\tilde{g}(\mathcal{T}_B)$. Then, the gradient variance $V_g$ can be written as:

$$V_g = E_{\mathcal{T}_B}\big[\|g(\mathcal{T}_B) - E_{\mathcal{T}_B}g(\mathcal{T}_B)\|_2^2\big] = E_{\mathcal{T}_B}\big[\tilde{g}(\mathcal{T}_B)^T\tilde{g}(\mathcal{T}_B)\big]. \quad (17)$$
Denote the Jacobian matrix of the logits f(x;θ) w.r.t. θ by Jθ(x), i.e.,
$$J_\theta(x) = \big[\nabla_\theta f_1(x;\theta), \nabla_\theta f_2(x;\theta), ..., \nabla_\theta f_c(x;\theta)\big], \quad (18)$$
where fj(x;θ) is the j-th entry of f(x;θ), and c is the number of classes.
Using first order approximation, we have:
$$f(x;\theta + g(\mathcal{T}_B)) \approx f(x;\theta) + J_\theta(x)^T g(\mathcal{T}_B), \quad (19)$$
and OV(x) can be written as:

$$OV(x) \approx \frac{E_{\mathcal{T}_B}\big[\tilde{g}(\mathcal{T}_B)^T J_\theta(x) J_\theta(x)^T \tilde{g}(\mathcal{T}_B)\big]}{f(x;\theta)^T f(x;\theta)} + E_{\mathcal{T}_B}\big[O\big(\|g(\mathcal{T}_B)\|_2\big)\big] \approx \frac{E_{\mathcal{T}_B}\big[\tilde{g}(\mathcal{T}_B)^T J_\theta(x) J_\theta(x)^T \tilde{g}(\mathcal{T}_B)\big]}{f(x;\theta)^T f(x;\theta)}. \quad (20)$$

The only difference between $E_x[OV(x)]$ and $V_g$ is the middle weight matrix $E_x\!\left[\frac{J_\theta(x) J_\theta(x)^T}{f(x;\theta)^T f(x;\theta)}\right]$.
This suggests that penalizing the gradient variance can also reduce the optimization variance.
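For comparison with the OV sketch in Section 3, one simple way $V_g$ could be estimated is shown below. Gradients are flattened into one vector per batch, and the optimizer-dependent scaling of the update (e.g., the learning rate of plain SGD) is ignored, which only rescales $V_g$ by a constant; the helper name and this simplification are our own.

```python
import torch

def gradient_variance(model, loss_fn, train_batches):
    """Estimate V_g in Eq. (17) from per-batch gradients (up to optimizer scaling)."""
    grads = []
    for x_b, t_b in train_batches:
        loss = loss_fn(model(x_b), t_b)
        g = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))   # flatten g(T_B)
    G = torch.stack(grads)                                       # (num_batches, |theta|)
    return (G - G.mean(dim=0)).pow(2).sum(dim=1).mean().item()   # E||g - E g||_2^2
```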
Figure 8 presents the curves of $V_g$ in the training procedure. It can be observed that $V_g$ also shows some ability to indicate the generalization performance. However, compared with the results in Figure 2, we can see that the OV demonstrates a stronger power for indicating the generalization error than $V_g$. More importantly, $V_g$ loses its comparability when the network size increases, whereas the OV is more robust to such architectural changes because the middle weight matrix $E_x\!\left[\frac{J_\theta(x) J_\theta(x)^T}{f(x;\theta)^T f(x;\theta)}\right]$ normalizes $V_g$, as illustrated in Figure 7.
We also notice that $\|E_{\mathcal{T}_B} g(\mathcal{T}_B)\|_2^2$ is usually far smaller than $E_{\mathcal{T}_B}\|g(\mathcal{T}_B)\|_2^2$, hence $V_g$ and the gradient norm $E_{\mathcal{T}_B}\|g(\mathcal{T}_B)\|_2^2$ present almost the same curves during training.
C BEHAVIORS OF DIFFERENT LOSS FUNCTIONS
Many different loss functions can be used to evaluate the test performance of a model. They may have very different behaviors w.r.t. the training epochs. As shown in Figure 9, the epoch-wise double descent can be very conspicuous on test error, i.e., the ZO loss, but barely observable on CE and MSE losses, which increase after the early stopping point. This is because at the late stage of training, model outputs approach 0 or 1, resulting in the increase of the CE and MSE losses on the misclassified test samples, though the decision boundary may be barely changed. When rescaling the weights of the last layer by a positive real number, the ZO loss remains the same because of the untouched decision boundary, whereas the CE and MSE losses are changed. Thus, we perform bias-variance decomposition on the ZO loss to study epoch-wise double descent.
D VGG11 ON SVHN BY ADAM OPTIMIZER WITH LEARNING RATE 0.001
Training VGG11 on SVHN by Adam optimizer with learning rate 0.001 is unstable, as shown in Figure 14(a). Figure 10 shows the test error and optimization variance. For 0% and 10% label noise, the test error stays large (the test accuracy is low) for a long period in the early phase of training. The optimization variance is also abnormal.
E LOSS, BIAS AND VARIANCE W.R.T. DIFFERENT LEVELS OF LABEL NOISE
Label noise makes epoch-wise double descent more conspicuous to observe (Nakkiran et al., 2020). If the variance is the major cause of double descent, it should match the variation of the test error
when adding different levels of label noise. Figure 11 shows an example comparing the loss, variance, and bias w.r.t. different levels of label noise. Though label noise impacts both the bias and the variance, the latter appears to be more sensitive and shows better synchronization with the loss. For instance, when we randomly shuffle a small percentage of labels, say 10%, a valley clearly occurs between 20 and 50 epochs for the variance, whereas it is less obvious for the bias. In addition, it seems that the level of label noise does not affect the epoch at which the loss reaches its first minimum. This is surprising, because the label noise is considered highly related to the complexity of the dataset. Our future work will explore the role label noise plays in the generalization of DNNs.
F BIAS AND VARIANCE TERMS W.R.T. DIFFERENT LOSS FUNCTIONS
G BIAS AND VARIANCE TERMS W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES
H DETAILED INFORMATION OF THE SMALL CNN MODEL
We present the detailed information of the architecture trained with small numbers of training samples. It consists of two convolutional layers and two fully-connected layers, as shown in Table 2.
I OPTIMIZATION VARIANCE AND TEST ACCURACY W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES | 1. What is the novel approach proposed by the authors in this paper?
2. How does the proposed method differ from traditional methods in terms of its reliance on the validation set?
3. What is the key insight or concept introduced by the authors in this paper?
4. How do the experimental results support or validate the proposed approach?
5. Are there any limitations or potential drawbacks to the proposed method?
6. How might the proposed approach be extended or applied to other scenarios or domains? | Review | Review
Having a stopping rule without the validation set is intriguing, especially for datasets with a low number of samples. The authors propose a rule that doesn't require the validation dataset, i.e. it is solely based on training data. It introduces the notion of optimization variance which is different from the variance of gradients. I give them credit for the idea and also the analytical comparison with the variance of gradients.
On the one hand, the experiments in Section 3.3 are geared towards showing the role of the validation dataset, but on the other hand they lack rigor. It would be interesting to learn the impact at different levels of the percentage of the samples in validation. I also think the work can be impactful on small datasets. While they experiment with large-scale datasets, experiments with even smaller datasets and different validation set proportions would be of great interest. |
ICLR | Title
Optimization Variance: Exploring Generalization Properties of DNNs
Abstract
Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent. Through bias-variance decomposition, recent studies revealed that the bell-shaped variance is the major cause of model-wise double descent (when the DNN is widened gradually). This paper investigates epoch-wise double descent, i.e., the test error of a DNN also shows double descent as the number of training epochs increases. Specifically, we extend the bias-variance analysis to epoch-wise double descent, and reveal that the variance also contributes the most to the zero-one loss, as in model-wise double descent. Inspired by this result, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. OV can be estimated using samples from the training set only but correlates well with the (unknown) test error. It can be used to predict the generalization ability of a DNN when the zero-one loss is used in test, and hence early stopping may be achieved without using a validation set.
1 INTRODUCTION
Deep Neural Networks (DNNs) usually have large model capacity, but also generalize well. This violates the conventional VC dimension (Vapnik, 1999) or Rademacher complexity theory (Shalev-Shwartz & Ben-David, 2014), inspiring new designs of network architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016; Zagoruyko & Komodakis, 2016) and reconsideration of their optimization and generalization (Zhang et al., 2017; Arpit et al., 2017; Wang et al., 2018; Kalimeris et al., 2019; Rahaman et al., 2019; Allen-Zhu et al., 2019).
Model-wise double descent, i.e., as a DNN’s model complexity increases, its test error first shows a classical U-shaped curve and then enters a second descent, has been observed on many machine learning models (Advani & Saxe, 2017; Belkin et al., 2019a; Geiger et al., 2019; Maddox et al., 2020; Nakkiran et al., 2020). Multiple studies provided theoretical evidence of this phenomenon in some tractable settings (Mitra, 2019; Hastie et al., 2019; Belkin et al., 2019b; Yang et al., 2020; Bartlett et al., 2020; Muthukumar et al., 2020). Specifically, Neal et al. (2018) and Yang et al. (2020) performed bias-variance decomposition for mean squared error (MSE) and the cross-entropy (CE) loss, and empirically revealed that the bell-shaped curve of the variance is the major cause of modelwise double descent. Maddox et al. (2020) proposed to measure the effective dimensionality of the parameter space, which can be further used to explain model-wise double descent.
Recently, a new double descent phenomenon, epoch-wise double descent, was observed, when increasing the number of training epochs instead of the model complexity1 (Nakkiran et al., 2020). Compared with model-wise double descent, epoch-wise double descent is relatively less explored. Heckel & Yilmaz (2020) showed that epoch-wise double descent occurs in the situation where different parts of DNNs are learned at different epochs, which can be eliminated by proper scaling of step sizes. Zhang & Wu (2020) discovered that the energy ratio of the high-frequency components of a DNN’s prediction landscape, which can reflect the model capacity, switches from increase to
1In practice, some label noise is often added to the training set to make the epoch-wise double descent more conspicuous.
decrease at a certain training epoch, leading to the second descent of the test error. However, this metric fails to provide further information on generalization, such as the early stopping point, or how the size of a DNN influences its performance.
This paper utilizes bias-variance decomposition of the zero-one (ZO) loss (CE loss is still used in training) to further investigate epoch-wise double descent. By monitoring the behaviors of the bias and the variance, we find that the variance plays an important role in epoch-wise double descent, which dominates and highly correlates with the variation of the test error.
Though the variance correlates well with the test error, estimating its value requires training models on multiple different training sets drawn from the same data distribution, whereas in practice usually only one training set is available2. Inspired by the fact that the source of variance comes from the random-sampled training sets, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. This metric can be estimated from a single model using samples drawn from the training set only. More importantly, it correlates well with the test error, and thus can be used to determine the early stopping point in DNN training, without using any validation set.
Some complexity measures have been proposed to illustrate the generalization ability of DNNs, such as sharpness (Keskar et al., 2017) and norm-based measures (Neyshabur et al., 2015). However, their values rely heavily on the model parameters, making comparisons across different models very difficult. Dinh et al. (2017) shows that by re-parameterizing a DNN, one can alter the sharpness of its searched local minima without affecting the function it represents; Neyshabur et al. (2018) shows that these measures cannot explain the generalization behaviors when the size of a DNN increases. Our proposed metric, which only requires the logit outputs of a DNN, is less dependent on model parameters, and hence can explain many generalization behaviors, e.g., the test error decreases as the network size increases. Chatterji et al. (2020) proposed a metric called Model Criticality that can explain the superior generalization performance of some architectures over others, yet it is unexplored that whether this metric can be used to indicate generalization in the entire training procedure, especially for some relatively complex generalization behaviors, such as epoch-wise double descent.
To summarize, our contributions are:
• We perform bias-variance decomposition on the test error to explore epoch-wise double descent. We show that the variance dominates the variation of the test classification error.
• We propose a novel metric, OV, which is calculated from the training set only and correlates well with the test classification error.
• Based on the OV, we propose an approach to search for the early stopping point without using a validation set, when the zero-one loss is used in test. Experiments verified its effectiveness.
The remainder of this paper is organized as follows: Section 2 introduces the details of tracing bias and variance over training epochs. Section 3 proposes the OV and demonstrates its ability to indicate the test behaviors. Section 4 draws conclusions and points out some future research directions.
2 BIAS AND VARIANCE IN EPOCH-WISE DOUBLE DESCENT
This section presents the details of tracing the bias and the variance during training. We show that the variance dominates the epoch-wise double descent of the test error.
2.1 A UNIFIED BIAS-VARIANCE DECOMPOSITION
Bias-variance decomposition is widely used to analyze the generalization properties of machine learning algorithms (Geman et al., 1992; Friedman et al., 2001). It was originally proposed for
2Assume the training set has n samples. We can partition it into multiple smaller training sets, each with m samples (m < n), and then train multiple models. However, the variance estimated from this case would be different from the one estimated from training sets with n samples. We can also bootstrap the original training set into multiple ones, each with n samples. However, the data distribution of each bootstrap replica is different from the original training set, and hence the estimated variance would also be different.
the MSE loss and later extended to other loss functions, e.g., CE and ZO losses (Kong & Dietterich, 1995; Tibshirani, 1996; Kohavi et al., 1996; Heskes, 1998). Our study utilizes a unified bias-variance decomposition that was proposed by Domingos (2000) and applicable to arbitrary loss functions.
Let (x, t) be a sample drawn from the data distribution D, where x ∈ Rd denotes the ddimensional input, and t ∈ Rc the one-hot encoding of the label in c classes. The training set T = {(xi, ti)}ni=1 ∼ Dn is utilized to train the model f : Rd → Rc. Let y = f(x; T ) ∈ Rc be the probability output of the model f trained on T , and L(t,y) the loss function. The expected loss ET [L(t,y)] should be small to ensure that the model both accurately captures the regularities in its training data, and also generalizes well to unseen data.
According to Domingos (2000), a unified bias-variance decomposition3 of ET [L(y, t)] is:
ET [L(t,y)] = L(t, ȳ)︸ ︷︷ ︸ Bias +β ET [L(ȳ,y)]︸ ︷︷ ︸ Variance , (1)
where β takes different values for different loss functions, and ȳ is the expected output:
ȳ = arg min y∗∈Rc ∣∣∑c k=1 y ∗ k=1,y ∗ k≥0 ET [L(y∗,y)]. (2)
ȳ minimizes the variance term in (1), which can be regarded as the “center" or “ensemble" of y w.r.t. different T . Table 1 shows specific forms of L, ȳ, and β for different loss functions (the detailed derivations can be found in Appendix A). This paper focuses on the bias-variance decomposition of the ZO loss, because epoch-wise double descent of the test error is more obvious when the ZO loss is used (see Appendix C). To capture the overall bias and variance, we analyzed Ex,tET [L(t,y)], i.e., the expectation of ET [L(t,y)] over the distribution D.
2.2 TRACE THE BIAS AND VARIANCE TERMS OVER TRAINING EPOCHS
To trace the bias term Ex,t[L(t, ȳ)] and the variance term Ex,tET [L(ȳ,y)] w.r.t. the training epoch, we need to sample several training sets and train models on them respectively, so that the bias and variance terms can be estimated from them.
Concretely, let T ∗ denote the test set, f(x; Tj , q) the model f trained on Tj ∼ Dn (j = 1, 2, ...,K) for q epochs. Then, the estimated bias and variance terms at the q-th epoch, denoted as B(q) and V (q), respectively, can be written as:
B(q) = E(x,t)∈T ∗ [ L ( t, f̄(x; q) )] , (3)
V (q) = E(x,t)∈T ∗ 1 K K∑ j=1 L(f̄(x; q), f(x; Tj , q)) , (4) 3In real-world situations, the expected loss consists of three terms: bias, variance, and noise. Similar to
Yang et al. (2020), we view t as the groundtruth and ignore the noise term.
where
f̄(x; q) = H K∑ j=1 H(f(x; Tj , q)) , (5) is the voting result of {f(x; Tj , q)}Kj=1.
We should emphasize that, in real-world situations, D cannot be obtained, hence Tj in our experiments was randomly sampled from the training set (we sampled 50% training data for each Tj). As a result, despite of showing the cause of epoch-wise double descent, the behaviors of bias and variance may be different when the whole training set is used.
We considered ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) models4 trained on SVHN (Netzer et al., 2011), CIFAR10 (Krizhevsky, 2009), and CIFAR100 (Krizhevsky, 2009). SGD and Adam (Kingma & Ba, 2014) optimizers with different learning rates were used. The batchsize was set to 128, and all models were trained for 250 epochs with data augmentation. Prior to sampling {Tj}Kj=1 (K = 5) from the training set, 20% labels of the training data were randomly shuffled to introduce epoch-wise double descent.
Figure 1 shows the expected ZO loss and its bias and variance. The bias descends rapidly at first and then generally converges to a low value, whereas the variance behaves almost exactly the same as the test error, mimicking even small fluctuations of the test error. To stabilize that, we performed additional experiments with different optimizers, learning rates, and levels of label noise (see Appendices E and G). All experimental results demonstrated that it is mainly the variance that contributes to epoch-wise double descent.
2.3 DISCUSSION
Contradicting to the traditional view that the variance keeps increasing because of overfitting, our experimental results show a more complex behavior: the variance starts high and then decreases rapidly, followed by a bell curve. The difference at the beginning (when the number of epochs is small) is mainly due to the choice of loss functions (see experimental results of bias-variance decomposition for MSE and CE losses in Appendix F). CE and MSE losses, analyzed in the traditional
4Adapted from https://github.com/kuangliu/pytorch-cifar
learning theory, can reflect the degree of difference of probabilities, whereas the ZO loss only the labels. At the early stage of training, the output probabilities are close to random guesses, and hence a small difference in probabilities may lead to completely different labels, resulting in the distinct variance for different loss functions. However, the reason why the variance begins to diminish at the late phase of training is still unclear. We will explore this problem in our future research.
3 OPTIMIZATION VARIANCE (OV)
This section proposes a new metric, OV, to measure the diversity of model updates introduced by random training batches during optimization. This metric can indicate test behaviors without any validation set.
3.1 NOTATION AND DEFINITION
Section 2 verified the synchronization between the test error and the variance, but its application is limited because estimating the variance requires: 1) a test set, and, 2) models trained on different training sets drawn from the same data distribution. It’d be desirable to capture the test behavior of a DNN using a single training set only, without a test set.
According to the definition in (1), the variance measures the model diversity caused by different training samples drawn from the same distribution, i.e., the outputs of DNN change according to the sampled training set. As the gradients are usually the only information transferred from training sets to models during the optimization of DNN, we need to measure the variance of a DNN introduced by the gradients calculated from different training batches. More specifically, we’d like to develop a metric to reflect the function robustness of DNNs to sampling noises. If the function captured by a DNN drastically varies w.r.t. different training batches, its generalization error is very likely to be poor due to a large variance introduced by the optimization procedure. A similar metric is the sharpness of local minima proposed by Keskar et al. (2017), which measures the robustness of local minima as an indicator of the generalization error. However, this metric is only meaningful for local minima and hence cannot be applied in the entire optimization process.
Mathematically, for a sample (x, t) ∼ D, let f(x;θ) be the logit output of a DNN with parameter θ. Let TB ∼ Dm be a training batch with m samples, g : TB → R|θ| the optimizer outputting the update of θ based on TB . Then, we can get the function distribution Fx(TB) over a training batch TB , i.e., f(x;θ + g(TB)) ∼ Fx(TB). The variance of Fx(TB) reflects the model diversity caused by different training batches. The formal definition of OV is given below. Definition 1 (Optimization Variance (OV)) Given an input x and model parameters θq at the q-th training epoch, the OV on x at the q-th epoch is defined as
OVq(x) , ETB
[ ‖f(x;θq + g(TB))− ETBf(x;θq + g(TB))‖ 2 2 ] ETB [ ‖f(x;θq + g(TB))‖22
] . (6) Note that OVq(x) measures the relative variance, because the denominator in (6) eliminates the influence of the logit’s norm. In this way, OVq(x) at different training phases can be compared. The motivation here comes from the definition of coefficient of variation5 (CV) in probability theory and statistics, which is also known as the relative standard deviation. CV is defined as the ratio between the standard deviation and the mean, and is independent of the unit in which the measurement is taken. Therefore, CV enables comparing the relative diversity between two different measurements.
In terms of OV, the variance of logits, i.e., the numerator of OV, is not comparable across epochs due to the influence of their norm. In fact, even if the variance of logits maintains the same during the whole optimization process, its influence on the decision boundary is limited when the logits are large. Consequently, by treating the norm of logits as the measurement unit, following CV we set OV to ∑ i σ 2 i / ∑ i µ 2 i , where σi and µi represent the standard deviation and the mean of the i-th logit, respectively. If we remove the denominator, the value of OV will no longer has the indication ability for generalization error, especially at the early stage of the optimization process.
5https://en.wikipedia.org/wiki/Coefficient_of_variation
Intuitively, the OV represents the inconsistency of gradients’ influence on the model. If OVq(x) is very large, then the models trained with different sampled TB may have distinct outputs for the same input, leading to high model diversity and hence large variance. Note that here we emphasize the inconsistency of model updates rather than the gradients themselves. The latter can be measured by the gradient variance. The gradient variance and the OV are different, because sometimes diverse gradients may lead to similar changes of the function represented by DNN, and hence small OV. More on the relationship between the two variances can be found in Appendix B.
3.2 EXPERIMENTAL RESULTS
We calculated the expectation of the OV over x, i.e., Ex[OVq(x)], which was estimated from 1,000 random training samples. The test set was not involved at all.
Figure 2 shows how the test accuracy (solid curves) and Ex[OVq(x)] (dashed curves) change with the number of training epochs. Though sometimes the OV may not exhibit clear epoch-wise double descent, e.g., VGG16 in Figure 2(c), the symmetry between the solid and dashed curves generally exist, suggesting that the OV, which is calculated from the training set only, is capable of predicting the variation of the test accuracy. Similar results can also be observed using different optimizers and learning rates (see Appendix I).
Note that epoch-wise double descent is not a necessary condition for applying OV. As shown in Figure 2 (blue curves), we compared the values of OV and generalization errors of DNNs when there are 0% label noises, from which we can see the curves of generalization errors have no epochwise double descent, yet the proposed OV still works pretty well.
OVq(x) in Figure 2 was estimated on all training batches; however, this may not be necessary: a small number of training batches are usually enough. To demonstrate this, we trained ResNet and VGG on several datasets using Adam optimizer with learning rate 0.0001, and estimated OVq(x) from different number of training batches. The results in Figure 3 show that we can well estimate the OV using as few as 10 training batches.
Another intriguing finding is that even unstable variations of the test accuracy can be reflected by the OV. This correspondence is clearer on simpler datasets, e.g., MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017). Figure 4 shows the test accuracy and OV for LeNet-5 (LeCun et al., 1998) trained on MNIST and FashionMNIST without label noise. Spikes of the OV and the test accuracy happen simultaneously at the same epoch.
Our experimental results demonstrate that the generalization ability of a DNN can be indicated by the OV during stochastic training, without using a validation set. This phenomenon can be used to determine the early stopping point, and beyond.
3.3 EARLY STOPPING WITHOUT A VALIDATION SET
The common process to train a DNN involves three steps: 1) partition the dataset into a training set and a validation set; 2) use the training set to optimize the DNN parameters, and the validation set to determine when to stop training, i.e., early stopping, and record the early stopping point; 3) train the DNN on the entire dataset (combination of training and validation sets) for the same number of epochs. However, there is no guarantee that the early stopping point on the training set is the same as the one on the entire dataset. So, an interesting questions is: is it possible to directly perform early stopping on the entire dataset, without a validation set?
The OV can be used for this purpose. For more robust performance, instead of using the OV directly, we may need to smooth it to alleviate random fluctuations.
As an example, we smoothed the OV by a moving average filter of 10 epochs, and then performed early stopping on the smoothed OV with a patience of 10 epochs. As a reference, early stopping with the same patience was also performed directly on the test accuracy to get the groundtruth. However, it should be noted that the latter is unknown in real-world applications. It is provided for verification purpose only.
We trained different DNN models on several datasets (SVHN: VGG11 and ResNet18; CIFAR10: VGG13 and ResNet18; CIFAR100: VGG16 and ResNet34) with different levels of label noise (10% and 20%) and optimizers (Adam with learning rate 0.001 and 0.0001, SGD with momentum 0.9 and learning rate 0.01 and 0.001). Then, we compared the groundtruth early stopping point and the test accuracy with those found by performing early stopping on the OV6. The results are shown in Figure 5. The true early stopping points and those found from the OV curve were generally close, though there were some exceptions, e.g., the point near (40, 100) in Figure 5(a). However, the test errors, which are what a model designer really cares about, were always close.
3.4 SMALL SIZE OF THE TRAINING SET
For large datasets, a validation set can be easily partitioned from the training set without hurting the generalization performance. Therefore, OV is more useful in the case of small datasets. To this end, we performed experiments with small numbers (2000, 4000, 6000) of the training samples in CIFAR10 to verify the effectiveness of OV in this situation. Considering the limited numbers of training samples, we trained a small Convolution Neural Network (CNN) using Adam optimizer with learning rate 0.0001, whose detailed information can be found in Appendix H.
The experimental results are shown in Figure 6. It can be observed that: a) When the size of the training set is small, OV still correlates well with the generalization performance as a function of the training epochs, which ver-
ifies the validity of our results on small datasets; b) As expected, more training samples usually lead to better generalization performance, which can also be reflected by comparing the values of OV.
6Training VGG11 on SVHN with Adam optimizer and learning rate 0.001 was unstable (see Appendix D), so we did not include its results in Figure 5.
3.5 NETWORK SIZE
In addition to indicating the early stopping point, the OV can also explain some other generalization behaviors, such as the influence of the network size. To verify that, we trained ResNet18 with different network sizes on CIFAR10 for 100 epochs with no label noise, using Adam optimizer with learning rate 0.0001. For each convolutional layer, we set the number of filters k/4 (k = 1, 2, ..., 8) times the number of filters in the original model. We then examined the OV of ResNet18 with different network sizes to validate its correlation with the test accuracy. Note that we used SGD optimizer with learning rate 0.001 and no momentum to calculate the OV, so that the cumulative influence during training can be removed to make the comparison more fair.
The results are shown in Figure 7. As k increases, the OV gradually decreases, i.e., the diversity of model updates introduced by different training batches decreases when widening ResNet18, suggesting that increasing the network size can improve the model’s resilience to sampling noise, which leads to better generalization performance. The Pearson correlation coefficient between the OV and the test accuracy reached −0.94 (p = 0.0006). Lastly, we need to point out that we did not observe a strong cross-model correlation between the OV and the test accuracy when comparing the generalization ability of significantly different model architectures, e.g., VGG
and ResNet. Our future research will look for a more universal cross-model metric to illustrate the generalization performance.
4 CONCLUSIONS
This paper has shown that the variance dominates the epoch-wise double descent, and highly correlates with the test error. Inspired by this finding, we proposed a novel metric called optimization variance, which is calculated from the training set only but powerful enough to predict how the test error changes during training. Based on this metric, we further proposed an approach to perform early stopping without any validation set. Remarkably, we demonstrated that the training set itself may be enough to predict the generalization ability of a DNN, without a dedicated validation set.
Our future work will: 1) apply the OV to other tasks, such as regression problems, unsupervised learning, and so on; 2) figure out the cause of the second descent of the OV; and, 3) design regularization approaches to penalize the OV for better generalization performance.
A BIAS-VARIANCE DECOMPOSITION FOR DIFFERENT LOSS FUNCTIONS
This section presents detailed deduction of bias-variance decomposition for different loss functions.
A.1 THE MEAN SQUARED ERROR (MSE) LOSS
For the MSE loss, we have L(t,y) = ‖t−y‖22, and need to calculate ȳ based on (2). We first ignore the constraints and solve the following problem:
ỹ = arg min y∗
ET [‖y∗ − y‖22], (7)
whose solution is ỹ = ET y. It can be easily verified that ỹ satisfies the constraints in (2), and hence ȳ = ỹ = ET y.
Then, we can decompose the MSE loss as: ET [‖t− y‖22] = ET [‖t− ȳ + ȳ − y‖22]
= ET [‖t− ȳ‖22 + ‖ȳ − y‖22 + 2(t− ȳ)T (ȳ − y)] = ‖t− ȳ‖22 + ET [‖ȳ − y‖22] + 0, (8)
where the first term denotes the bias, and the second denotes the variance. We can also get β = 1.
A.2 CROSS-ENTROPY (CE) LOSS For L(t,y) = ∑c k=1 tk log tk yk
, ȳ can be obtained by applying the Lagrange multiplier method (Boyd & Vandenberghe, 2004) to (2):
l(y∗, λ) = ET
[ c∑
k=1
y∗k log y∗k yk
] + λ · ( 1−
c∑ k=1 y∗k
) . (9)
To minimize l(y∗, λ), we need to compute its partial derivatives to y∗ and λ: ∂l
∂y∗k = ET
[ log
y∗k yk + 1
] − λ, k = 1, 2, ..., c
∂l ∂λ = 1− c∑ k=1 y∗k.
By setting these derivatives to 0, we get:
ȳk = 1
Z exp{ET [log yk]}, k = 1, 2, ..., c (10) where Z = ∑c k=1 exp{ET [log yk]} is a normalization constant independent of k.
Because
ET
[ c∑
k=1
αk log ȳk yk
] = − logZ, ∀αk c∑ k=1 αk = 1, (11)
we have:
ET
[ c∑
k=1
tk log tk yk
] = ET [ c∑
k=1
tk
( log
tk ȳk + log ȳk yk
)]
= c∑ k=1 tk log tk ȳk + ET
[ c∑
k=1
tk log ȳk yk
]
= c∑ k=1 tk log tk ȳk − logZ
= c∑ k=1 tk log tk ȳk + ET
[ c∑
k=1
ȳk log ȳk yk
] , (12)
from which we obtain β = 1.
A.3 THE ZERO-ONE (ZO) LOSS
For the ZO loss, i.e., L(t,y) = 1con{H(t) 6= H(y)}, ȳ is the voting result, i.e., H(ET [H(y)]), so that the variance can be minimized. However, the value of β depends on the relationship between ȳ and t.
When ȳ = t, we have:
ET [1con{H(t) 6= H(y)}] = 0 + ET [1con{H(ȳ) 6= H(y)}] = 1con{H(t) 6= H(ȳ)}+ ET [1con{H(ȳ) 6= H(y)}] , (13)
clearly, β = 1.
When ȳ 6= t, we have:
ET [1con{H(t) 6= H(y)}] = PT (H(y) 6= t) = 1− PT (H(y) = t) = 1con{H(t) 6= H(ȳ)} − PT ( H(y) = t
∣∣H(y) = ȳ)PT (H(y) = ȳ) − PT ( H(y) = t
∣∣H(y) 6= ȳ)PT (H(y) 6= ȳ) . (14) Since ȳ 6= H(t), it follows that
PT ( H(y) = t ∣∣H(y) = ȳ) = 0. (15) Then, (14) becomes:
ET [1con{H(t) 6= H(y)}] = 1con{H(t) 6= H(ȳ)} − PT ( H(y) = t ∣∣H(y) 6= ȳ)PT (H(y) 6= ȳ) = 1con{H(t) 6= H(ȳ)} − PT ( H(y) = t
∣∣H(y) 6= ȳ)ET [1con{H(ȳ) 6= H(y)}] , (16) hence, β = −PT ( H(y) = t
∣∣H(y) 6= ȳ).
B CONNECTIONS BETWEEN THE OPTIMIZATION VARIANCE AND THE
GRADIENT VARIANCE
This section shows the connection between the gradient variance and the optimization variance in Definition 1.
For simplicity, we ignore q in OVq(x) and denote g(TB)−ETBg(TB) by g̃(TB). Then, the gradient variance Vg can be written as:
Vg = ETB [ ‖g(TB)− ETBg(TB)‖ 2 2 ] = ETB [ g̃(TB)T g̃(TB) ] . (17)
Denote the Jacobian matrix of the logits f(x;θ) w.r.t. θ by Jθ(x), i.e.,
Jθ(x) = [∇θf1(x;θ),∇θf2(x;θ), ...,∇θfc(x;θ)] , (18)
where fj(x;θ) is the j-th entry of f(x;θ), and c is the number of classes.
Using first order approximation, we have:
f(x;θ + g(TB)) ≈ f(x;θ) + Jθ(x)T g(TB), (19)
and OV (x) can be written as:
OV (x) ≈ ETB
[ g̃(TB)TJθ(x)Jθ(x)T g̃(TB) ] f(x;θ)T f(x;θ) + ETB [O (‖g(TB)‖2)] ≈ ETB [ g̃(TB)TJθ(x)Jθ(x)T g̃(TB) ] f(x;θ)T f(x;θ) . (20)
The only difference between Ex [OV (x)] and Vg is the middle weight matrix Ex [ Jθ(x)Jθ(x) T
f(x;θ)T f(x;θ)
] .
This suggests that penalizing the gradient variance can also reduce the optimization variance.
Figure 8 presents the curves of V_g during training. It can be observed that V_g also shows some ability to indicate the generalization performance. However, compared with the results in Figure 2, OV demonstrates a stronger ability to indicate the generalization error than V_g. More importantly, V_g loses its comparability when the network size increases, while OV is more robust to architectural changes, because the middle weight matrix E_x[ J_θ(x) J_θ(x)^T / (f(x; θ)^T f(x; θ)) ] normalizes V_g, as illustrated in Figure 7.
We also notice that ||E_{T_B} g(T_B)||_2^2 is usually far smaller than E_{T_B} ||g(T_B)||_2^2, hence V_g and the gradient norm E_{T_B} ||g(T_B)||_2^2 present almost the same curves during training.
[Figure 8: Gradient variance V_g during training; panels correspond to VGG11, VGG13, and VGG16.]
C BEHAVIORS OF DIFFERENT LOSS FUNCTIONS
Many different loss functions can be used to evaluate the test performance of a model. They may have very different behaviors w.r.t. the training epochs. As shown in Figure 9, the epoch-wise double descent can be very conspicuous on test error, i.e., the ZO loss, but barely observable on CE and MSE losses, which increase after the early stopping point. This is because at the late stage of training, model outputs approach 0 or 1, resulting in the increase of the CE and MSE losses on the misclassified test samples, though the decision boundary may be barely changed. When rescaling the weights of the last layer by a positive real number, the ZO loss remains the same because of the untouched decision boundary, whereas the CE and MSE losses are changed. Thus, we perform bias-variance decomposition on the ZO loss to study epoch-wise double descent.
D VGG11 ON SVHN BY ADAM OPTIMIZER WITH LEARNING RATE 0.001
Training VGG11 on SVHN by Adam optimizer with learning rate 0.001 is unstable, as shown in Figure 14(a). Figure 10 shows the test error and optimization variance. For 0% and 10% label noise, the test error stays large (the test accuracy is low) for a long period in the early phase of training. The optimization variance is also abnormal.
E LOSS, BIAS AND VARIANCE W.R.T. DIFFERENT LEVELS OF LABEL NOISE
Label noise makes epoch-wise double descent more conspicuous to observe (Nakkiran et al., 2020). If the variance is the major cause of double descent, it should match the variation of the test error
when adding different levels of label noise. Figure 11 shows an example to compare the loss, variance, and bias w.r.t. different levels of label noise. Though label noise impacts both the bias and the variance, the latter appears to be more sensitive and shows better synchronization with the loss. For instance, when we randomly shuffle a small percentage of labels, say 10%, a valley clearly occurs between 20 and 50 epochs for the variance, whereas it is less obvious for the bias. In addition, it seems that the level of label noise does not affect the epoch at which the loss reaches its first minimum. This is surprising, because the label noise is considered highly related to the complexity of the dataset. Our future work will explore the role label noise plays in the generalization of DNNs.
[Figure 11: Loss, variance, and bias w.r.t. different levels of label noise; rows correspond to Loss, Var, and Bias.]
F BIAS AND VARIANCE TERMS W.R.T. DIFFERENT LOSS FUNCTIONS
G BIAS AND VARIANCE TERMS W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES
H DETAILED INFORMATION OF THE SMALL CNN MODEL
We present the detailed information of the architecture trained with small numbers of training samples. It consists of two convolutional layers and two fully-connected layers, as shown in Table 2.
I OPTIMIZATION VARIANCE AND TEST ACCURACY W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES | 1. What is the reviewer's opinion on the paper's idea and its potential significance?
2. What does the reviewer think is lacking in the paper in terms of theoretical insights?
3. How does the reviewer suggest improving the clarity of the paper's relationship between OV and generalization?
4. Does the reviewer think the idea presented in the paper is original?
5. Under what conditions would the paper's results be considered more convincing? | Review | Review
Quality: The idea is interesting, but the paper lacks theoretical insights. And it was difficult to find in the paper how OV is related to generalization.
Clarity: The paper needs to make it more clear how OV is related to generalization - is it always monotonically decreasing/increasing?
Originality: The idea seems new, but more theoretical insight is needed.
Significance: If the authors include more theoretical justification, the paper's results would be more significant. The current version is too empirical and not very convincing.
ICLR | Title
Optimization Variance: Exploring Generalization Properties of DNNs
Abstract
Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent. Through bias-variance decomposition, recent studies revealed that the bell-shaped variance is the major cause of model-wise double descent (when the DNN is widened gradually). This paper investigates epoch-wise double descent, i.e., the test error of a DNN also shows double descent as the number of training epochs increases. Specifically, we extend the bias-variance analysis to epoch-wise double descent, and reveal that the variance also contributes the most to the zero-one loss, as in model-wise double descent. Inspired by this result, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. OV can be estimated using samples from the training set only but correlates well with the (unknown) test error. It can be used to predict the generalization ability of a DNN when the zero-one loss is used at test time, and hence early stopping may be achieved without using a validation set.
1 INTRODUCTION
Deep Neural Networks (DNNs) usually have large model capacity, but also generalize well. This violates the conventional VC dimension (Vapnik, 1999) or Rademacher complexity theory (Shalev-Shwartz & Ben-David, 2014), inspiring new designs of network architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016; Zagoruyko & Komodakis, 2016) and reconsideration of their optimization and generalization (Zhang et al., 2017; Arpit et al., 2017; Wang et al., 2018; Kalimeris et al., 2019; Rahaman et al., 2019; Allen-Zhu et al., 2019).
Model-wise double descent, i.e., as a DNN’s model complexity increases, its test error first shows a classical U-shaped curve and then enters a second descent, has been observed on many machine learning models (Advani & Saxe, 2017; Belkin et al., 2019a; Geiger et al., 2019; Maddox et al., 2020; Nakkiran et al., 2020). Multiple studies provided theoretical evidence of this phenomenon in some tractable settings (Mitra, 2019; Hastie et al., 2019; Belkin et al., 2019b; Yang et al., 2020; Bartlett et al., 2020; Muthukumar et al., 2020). Specifically, Neal et al. (2018) and Yang et al. (2020) performed bias-variance decomposition for mean squared error (MSE) and the cross-entropy (CE) loss, and empirically revealed that the bell-shaped curve of the variance is the major cause of modelwise double descent. Maddox et al. (2020) proposed to measure the effective dimensionality of the parameter space, which can be further used to explain model-wise double descent.
Recently, a new double descent phenomenon, epoch-wise double descent, was observed, when increasing the number of training epochs instead of the model complexity1 (Nakkiran et al., 2020). Compared with model-wise double descent, epoch-wise double descent is relatively less explored. Heckel & Yilmaz (2020) showed that epoch-wise double descent occurs in the situation where different parts of DNNs are learned at different epochs, which can be eliminated by proper scaling of step sizes. Zhang & Wu (2020) discovered that the energy ratio of the high-frequency components of a DNN’s prediction landscape, which can reflect the model capacity, switches from increase to
1In practice, some label noise is often added to the training set to make the epoch-wise double descent more conspicuous.
decrease at a certain training epoch, leading to the second descent of the test error. However, this metric fails to provide further information on generalization, such as the early stopping point, or how the size of a DNN influences its performance.
This paper utilizes bias-variance decomposition of the zero-one (ZO) loss (CE loss is still used in training) to further investigate epoch-wise double descent. By monitoring the behaviors of the bias and the variance, we find that the variance plays an important role in epoch-wise double descent, which dominates and highly correlates with the variation of the test error.
Though the variance correlates well with the test error, estimating its value requires training models on multiple different training sets drawn from the same data distribution, whereas in practice usually only one training set is available2. Inspired by the fact that the source of variance comes from the random-sampled training sets, we propose a novel metric, optimization variance (OV), to measure the diversity of model updates caused by the stochastic gradients of random training batches drawn in the same iteration. This metric can be estimated from a single model using samples drawn from the training set only. More importantly, it correlates well with the test error, and thus can be used to determine the early stopping point in DNN training, without using any validation set.
Some complexity measures have been proposed to illustrate the generalization ability of DNNs, such as sharpness (Keskar et al., 2017) and norm-based measures (Neyshabur et al., 2015). However, their values rely heavily on the model parameters, making comparisons across different models very difficult. Dinh et al. (2017) show that by re-parameterizing a DNN, one can alter the sharpness of its searched local minima without affecting the function it represents; Neyshabur et al. (2018) show that these measures cannot explain the generalization behaviors when the size of a DNN increases. Our proposed metric, which only requires the logit outputs of a DNN, is less dependent on model parameters, and hence can explain many generalization behaviors, e.g., the test error decreases as the network size increases. Chatterji et al. (2020) proposed a metric called Model Criticality that can explain the superior generalization performance of some architectures over others, yet it remains unexplored whether this metric can be used to indicate generalization over the entire training procedure, especially for relatively complex generalization behaviors such as epoch-wise double descent.
To summarize, our contributions are:
• We perform bias-variance decomposition on the test error to explore epoch-wise double descent. We show that the variance dominates the variation of the test classification error.
• We propose a novel metric, OV, which is calculated from the training set only and correlates well with the test classification error.
• Based on the OV, we propose an approach to search for the early stopping point without using a validation set, when the zero-one loss is used in test. Experiments verified its effectiveness.
The remainder of this paper is organized as follows: Section 2 introduces the details of tracing bias and variance over training epochs. Section 3 proposes the OV and demonstrates its ability to indicate the test behaviors. Section 4 draws conclusions and points out some future research directions.
2 BIAS AND VARIANCE IN EPOCH-WISE DOUBLE DESCENT
This section presents the details of tracing the bias and the variance during training. We show that the variance dominates the epoch-wise double descent of the test error.
2.1 A UNIFIED BIAS-VARIANCE DECOMPOSITION
Bias-variance decomposition is widely used to analyze the generalization properties of machine learning algorithms (Geman et al., 1992; Friedman et al., 2001). It was originally proposed for
2Assume the training set has n samples. We can partition it into multiple smaller training sets, each with m samples (m < n), and then train multiple models. However, the variance estimated from this case would be different from the one estimated from training sets with n samples. We can also bootstrap the original training set into multiple ones, each with n samples. However, the data distribution of each bootstrap replica is different from the original training set, and hence the estimated variance would also be different.
the MSE loss and later extended to other loss functions, e.g., CE and ZO losses (Kong & Dietterich, 1995; Tibshirani, 1996; Kohavi et al., 1996; Heskes, 1998). Our study utilizes a unified bias-variance decomposition that was proposed by Domingos (2000) and applicable to arbitrary loss functions.
Let (x, t) be a sample drawn from the data distribution D, where x ∈ R^d denotes the d-dimensional input, and t ∈ R^c the one-hot encoding of the label in c classes. The training set T = {(x_i, t_i)}_{i=1}^n ∼ D^n is utilized to train the model f : R^d → R^c. Let y = f(x; T) ∈ R^c be the probability output of the model f trained on T, and L(t, y) the loss function. The expected loss E_T[L(t, y)] should be small to ensure that the model both accurately captures the regularities in its training data, and also generalizes well to unseen data.
According to Domingos (2000), a unified bias-variance decomposition3 of ET [L(y, t)] is:
$$\mathbb{E}_{\mathcal{T}}[L(\mathbf{t},\mathbf{y})] = \underbrace{L(\mathbf{t},\bar{\mathbf{y}})}_{\text{Bias}} + \beta\,\underbrace{\mathbb{E}_{\mathcal{T}}[L(\bar{\mathbf{y}},\mathbf{y})]}_{\text{Variance}}, \quad (1)$$
where β takes different values for different loss functions, and ȳ is the expected output:
$$\bar{\mathbf{y}} = \operatorname*{arg\,min}_{\mathbf{y}^* \in \mathbb{R}^c \,\mid\, \sum_{k=1}^{c} y^*_k = 1,\; y^*_k \geq 0} \mathbb{E}_{\mathcal{T}}[L(\mathbf{y}^*,\mathbf{y})]. \quad (2)$$
ȳ minimizes the variance term in (1), which can be regarded as the “center" or “ensemble" of y w.r.t. different T . Table 1 shows specific forms of L, ȳ, and β for different loss functions (the detailed derivations can be found in Appendix A). This paper focuses on the bias-variance decomposition of the ZO loss, because epoch-wise double descent of the test error is more obvious when the ZO loss is used (see Appendix C). To capture the overall bias and variance, we analyzed Ex,tET [L(t,y)], i.e., the expectation of ET [L(t,y)] over the distribution D.
2.2 TRACE THE BIAS AND VARIANCE TERMS OVER TRAINING EPOCHS
To trace the bias term Ex,t[L(t, ȳ)] and the variance term Ex,tET [L(ȳ,y)] w.r.t. the training epoch, we need to sample several training sets and train models on them respectively, so that the bias and variance terms can be estimated from them.
Concretely, let T ∗ denote the test set, f(x; Tj , q) the model f trained on Tj ∼ Dn (j = 1, 2, ...,K) for q epochs. Then, the estimated bias and variance terms at the q-th epoch, denoted as B(q) and V (q), respectively, can be written as:
$$B(q) = \mathbb{E}_{(\mathbf{x},\mathbf{t})\in\mathcal{T}^*}\left[L\left(\mathbf{t}, \bar{f}(\mathbf{x}; q)\right)\right], \quad (3)$$
$$V(q) = \mathbb{E}_{(\mathbf{x},\mathbf{t})\in\mathcal{T}^*}\left[\frac{1}{K}\sum_{j=1}^{K} L\left(\bar{f}(\mathbf{x}; q), f(\mathbf{x}; \mathcal{T}_j, q)\right)\right], \quad (4)$$
where
$$\bar{f}(\mathbf{x}; q) = H\left(\sum_{j=1}^{K} H(f(\mathbf{x}; \mathcal{T}_j, q))\right), \quad (5)$$
is the voting result of {f(x; T_j, q)}_{j=1}^K.
3 In real-world situations, the expected loss consists of three terms: bias, variance, and noise. Similar to Yang et al. (2020), we view t as the ground truth and ignore the noise term.
We should emphasize that, in real-world situations, D cannot be obtained, hence T_j in our experiments was randomly sampled from the training set (we sampled 50% of the training data for each T_j). As a result, despite showing the cause of epoch-wise double descent, the behaviors of bias and variance may be different when the whole training set is used.
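For concreteness, a PyTorch-style sketch (ours; `models` holds the K networks trained on the sampled subsets T_j at epoch q, and `test_loader` yields labelled test batches) of how Eqs. (3)–(5) can be estimated for the ZO loss:

```python
import torch

@torch.no_grad()
def zo_bias_variance_at_epoch(models, test_loader, num_classes):
    """Estimate B(q) and V(q) (Eqs. 3-5) for the zero-one loss."""
    bias_terms, var_terms = [], []
    for x, t in test_loader:
        # K hard predictions per sample, shape (K, batch)
        labels = torch.stack([m(x).argmax(dim=1) for m in models])
        # Voting result f̄(x; q), Eq. (5)
        votes = torch.nn.functional.one_hot(labels, num_classes).sum(dim=0)
        voted = votes.argmax(dim=1)
        bias_terms.append((voted != t).float())                             # Eq. (3)
        var_terms.append((labels != voted.unsqueeze(0)).float().mean(dim=0))  # Eq. (4)
    return torch.cat(bias_terms).mean().item(), torch.cat(var_terms).mean().item()
```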
We considered ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) models4 trained on SVHN (Netzer et al., 2011), CIFAR10 (Krizhevsky, 2009), and CIFAR100 (Krizhevsky, 2009). SGD and Adam (Kingma & Ba, 2014) optimizers with different learning rates were used. The batch size was set to 128, and all models were trained for 250 epochs with data augmentation. Prior to sampling {T_j}_{j=1}^K (K = 5) from the training set, 20% of the labels of the training data were randomly shuffled to introduce epoch-wise double descent.
Figure 1 shows the expected ZO loss and its bias and variance. The bias descends rapidly at first and then generally converges to a low value, whereas the variance behaves almost exactly the same as the test error, mimicking even small fluctuations of the test error. To verify the robustness of this observation, we performed additional experiments with different optimizers, learning rates, and levels of label noise (see Appendices E and G). All experimental results demonstrated that it is mainly the variance that contributes to epoch-wise double descent.
2.3 DISCUSSION
Contradicting to the traditional view that the variance keeps increasing because of overfitting, our experimental results show a more complex behavior: the variance starts high and then decreases rapidly, followed by a bell curve. The difference at the beginning (when the number of epochs is small) is mainly due to the choice of loss functions (see experimental results of bias-variance decomposition for MSE and CE losses in Appendix F). CE and MSE losses, analyzed in the traditional
4Adapted from https://github.com/kuangliu/pytorch-cifar
learning theory, can reflect the degree of difference between probabilities, whereas the ZO loss reflects only the labels. At the early stage of training, the output probabilities are close to random guesses, and hence a small difference in probabilities may lead to completely different labels, resulting in distinct variances for different loss functions. However, the reason why the variance begins to diminish at the late phase of training is still unclear. We will explore this problem in our future research.
3 OPTIMIZATION VARIANCE (OV)
This section proposes a new metric, OV, to measure the diversity of model updates introduced by random training batches during optimization. This metric can indicate test behaviors without any validation set.
3.1 NOTATION AND DEFINITION
Section 2 verified the synchronization between the test error and the variance, but its application is limited because estimating the variance requires: 1) a test set, and, 2) models trained on different training sets drawn from the same data distribution. It’d be desirable to capture the test behavior of a DNN using a single training set only, without a test set.
According to the definition in (1), the variance measures the model diversity caused by different training samples drawn from the same distribution, i.e., the outputs of DNN change according to the sampled training set. As the gradients are usually the only information transferred from training sets to models during the optimization of DNN, we need to measure the variance of a DNN introduced by the gradients calculated from different training batches. More specifically, we’d like to develop a metric to reflect the function robustness of DNNs to sampling noises. If the function captured by a DNN drastically varies w.r.t. different training batches, its generalization error is very likely to be poor due to a large variance introduced by the optimization procedure. A similar metric is the sharpness of local minima proposed by Keskar et al. (2017), which measures the robustness of local minima as an indicator of the generalization error. However, this metric is only meaningful for local minima and hence cannot be applied in the entire optimization process.
Mathematically, for a sample (x, t) ∼ D, let f(x;θ) be the logit output of a DNN with parameter θ. Let TB ∼ Dm be a training batch with m samples, g : TB → R|θ| the optimizer outputting the update of θ based on TB . Then, we can get the function distribution Fx(TB) over a training batch TB , i.e., f(x;θ + g(TB)) ∼ Fx(TB). The variance of Fx(TB) reflects the model diversity caused by different training batches. The formal definition of OV is given below. Definition 1 (Optimization Variance (OV)) Given an input x and model parameters θq at the q-th training epoch, the OV on x at the q-th epoch is defined as
$$OV_q(\mathbf{x}) \triangleq \frac{\mathbb{E}_{\mathcal{T}_B}\left[\|f(\mathbf{x};\boldsymbol{\theta}_q + g(\mathcal{T}_B)) - \mathbb{E}_{\mathcal{T}_B} f(\mathbf{x};\boldsymbol{\theta}_q + g(\mathcal{T}_B))\|_2^2\right]}{\mathbb{E}_{\mathcal{T}_B}\left[\|f(\mathbf{x};\boldsymbol{\theta}_q + g(\mathcal{T}_B))\|_2^2\right]}. \quad (6)$$
Note that OV_q(x) measures the relative variance, because the denominator in (6) eliminates the influence of the logit's norm. In this way, OV_q(x) at different training phases can be compared. The motivation here comes from the definition of the coefficient of variation5 (CV) in probability theory and statistics, which is also known as the relative standard deviation. CV is defined as the ratio between the standard deviation and the mean, and is independent of the unit in which the measurement is taken. Therefore, CV enables comparing the relative diversity between two different measurements.
In terms of OV, the variance of the logits, i.e., the numerator of OV, is not comparable across epochs due to the influence of their norm. In fact, even if the variance of the logits remains the same during the whole optimization process, its influence on the decision boundary is limited when the logits are large. Consequently, by treating the norm of the logits as the measurement unit, following CV we set OV to Σ_i σ_i² / Σ_i μ_i², where σ_i and μ_i represent the standard deviation and the mean of the i-th logit, respectively. If we remove the denominator, the value of OV no longer indicates the generalization error, especially at the early stage of the optimization process.
5https://en.wikipedia.org/wiki/Coefficient_of_variation
Intuitively, the OV represents the inconsistency of gradients’ influence on the model. If OVq(x) is very large, then the models trained with different sampled TB may have distinct outputs for the same input, leading to high model diversity and hence large variance. Note that here we emphasize the inconsistency of model updates rather than the gradients themselves. The latter can be measured by the gradient variance. The gradient variance and the OV are different, because sometimes diverse gradients may lead to similar changes of the function represented by DNN, and hence small OV. More on the relationship between the two variances can be found in Appendix B.
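A minimal PyTorch sketch (ours) of how E_x[OV_q(x)] in Eq. (6) can be estimated: for each of a few training batches, a one-step update g(T_B) is applied to a copy of the current parameters, the logits on a fixed probe set are recorded, and the relative variance is computed. `model`, `criterion`, the probe inputs, and the learning rate are placeholders; the paper's exact optimizer update may differ.

```python
import copy
import torch

def optimization_variance(model, criterion, probe_x, batches, lr=0.001):
    """Estimate E_x[OV_q(x)] (Eq. 6) from a handful of training batches."""
    logits = []
    for x, t in batches:
        m = copy.deepcopy(model)                         # keep θ_q untouched
        params = [p for p in m.parameters() if p.requires_grad]
        loss = criterion(m(x), t)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p.add_(g, alpha=-lr)                     # θ_q + g(T_B), one SGD-style step
            logits.append(m(probe_x))                    # f(x; θ_q + g(T_B)) on probe inputs
    F = torch.stack(logits)                              # (num_batches, n_probe, c)
    num = ((F - F.mean(dim=0)) ** 2).sum(dim=-1).mean(dim=0)   # numerator of Eq. (6)
    den = (F ** 2).sum(dim=-1).mean(dim=0)                     # denominator of Eq. (6)
    return (num / den).mean().item()
```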
3.2 EXPERIMENTAL RESULTS
We calculated the expectation of the OV over x, i.e., Ex[OVq(x)], which was estimated from 1,000 random training samples. The test set was not involved at all.
Figure 2 shows how the test accuracy (solid curves) and E_x[OV_q(x)] (dashed curves) change with the number of training epochs. Though sometimes the OV may not exhibit clear epoch-wise double descent, e.g., VGG16 in Figure 2(c), the symmetry between the solid and dashed curves generally exists, suggesting that the OV, which is calculated from the training set only, is capable of predicting the variation of the test accuracy. Similar results can also be observed using different optimizers and learning rates (see Appendix I).
Note that epoch-wise double descent is not a necessary condition for applying OV. As shown in Figure 2 (blue curves), we compared the values of OV and the generalization errors of DNNs when there is 0% label noise, from which we can see that the generalization error curves show no epoch-wise double descent, yet the proposed OV still works well.
OVq(x) in Figure 2 was estimated on all training batches; however, this may not be necessary: a small number of training batches are usually enough. To demonstrate this, we trained ResNet and VGG on several datasets using Adam optimizer with learning rate 0.0001, and estimated OVq(x) from different number of training batches. The results in Figure 3 show that we can well estimate the OV using as few as 10 training batches.
Another intriguing finding is that even unstable variations of the test accuracy can be reflected by the OV. This correspondence is clearer on simpler datasets, e.g., MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017). Figure 4 shows the test accuracy and OV for LeNet-5 (LeCun et al., 1998) trained on MNIST and FashionMNIST without label noise. Spikes of the OV and the test accuracy happen simultaneously at the same epoch.
Our experimental results demonstrate that the generalization ability of a DNN can be indicated by the OV during stochastic training, without using a validation set. This phenomenon can be used to determine the early stopping point, and beyond.
3.3 EARLY STOPPING WITHOUT A VALIDATION SET
The common process to train a DNN involves three steps: 1) partition the dataset into a training set and a validation set; 2) use the training set to optimize the DNN parameters, and the validation set to determine when to stop training, i.e., early stopping, and record the early stopping point; 3) train the DNN on the entire dataset (combination of training and validation sets) for the same number of epochs. However, there is no guarantee that the early stopping point on the training set is the same as the one on the entire dataset. So, an interesting question is: is it possible to directly perform early stopping on the entire dataset, without a validation set?
The OV can be used for this purpose. For more robust performance, instead of using the OV directly, we may need to smooth it to alleviate random fluctuations.
As an example, we smoothed the OV by a moving average filter of 10 epochs, and then performed early stopping on the smoothed OV with a patience of 10 epochs. As a reference, early stopping with the same patience was also performed directly on the test accuracy to get the groundtruth. However, it should be noted that the latter is unknown in real-world applications. It is provided for verification purpose only.
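The rule just described can be sketched as follows (ours; `ov_history` is assumed to be the list of E_x[OV_q(x)] values recorded after each epoch):

```python
import numpy as np

def early_stop_epoch(ov_history, window=10, patience=10):
    """Return the epoch at which training would have been stopped."""
    ov = np.asarray(ov_history, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(ov, kernel, mode='valid')   # moving-average smoothing
    best, best_epoch, wait = np.inf, 0, 0
    for epoch, value in enumerate(smoothed):
        if value < best:
            best, best_epoch, wait = value, epoch, 0
        else:
            wait += 1
            if wait >= patience:                        # patience-based stopping
                break
    return best_epoch + window - 1   # index back into the original epoch axis
```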
We trained different DNN models on several datasets (SVHN: VGG11 and ResNet18; CIFAR10: VGG13 and ResNet18; CIFAR100: VGG16 and ResNet34) with different levels of label noise (10% and 20%) and optimizers (Adam with learning rate 0.001 and 0.0001, SGD with momentum 0.9 and learning rate 0.01 and 0.001). Then, we compared the groundtruth early stopping point and the test accuracy with those found by performing early stopping on the OV6. The results are shown in Figure 5. The true early stopping points and those found from the OV curve were generally close, though there were some exceptions, e.g., the point near (40, 100) in Figure 5(a). However, the test errors, which are what a model designer really cares about, were always close.
3.4 SMALL SIZE OF THE TRAINING SET
For large datasets, a validation set can be easily partitioned from the training set without hurting the generalization performance. Therefore, OV is more useful in the case of small datasets. To this end, we performed experiments with small numbers (2000, 4000, 6000) of the training samples in CIFAR10 to verify the effectiveness of OV in this situation. Considering the limited numbers of training samples, we trained a small Convolution Neural Network (CNN) using Adam optimizer with learning rate 0.0001, whose detailed information can be found in Appendix H.
The experimental results are shown in Figure 6. It can be observed that: a) When the size of the training set is small, OV still correlates well with the generalization performance as a function of the training epochs, which verifies the validity of our results on small datasets; b) As expected, more training samples usually lead to better generalization performance, which can also be reflected by comparing the values of OV.
6Training VGG11 on SVHN with Adam optimizer and learning rate 0.001 was unstable (see Appendix D), so we did not include its results in Figure 5.
3.5 NETWORK SIZE
In addition to indicating the early stopping point, the OV can also explain some other generalization behaviors, such as the influence of the network size. To verify that, we trained ResNet18 with different network sizes on CIFAR10 for 100 epochs with no label noise, using the Adam optimizer with learning rate 0.0001. For each convolutional layer, we set the number of filters to k/4 (k = 1, 2, ..., 8) times that in the original model. We then examined the OV of ResNet18 with different network sizes to validate its correlation with the test accuracy. Note that we used the SGD optimizer with learning rate 0.001 and no momentum to calculate the OV, so that the cumulative influence during training can be removed to make the comparison fairer.
The results are shown in Figure 7. As k increases, the OV gradually decreases, i.e., the diversity of model updates introduced by different training batches decreases when widening ResNet18, suggesting that increasing the network size can improve the model’s resilience to sampling noise, which leads to better generalization performance. The Pearson correlation coefficient between the OV and the test accuracy reached −0.94 (p = 0.0006). Lastly, we need to point out that we did not observe a strong cross-model correlation between the OV and the test accuracy when comparing the generalization ability of significantly different model architectures, e.g., VGG
and ResNet. Our future research will look for a more universal cross-model metric to illustrate the generalization performance.
4 CONCLUSIONS
This paper has shown that the variance dominates the epoch-wise double descent, and highly correlates with the test error. Inspired by this finding, we proposed a novel metric called optimization variance, which is calculated from the training set only but powerful enough to predict how the test error changes during training. Based on this metric, we further proposed an approach to perform early stopping without any validation set. Remarkably, we demonstrated that the training set itself may be enough to predict the generalization ability of a DNN, without a dedicated validation set.
Our future work will: 1) apply the OV to other tasks, such as regression problems, unsupervised learning, and so on; 2) figure out the cause of the second descent of the OV; and, 3) design regularization approaches to penalize the OV for better generalization performance.
A BIAS-VARIANCE DECOMPOSITION FOR DIFFERENT LOSS FUNCTIONS
This section presents detailed deduction of bias-variance decomposition for different loss functions.
A.1 THE MEAN SQUARED ERROR (MSE) LOSS
For the MSE loss, we have L(t,y) = ‖t−y‖22, and need to calculate ȳ based on (2). We first ignore the constraints and solve the following problem:
$$\tilde{\mathbf{y}} = \operatorname*{arg\,min}_{\mathbf{y}^*} \mathbb{E}_{\mathcal{T}}[\|\mathbf{y}^* - \mathbf{y}\|_2^2], \quad (7)$$
whose solution is ỹ = ET y. It can be easily verified that ỹ satisfies the constraints in (2), and hence ȳ = ỹ = ET y.
Then, we can decompose the MSE loss as: ET [‖t− y‖22] = ET [‖t− ȳ + ȳ − y‖22]
= ET [‖t− ȳ‖22 + ‖ȳ − y‖22 + 2(t− ȳ)T (ȳ − y)] = ‖t− ȳ‖22 + ET [‖ȳ − y‖22] + 0, (8)
where the first term denotes the bias, and the second denotes the variance. We can also get β = 1.
A.2 CROSS-ENTROPY (CE) LOSS
For $L(\mathbf{t},\mathbf{y}) = \sum_{k=1}^{c} t_k \log\frac{t_k}{y_k}$, ȳ can be obtained by applying the Lagrange multiplier method (Boyd & Vandenberghe, 2004) to (2):
$$l(\mathbf{y}^*, \lambda) = \mathbb{E}_{\mathcal{T}}\left[\sum_{k=1}^{c} y^*_k \log\frac{y^*_k}{y_k}\right] + \lambda\cdot\left(1 - \sum_{k=1}^{c} y^*_k\right). \quad (9)$$
To minimize $l(\mathbf{y}^*, \lambda)$, we need to compute its partial derivatives with respect to $\mathbf{y}^*$ and $\lambda$:
$$\frac{\partial l}{\partial y^*_k} = \mathbb{E}_{\mathcal{T}}\left[\log\frac{y^*_k}{y_k} + 1\right] - \lambda, \quad k = 1, 2, \ldots, c,$$
$$\frac{\partial l}{\partial \lambda} = 1 - \sum_{k=1}^{c} y^*_k.$$
By setting these derivatives to 0, we get:
$$\bar{y}_k = \frac{1}{Z}\exp\{\mathbb{E}_{\mathcal{T}}[\log y_k]\}, \quad k = 1, 2, \ldots, c, \quad (10)$$
where $Z = \sum_{k=1}^{c} \exp\{\mathbb{E}_{\mathcal{T}}[\log y_k]\}$ is a normalization constant independent of $k$.
Because
$$\mathbb{E}_{\mathcal{T}}\left[\sum_{k=1}^{c} \alpha_k \log\frac{\bar{y}_k}{y_k}\right] = -\log Z, \quad \forall \alpha_k \text{ with } \sum_{k=1}^{c} \alpha_k = 1, \quad (11)$$
we have:
$$\mathbb{E}_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k \log\frac{t_k}{y_k}\right] = \mathbb{E}_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k\left(\log\frac{t_k}{\bar{y}_k} + \log\frac{\bar{y}_k}{y_k}\right)\right]$$
$$= \sum_{k=1}^{c} t_k \log\frac{t_k}{\bar{y}_k} + \mathbb{E}_{\mathcal{T}}\left[\sum_{k=1}^{c} t_k \log\frac{\bar{y}_k}{y_k}\right]$$
$$= \sum_{k=1}^{c} t_k \log\frac{t_k}{\bar{y}_k} - \log Z$$
$$= \sum_{k=1}^{c} t_k \log\frac{t_k}{\bar{y}_k} + \mathbb{E}_{\mathcal{T}}\left[\sum_{k=1}^{c} \bar{y}_k \log\frac{\bar{y}_k}{y_k}\right], \quad (12)$$
from which we obtain β = 1.
A.3 THE ZERO-ONE (ZO) LOSS
For the ZO loss, i.e., L(t, y) = 1{H(t) ≠ H(y)}, ȳ is the voting result, i.e., H(E_T[H(y)]), so that the variance can be minimized. However, the value of β depends on the relationship between ȳ and t.
When ȳ = t, we have:
$$\mathbb{E}_{\mathcal{T}}[\mathbf{1}\{H(\mathbf{t}) \neq H(\mathbf{y})\}] = 0 + \mathbb{E}_{\mathcal{T}}[\mathbf{1}\{H(\bar{\mathbf{y}}) \neq H(\mathbf{y})\}] = \mathbf{1}\{H(\mathbf{t}) \neq H(\bar{\mathbf{y}})\} + \mathbb{E}_{\mathcal{T}}[\mathbf{1}\{H(\bar{\mathbf{y}}) \neq H(\mathbf{y})\}], \quad (13)$$
clearly, β = 1.
When ȳ ≠ t, we have:
$$\mathbb{E}_{\mathcal{T}}[\mathbf{1}\{H(\mathbf{t}) \neq H(\mathbf{y})\}] = P_{\mathcal{T}}(H(\mathbf{y}) \neq \mathbf{t}) = 1 - P_{\mathcal{T}}(H(\mathbf{y}) = \mathbf{t})$$
$$= \mathbf{1}\{H(\mathbf{t}) \neq H(\bar{\mathbf{y}})\} - P_{\mathcal{T}}(H(\mathbf{y}) = \mathbf{t} \mid H(\mathbf{y}) = \bar{\mathbf{y}})\, P_{\mathcal{T}}(H(\mathbf{y}) = \bar{\mathbf{y}}) - P_{\mathcal{T}}(H(\mathbf{y}) = \mathbf{t} \mid H(\mathbf{y}) \neq \bar{\mathbf{y}})\, P_{\mathcal{T}}(H(\mathbf{y}) \neq \bar{\mathbf{y}}). \quad (14)$$
Since H(ȳ) ≠ H(t), it follows that
$$P_{\mathcal{T}}(H(\mathbf{y}) = \mathbf{t} \mid H(\mathbf{y}) = \bar{\mathbf{y}}) = 0. \quad (15)$$
Then, (14) becomes:
$$\mathbb{E}_{\mathcal{T}}[\mathbf{1}\{H(\mathbf{t}) \neq H(\mathbf{y})\}] = \mathbf{1}\{H(\mathbf{t}) \neq H(\bar{\mathbf{y}})\} - P_{\mathcal{T}}(H(\mathbf{y}) = \mathbf{t} \mid H(\mathbf{y}) \neq \bar{\mathbf{y}})\, P_{\mathcal{T}}(H(\mathbf{y}) \neq \bar{\mathbf{y}})$$
$$= \mathbf{1}\{H(\mathbf{t}) \neq H(\bar{\mathbf{y}})\} - P_{\mathcal{T}}(H(\mathbf{y}) = \mathbf{t} \mid H(\mathbf{y}) \neq \bar{\mathbf{y}})\, \mathbb{E}_{\mathcal{T}}[\mathbf{1}\{H(\bar{\mathbf{y}}) \neq H(\mathbf{y})\}], \quad (16)$$
hence, β = −P_T(H(y) = t | H(y) ≠ ȳ).
B CONNECTIONS BETWEEN THE OPTIMIZATION VARIANCE AND THE GRADIENT VARIANCE
This section shows the connection between the gradient variance and the optimization variance in Definition 1.
For simplicity, we ignore q in OVq(x) and denote g(TB)−ETBg(TB) by g̃(TB). Then, the gradient variance Vg can be written as:
$$V_g = \mathbb{E}_{\mathcal{T}_B}\left[\|g(\mathcal{T}_B) - \mathbb{E}_{\mathcal{T}_B} g(\mathcal{T}_B)\|_2^2\right] = \mathbb{E}_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T \tilde{g}(\mathcal{T}_B)\right]. \quad (17)$$
Denote the Jacobian matrix of the logits f(x;θ) w.r.t. θ by Jθ(x), i.e.,
Jθ(x) = [∇θf1(x;θ),∇θf2(x;θ), ...,∇θfc(x;θ)] , (18)
where fj(x;θ) is the j-th entry of f(x;θ), and c is the number of classes.
Using first order approximation, we have:
f(x;θ + g(TB)) ≈ f(x;θ) + Jθ(x)T g(TB), (19)
and OV (x) can be written as:
$$OV(\mathbf{x}) \approx \frac{\mathbb{E}_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T J_{\boldsymbol{\theta}}(\mathbf{x}) J_{\boldsymbol{\theta}}(\mathbf{x})^T \tilde{g}(\mathcal{T}_B)\right]}{f(\mathbf{x};\boldsymbol{\theta})^T f(\mathbf{x};\boldsymbol{\theta})} + \mathbb{E}_{\mathcal{T}_B}\left[O\!\left(\|g(\mathcal{T}_B)\|^2\right)\right] \approx \frac{\mathbb{E}_{\mathcal{T}_B}\left[\tilde{g}(\mathcal{T}_B)^T J_{\boldsymbol{\theta}}(\mathbf{x}) J_{\boldsymbol{\theta}}(\mathbf{x})^T \tilde{g}(\mathcal{T}_B)\right]}{f(\mathbf{x};\boldsymbol{\theta})^T f(\mathbf{x};\boldsymbol{\theta})}. \quad (20)$$
The only difference between Ex [OV (x)] and Vg is the middle weight matrix Ex [ Jθ(x)Jθ(x) T
f(x;θ)T f(x;θ)
] .
This suggests that penalizing the gradient variance can also reduce the optimization variance.
Figure 8 presents the curves of V_g during training. It can be observed that V_g also shows some ability to indicate the generalization performance. However, compared with the results in Figure 2, OV demonstrates a stronger ability to indicate the generalization error than V_g. More importantly, V_g loses its comparability when the network size increases, while OV is more robust to architectural changes, because the middle weight matrix E_x[ J_θ(x) J_θ(x)^T / (f(x; θ)^T f(x; θ)) ] normalizes V_g, as illustrated in Figure 7.
We also notice that ||E_{T_B} g(T_B)||_2^2 is usually far smaller than E_{T_B} ||g(T_B)||_2^2, hence V_g and the gradient norm E_{T_B} ||g(T_B)||_2^2 present almost the same curves during training.
[Figure 8: Gradient variance V_g during training; panels correspond to VGG11, VGG13, and VGG16.]
C BEHAVIORS OF DIFFERENT LOSS FUNCTIONS
Many different loss functions can be used to evaluate the test performance of a model. They may have very different behaviors w.r.t. the training epochs. As shown in Figure 9, the epoch-wise double descent can be very conspicuous on test error, i.e., the ZO loss, but barely observable on CE and MSE losses, which increase after the early stopping point. This is because at the late stage of training, model outputs approach 0 or 1, resulting in the increase of the CE and MSE losses on the misclassified test samples, though the decision boundary may be barely changed. When rescaling the weights of the last layer by a positive real number, the ZO loss remains the same because of the untouched decision boundary, whereas the CE and MSE losses are changed. Thus, we perform bias-variance decomposition on the ZO loss to study epoch-wise double descent.
D VGG11 ON SVHN BY ADAM OPTIMIZER WITH LEARNING RATE 0.001
Training VGG11 on SVHN by Adam optimizer with learning rate 0.001 is unstable, as shown in Figure 14(a). Figure 10 shows the test error and optimization variance. For 0% and 10% label noise, the test error stays large (the test accuracy is low) for a long period in the early phase of training. The optimization variance is also abnormal.
E LOSS, BIAS AND VARIANCE W.R.T. DIFFERENT LEVELS OF LABEL NOISE
Label noise makes epoch-wise double descent more conspicuous to observe (Nakkiran et al., 2020). If the variance is the major cause of double descent, it should match the variation of the test error
when adding different levels of label noise. Figure 11 shows an example to compare the loss, variance, and bias w.r.t. different levels of label noise. Though label noise impacts both the bias and the variance, the latter appears to be more sensitive and shows better synchronization with the loss. For instance, when we randomly shuffle a small percentage of labels, say 10%, a valley clearly occurs between 20 and 50 epochs for the variance, whereas it is less obvious for the bias. In addition, it seems that the level of label noise does not affect the epoch at which the loss reaches its first minimum. This is surprising, because the label noise is considered highly related to the complexity of the dataset. Our future work will explore the role label noise plays in the generalization of DNNs.
[Figure 11: Loss, variance, and bias w.r.t. different levels of label noise; rows correspond to Loss, Var, and Bias.]
F BIAS AND VARIANCE TERMS W.R.T. DIFFERENT LOSS FUNCTIONS
G BIAS AND VARIANCE TERMS W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES
H DETAILED INFORMATION OF THE SMALL CNN MODEL
We present the detailed information of the architecture trained with small numbers of training samples. It consists of two convolutional layers and two fully-connected layers, as shown in Table 2.
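A PyTorch sketch consistent with this description is given below; since Table 2 is not reproduced here, the channel widths, kernel sizes, and hidden dimension are our own assumptions:

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two convolutional layers + two fully-connected layers (CIFAR10-sized input)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```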
I OPTIMIZATION VARIANCE AND TEST ACCURACY W.R.T. DIFFERENT OPTIMIZERS AND LEARNING RATES | 1. What are the main contributions and novel aspects introduced by the paper regarding epoch-wise double descent phenomena?
2. What are the weaknesses of the paper compared to prior works, particularly in evaluating optimization variance?
3. Do you have any questions regarding the paper's methodology or results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
The paper under review studies the epoch wise double descent phenomena empirically. The epoch wise double descent phenomena is the observation that the risk of a large neural network trained with SGD first decreases, then increases, and finally decreases again as a function of the epochs or SGD steps. In addition, it proposes a quantity called ``optimization variance (OV)'', and it demonstrate that OV correlates with the test error. Based on this observation, it proposes to early stop when the OV reaches a minimum.
Strong points are that the paper i) is an early work studying epoch-wise double descent, and that ii) the newly introduced optimization variance is an interesting concept and might very well be useful for finding good early stopping points.
Weak points are that i) the findings about epoch-wise double descent are not conclusive, and that ii) optimization variance is not sufficiently evaluated to judge its usefulness. Both is discussed in more detail in Comments 1-6 below.
Based on comments 1-6 below I think that the paper's conclusions are not yet supported by the papers experiment. I do think that the paper can be developed into a very nice contribution, if more substantive findings about epoch-wise double descent can be obtained, and/or if the value of the newly introduced optimization variance is evaluated more thoroughly (e.g., through simple theory, or through more rigorous experiments).
Comments: 1/ The paper's main finding on epoch-wise double descent is that ``the variance dominates the epoch-wise double descent of the test error'', meaning that ``it is mainly the variance that contributes to epoch-wise double descent''. This statement is overly general, because it is drawn from observations for one particular setup (20% label noise, CIFAR10 and SVHN). If the label noise is less, it becomes apparent that both the variance and bias work together (as a sum) to generate a double-descent like curve. Adding curves for other distributions (e.g., less label noise, other datasets) can illustrate that.
2/ The paper's main contribution to the study of epoch-wise double descent is the empirical bias-variance decomposition. However, this approach does not enable to draw conclusions on why double descent occurs, as the paper notes itself ``the reason why the variance begins to diminish at the late phase of training is still unclear''.
3/ The paper finds a strong empirical correlation between the test error and the newly introduced optimization variance (OV) (Fig. 2), and proposes to early stop based on the OV. However, Fig. 2 also shows when this fails: c) ResNet 34 on CIFAR-10 shows that OV is minimal at 250 epochs, while the accuracy is highest after about 15-20 epochs. Of course, such an early stopping rule is allowed to fail sometimes, but the evidence that this is a good early stopping rule is not convincing.
4/ The value of OV would be much more convincing if the paper could provide a theoretical result even for a simple toy case like a linear model. It would also be more convincing if the benefits of OV hold for a broad set of models; the paper points out that it doesn't generalize to a large class of models: ``we need to point out that we did not observe a strong correlation between the OV and the test accuracy when using significantly different architectures''.
5/ The paper omits a recent reference on epoch-wise double descent: https://arxiv.org/abs/2007.10099, which is very related because it explains epoch-wise double descent as overlapping bias variance tradeoffs for a variety of setups, both empirically and practically.
6/ As https://arxiv.org/abs/2007.10099 finds, epoch-wise double descent can bit mitigated, and this can typically improves performance. How does the paper's approach to find the early stopping point work in this setup, where double descent is mitigated?
Minor comments: ``20% labels of the training data were randomly shuffled to strengthen epoch-wise double descent'': This is misleading as it suggests that with less or zero label noise, epoch-wise double descent still occurs, but it does not for the models and setup considered in the paper.
UPDATE: Thanks for the response, I have responded below and kept the score constant. |
ICLR | Title
Conditional Invariances for Conformer Invariant Protein Representations
Abstract
Representation learning for proteins is an emerging area in geometric deep learning. Recent works have factored in both the relational (atomic bonds) and the geometric aspects (atomic positions) of the task, notably bringing together graph neural networks (GNNs) with neural networks for point clouds. The equivariances and invariances to geometric transformations (group actions such as rotations and translations) so far treat large molecules as rigid structures. However, in many important settings, proteins can co-exist as an ensemble of multiple stable conformations. The conformations of a protein, however, cannot be described as input-independent transformations of the protein: Two proteins may require different sets of transformations in order to describe their set of viable conformations. To address this limitation, we introduce the concept of conditional transformations (CT). CT can capture protein structure, while respecting the constraints on dihedral (torsion) angles and steric repulsions between atoms. We then introduce a Markov chain Monte Carlo framework to learn representations that are invariant to these conditional transformations. Our results show that endowing existing baseline models with these conditional transformations helps improve their performance without sacrificing computational efficiency.
1 INTRODUCTION
The literature on geometric deep learning has achieved much success with neural networks that explicitly model equivariances (or invariances) to group transformations (Cohen & Welling, 2016; Maron et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020). Among applications to physical sciences, group equivariant graph neural networks and transformers have specifically found applications to small molecules, as well as large molecules (e.g. proteins) with tremendous success (Klicpera et al., 2020; Anderson et al., 2019; Fuchs et al., 2020; Hutchinson et al., 2021; Satorras et al., 2021; Batzner et al., 2021). Specifically, machine learning for proteins (and 3D macromolecular structures in general) is a rapidly growing application area in geometric deep learning, (Bronstein et al., 2021; Gerken et al., 2021). Traditionally, proteins have been modeled using standard 3D CNNs (Karimi et al., 2019; Pagès et al., 2019), graph neural networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017), and transformers (Vaswani et al., 2017). More recently, several works Jing et al. (2020; 2021); Jumper et al. (2021); Hermosilla et al. (2021) have enriched the above models with neural networks that are equivariant (invariant) to transformations from the Euclidean and rotation groups. While equivariance (invariance) to the transformations in these groups are necessary properties for the model, unfortunately, they are limited to only capture rigid transformations of the input object.
However, these models may not yet account for all invariances of the input pertinent to the downstream task, and the transformations they are invariant to do not depend on the input. For instance, invariance to the Euclidean group restricts the protein representation to act as if the protein were a rigid structure, regardless of the protein under consideration. However, treating proteins as rigid structures may not be optimal for many downstream tasks, and different proteins may have different types of conformations (protein 3D structures with flexible side chains) (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016). The existing rigid body assumption in protein representations may hurt these methods on datasets and downstream tasks that require protein representations to be invariant to a specific set of protein conformations. For example, for most proteins, regardless of the (viable) side chain conformation under consideration, their protein fold class and other scalar properties remain the same, their mutation (in)stability remains unaltered, and their protein-ligand binding affinity (apart from changes at the ligand binding site) remains the same.
In light of this limitation of current methods, a question naturally arises: Is it possible to learn conformer-invariant protein representations?
Our Approach: We propose a representation method where the set of symmetries in the task are input-dependent, which we denote conditional invariances. For our specific application, we model every protein as a directed forest, where every amino acid forms a directed tree. We leverage the constructed directed forest of the protein to sample viable protein conformations (where viability is checked with Protein structure validation tools such as Molprobity (Davis et al., 2007; Chen et al., 2010)), which is additionally coupled with a Markov chain Monte Carlo (MCMC) framework to create data augmentations to train the neural network to learn conformer invariant representations.
Our contributions can be summarized as follows: • We provide guiding principles for defining conditional invariant representations as inductive biases,
which serves as a generalization of group invariant neural networks. • We then provide a principled strategy to sample conformations for any given protein from the
support of its protein conformer distribution (which consists of all its viable protein conformations) which captures their true flexibility. Viable conformations respect domain specific constraints such as those on dihedral angles, steric repulsions, among others. This is particularly useful in the scenario when the set of viable transformations is different across different proteins.
• Further, we develop an MCMC-based learning framework which guides the sampling of protein conformations such that the (asymptotic empirical) average representation obtained by all viable conformations of a protein are identical.
• Finally, we perform experimental evaluation of our proposal, where endowing baseline models with our proposed strategy (via an implicit data augmentation scheme) shows noticeable improvements on multiple different classification and regression tasks on proteins.
2 CONDITIONAL INVARIANCES FOR PROTEINS
The symmetry of an object is the set of transformations that leaves the object invariant. The notion of symmetries, expressed through the action of a group on functions defined on some domain, has served as a fundamental concept of geometric deep learning. In this section, we start by defining the concept of input-dependent conditional symmetries and then proceed to specifically tailor these conditional symmetries that are both computationally efficient and useful for representing protein conformations. Some group theoretic preliminaries are presented in the Appendix A.2. Definition 2.1 (Conditionally symmetric-invariant functions). A function f : Ω→ R, is said to be conditionally symmetric-invariant if
f(tx · x) = f(x) ,∀tx ∈ Sx, ∀x ∈ Ω, where Sx is a set of transformations unique to element x and tx : Ω→ Ω.
It is easy to see that conditionally invariant functions are a generalization of group invariant functions where Sx ≡ G ∀x ∈ Ω where G is a group. The above definition is motivated by the fact that representations for proteins may not necessarily be limited to be invariant just to group actions, but to a more general set of protein specific transformations. We detail this motivation next.
A protein molecule is composed of amino acids (say n amino acids), where each atom in the protein belongs to one amino acid ∈ {1, . . . , n}. Excluding the hydrogen atoms, every amino acid contains four atoms known as the backbone atoms (see Figure 1), and other atoms which belong to the side chain. Traditionally, protein structures are solved by X-ray crystallography or cryo-EM and the obtained structure is normally considered a unique 3D conformation of the molecule. Molecules available via the Protein Data Bank (Berman et al., 2000) generally include only unique sets of coordinates to demonstrate the 3D structures, which lead to the unique answer as ‘gold standard’ in structure prediction, such as protein structure prediction competitions - CASP (Kryshtafovych et al., 2019). Under these assumptions protein side-chain conformations are usually assumed to be clustered into rotamers, which are rigid conformations represented by discrete side-chain dihedral angles.
However, works over the past decade (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016) have observed that — while the backbone atoms of all n amino acids in a protein molecule together (i.e. 4n atoms) form a rigid structure, its side chains can exhibit flexible (continuous and discrete) conformations beyond clustered rotamers. The underlying goal of our work is to capture this inherent flexibility of proteins and to ensure different conformers of the same protein get identical representations (More details – Appendix A.10). The above problem of capturing protein conformations is further compounded by the fact that protein conformations are often times unique to the protein - i.e. conformations exhibited by a protein are input (protein) specific. The desiderata, therefore is a model which outputs conformation invariant representations when the conformations (symmetries) exhibited by a protein varies across proteins when access to only a single protein conformer is available.
We denote a protein as a tuple p = (V,Xs,Xp) (data from pdb files) where V is the set of nodes (every atom is a node) in the protein, Xs ∈ Rm×d, d ≥ 1 is the scalar atom feature matrix associated with the atoms, Xp ∈ Rm×3 (where m denotes the number of atoms in the protein) are the positional coordinates associated with the atoms in the protein. Without loss of generality, we number the nodes in V = {1, . . . ,m} following the same ordering of the rows in Xs,Xp.
Definition 2.2 (Rigid Backbone Protein Conformations). For an m-atom protein p with atom positional coordinates Xp ∈ R^{m×3} (recall that by definition p contains information about Xp), let Cp ⊂ R^{m×3} denote the set of viable conformations of p, which keeps the positional coordinates associated with the backbone fixed. We use Tp ⊂ R^{m×3×3} (where a 3×3 matrix is associated with every single atom) to denote the set of non-isometric transformations of p which yield the set Cp, i.e., ∀ cp ∈ Cp, ∃ tp ∈ Tp such that, for an atom with index i ∈ {1, . . . , m}, the new position of atom i is Xp[i] tp[i]^T = cp[i], with Xp[i] ∈ R^{1×3} and tp[i] ∈ R^{3×3}.
Tp forms a set of transformations which acts on the protein atomic coordinates via matrix multiplication. Two elements of Tp (with corresponding matrices of individual atoms and then aggregating for the protein as a whole) may or may not combine via matrix multiplication (as the binary operation) to result in another element of Tp i.e. matrix multiplication is not necessarily closed in Tp. A concrete example of the non group structure of protein conformations is provided in Appendix A.3.
Unfortunately, sampling transformations from Tp to obtain viable protein conformations is an unattainable task, since this would entail first sampling a transformation from Rm×3×3 and then verifying if the transformation is viable (and the vast majority of transformations are not viable). We will address this issue using an MCMC method - which is then combined with MCGD training of the neural network (Sun et al., 2018) to learn conformation invariant protein representations where different viable conformations are obtained via MCMC are used in every batch. Markov Chain Gradient Descent (MCGD) is a variant of Stochastic Gradient Descent (SGD), where the samples are drawn from the trajectory of a Markov Chain. We note that, while there have been prior Monte Carlo and MCMC based techniques for sampling protein conformations (Boomsma et al., 2013; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009), the overarching goals of the existing MCMC methods are significantly different compared to ours. The existing methods define a distribution over the conformations (based on the properties of the bonds, etc.) and then sample highly probable conformations from this distribution. Our method, is much simpler and only requires that the chain being used is ergodic and is invariant to the actual form of the distribution. Therefore, we devise a simple chain whose transitions are easier to sample and don’t require us to compute conditionals of an energy based model. However, we also note that the existing Markov chains can be used as drop-in replacements to sample conformations as part of our framework. Before specifying our MCMC procedure, we specify the invariance requirements for learning protein representations.
A symmetric-invariant function for proteins (per Definition 2.1), should be invariant to (i) transformations which change its atomic coordinates to any of its rigid backbone conformations in Cp (ii) transformations which change the atomic coordinates of all its atoms via rigid body transformations — group actions such as rotations/ translations (iii) a combination of the two above which transforms it into one of its viable conformations and is then further acted upon by rigid body transformations.
3 OBTAINING VIABLE CONFORMATIONS
Before we describe our MCMC procedure to sample atomic coordinates from the set of rigid backbone conformations , we need a way to sample from the local neighborhood of a conformation.
[Figure 2 caption fragment: G_i = SO(3) for all i ≠ 1.] We note that in protein molecules not all transformations about a node would be allowed due to steric repulsions between atoms as well as potential overlaps of atoms.
3.1 A SIMPLE PROTEIN CONFORMATION SAMPLING STRATEGY
Akin to Irbäck & Mohanty (2006), to efficiently sample a candidate conformation from Cp, we will follow a multi-step approach: First, we (i) construct a directed tree for every amino acid in the protein to capture the inherent flexibility in protein structures. Then, we (ii) leverage a directed forest of the protein (described next) to transform its atomic coordinates and check for viability.
Part 1. Directed forest construction. Using the input protein molecule, we construct a directed forest using the following three step procedure: 1. Each amino acid in the protein gets its own directed tree. The atoms in the amino acid’s backbone
form the base (root) nodes (i.e., there can be multiple base nodes in every amino acid). This is illustrated as node 1 in Figure 2’s tree. The set of base nodes of all the amino acids in the protein are jointly referred to as the base nodes of the protein (or equivalently of the directed forest).
2. The atoms adjacent to the amino acid’s backbone via a covalent bond become their immediate children in the tree. For example, the node SC1 forms the child of αC in the Figure 1.
3. The other covalent bonds of the children establish the grandchildren of the root, and so on, until all the atoms in the amino acid are a part of the tree. For example, SC2 and SC3 become the children of the node SC1 in the directed tree constructed for Figure 1.
Figure 2 is an illustration of a directed tree constructed with the above procedure, where node 1 is the root node, with nodes 2 and 4 as its children, and so on. It is important to note that the above procedure ensures that, in the constructed tree, each (non-root) node has a single parent and may have arbitrarily many children. Further, cycles/rings which are present in the molecule are broken on the basis of bond length, i.e., larger bonds are chosen before smaller bonds. Ties between same-length bonds are broken arbitrarily. While multiple directed forests can be created for a given protein since bonds of equal length are broken arbitrarily, in our implementation, given a protein, the tree is fixed because the directed forest for every protein is created exactly once (in the preprocessing step), where the ties are broken deterministically based on their ordering in the pdb file.
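A sketch of this construction (ours; it assumes the protein is given as per-atom records with covalent bonds, bond lengths, and a flag marking backbone atoms, and it treats all backbone atoms as roots):

```python
from collections import defaultdict, deque

def build_directed_forest(bonds, bond_length, backbone):
    """Build the directed forest of a protein.

    bonds       : dict atom_id -> set of covalently bonded atom_ids
    bond_length : dict (i, j) -> length of the covalent bond between i and j
    backbone    : set of backbone atom_ids (these become the root/base nodes)
    """
    parent = {}                      # child -> parent (each non-root node has one parent)
    children = defaultdict(list)     # parent -> list of children
    visited = set(backbone)
    queue = deque(backbone)
    while queue:
        u = queue.popleft()
        # expand longer bonds first; this breaks rings in favour of the larger bond
        for v in sorted(bonds[u] - visited, key=lambda v: -bond_length[(u, v)]):
            if v in visited:
                continue
            parent[v] = u
            children[u].append(v)
            visited.add(v)
            queue.append(v)
    return parent, children
```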
Next, we shall look at how the atomic coordinates are transformed using the directed tree.
Part 2. Conditional transformations of atomic coordinates. Let Xp be the available input atomic coordinates of the protein. To obtain a new candidate protein conformation (say X′p ∈ R^{m×3}), we sample, uniformly at random, one of the directed trees from the forest. Next, we transform the atomic coordinates of the subset of atoms in this directed tree; this set is denoted by A ⊂ V . That is, in the candidate conformation, X′p[i] = Xp[i] ∀i ∈ V \ A — or equivalently tp[i] = I ∀i ∈ V \ A. Starting with the root node, we use breadth first search (BFS) to traverse the directed tree and update all the descendants of the node currently being considered. For the backbone atoms we leave X′p[i] = Xp[i] (or equivalently tp[i] = I), i.e., atomic coordinates are unchanged for backbone atoms. Next, we use pointed sets to describe approximate transformations of a given protein through a directed tree.
Definition 3.1 (Pointed sets). A pointed set is an ordered pair (X,x0), where X is a set and x0 ∈ X is called the basepoint.
For instance, let Mi = {Xp[j] : j ∈ Ni}, where Ni is the set containing i and all its descendants in its directed tree. Then, (Mi,Xp[i]) is a pointed set.
Let the parent of node i be denoted by ◦, and let G_i^{(g_◦·Xp[◦])} (in practice, SO(3)) be the group from which actions g_i ∈ G_i^{(g_◦·Xp[◦])} are sampled uniformly — actions that can transform the positional coordinates of node i and its descendants about its parent in the directed tree, where g_◦ · Xp[◦] denotes the transformed atomic coordinates of the parent of node i (we use BFS, i.e., a top down approach). In the case of the root node (for example node 1 in Figure 2), the actions are simply drawn from G_root (or G_1 in Figure 2).
The associated transformation of the atomic coordinates of the atoms in N_i is given by the mapping h_i : (M_i, Xp[i]) → (R^{|N_i|×3}, g_◦ · Xp[◦] + g_i · (Xp[i] − Xp[◦])), where the coordinates of node j ∈ N_i are updated as:
X′p[j] ← g_◦ · Xp[◦] + g_i · (Xp[j] − Xp[◦])    (1)
If node j ≠ i, its coordinates X′p[j] can be further updated as we traverse the tree. When G_i^{(g_◦·Xp[◦])} = SO(3) ∀i ∈ V , every node in the directed tree can be rotated about its parent, as shown in Figure 3 (Appendix A.4), which depicts the side chain of the amino acid shown in Figure 1.
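As an illustration of the traversal (our reading of Equation (1), not the authors' code), the sketch below rigidly rotates each child node together with all of its descendants about the already-updated position of its parent, drawing uniform rotations from SO(3) via scipy; the children/descendants maps are assumed to come from the directed-tree construction above.

```python
# Illustrative sketch of the BFS coordinate update in the spirit of Equation (1).
import numpy as np
from collections import deque
from scipy.spatial.transform import Rotation

def transform_tree(X, children, roots, descendants):
    """X: (m, 3) atomic coordinates of the whole protein.
    children: dict node -> list of child nodes in the sampled directed tree.
    roots: base (backbone) nodes; their coordinates stay fixed.
    descendants: dict node -> list containing the node and all of its descendants."""
    X_new = X.copy()
    queue = deque(roots)
    while queue:
        parent = queue.popleft()
        for child in children.get(parent, []):
            g = Rotation.random().as_matrix()        # g_i drawn uniformly from SO(3)
            pivot = X_new[parent]                    # already-transformed parent position
            idx = descendants[child]                 # N_i: the child and all of its descendants
            X_new[idx] = pivot + (X_new[idx] - pivot) @ g.T
            queue.append(child)
    return X_new
```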
Algorithm 1 Sampling a viable conformation of cp
1: Input: Conformation cp of protein p
2: Obtain the directed forest corresponding to the protein, with n directed trees, where n is the number of amino acids in the protein
3: repeat
4:    Sample y ∼ UNIF({1, 2, · · · , n}) and obtain the corresponding directed tree
5:    Traverse the directed tree via BFS and update atomic coordinates via Equation (1) to obtain c′p, a candidate conformation of cp — where group actions are always sampled uniformly from SO(3)
6:    Check if c′p is a valid protein conformation via protein structure validation tools
7: until c′p is a valid conformation
8: Return: c′p, a valid conformation of cp
However, each candidate X′p obtained via the above transformation process may not belong to Cp. To that end, protein structure validation tools may be used to check the validity of these candidates. Molprobity (Davis et al., 2007; Chen et al., 2010; Williams et al., 2018) is one such tool, which outputs scores for different metrics (such as the number of dihedral (torsion) angles outside the allowed threshold, the number of atom-atom clashes arising due to steric repulsions, etc.) that can be compared with those of the originally provided conformation Xp. Note that while Molprobity is not perfect, and we may end up learning models that are invariant to more than just the viable conformations, alternative (more precise) tools, including those which allow for the evaluation of molecular force fields, can be used to check validity. Additionally, the goal of this work is not to use Molprobity or other protein structure validation tools as part of generative models, and a more invariant model does not necessarily affect performance on unseen but valid protein test data.
Our algorithm to sample viable conformations is summarized in Algorithm 1. The repeat-loop proposes a conformation in the neighborhood of the current conformation cp which is only accepted when the proposed conformation c′p is a valid conformation. As such it is an acceptance-rejection sampling algorithm. We will see later that the actual sampling probability distribution is not required to be known by our procedure. Our MCMC procedure defined next only requires Algorithm 1 to follow certain conditions which we argue it follows.
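Algorithm 1 thus reduces to a short acceptance-rejection loop. A minimal sketch is given below; `sample_candidate` (the Part 2 / Equation (1) traversal, e.g. `transform_tree` above) and `is_valid` (e.g. a wrapper around a Molprobity call) are assumed helper callables, not part of any specific library.

```python
# Illustrative sketch of Algorithm 1 (acceptance-rejection sampling of a viable conformation).
import random

def sample_viable_conformation(conf, directed_forest, sample_candidate, is_valid):
    """conf: current conformation c_p; directed_forest: one directed tree per amino acid."""
    while True:                                   # steps 3-7: repeat ... until valid
        tree = random.choice(directed_forest)     # step 4: y ~ UNIF({1, ..., n})
        candidate = sample_candidate(conf, tree)  # step 5: BFS traversal + Equation (1)
        if is_valid(candidate):                   # step 6: protein structure validation
            return candidate                      # step 8: a valid conformation of c_p
```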
3.2 SAMPLING CONFORMERS VIA MCMC
We now describe the MCMC procedure that, starting at any conformer configuration c_p^(0) ∈ Cp, will in steady state sample conformations according to a unique stationary distribution which depends on the protein p, where Cp is the set of all viable conformations of p as defined in Definition 2.2.
To that end, we first define the transition kernel of the Markov chain. Algorithm 1 samples a conformation c′p in the neighborhood of a given conformation cp using a directed forest. The neighborhood N(cp) of the conformation cp is loosely defined as the set of all possible conformations c′p which may result from running Algorithm 1 with cp as input. We denote the probability distribution induced by Algorithm 1 as the transition kernel κ(c′p|cp), which has support N(cp).
Definition 3.2 (Conformer Sampling Markov Chain (CSMC) Φp). We define the conformer sampling Markov chain Φp as a time-homogeneous Markov chain over the state space Cp with transition kernel κ as defined above, where Cp is the set of valid conformations associated with a given protein p as defined in Definition 2.2.
The consistency of our learning procedure does not depend on the precise definition of κ. However, our procedure relies on the fact that any conformer c′p ∈ Cp can be sampled by κ in a finite number of steps, starting from a conformer cp. We use this fact to show that the chain Φp converges to a unique stationary distribution regardless of the initial conformer. (All proofs are in Appendix A.5.)
Proposition 3.3. Given the CSMC Φp from Definition 3.2, whose transitions are governed by the kernel κ implicitly defined by Algorithm 1, for any pair of conformers cp, c′p ∈ Cp there exists τ_p < ∞, independent of cp, such that P^{τ_p}_{Φp}(cp, c′p) > 0, where P^{τ_p}_{Φp} is the τ_p-step transition probability.
Next, we show the existence of a unique steady state distribution of our Markov chain.
Proposition 3.4. The CSMC Φp defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution πp such that for all cp ∈ Cp, ‖P^n_{Φp}(cp, ·) − πp(·)‖ ≤ C R^n, where C < ∞ and R < 1 are constants that depend on Φp, P^n_{Φp} is the n-step transition probability, and ‖ · ‖ is the ℓ1 norm.
Given that we now have the ability to draw samples from a Markov chain which achieves a unique stationary distribution πp on Cp, we shall leverage this next, in our learning framework to learn conformer invariant representations of proteins.
4 LEARNING FRAMEWORK
We shall employ a learning strategy, where we use the viable conformations obtained via the Markov chain to learn a function which outputs conformer invariant representations.
Let D = {(x_j, y_j)}_{j=1}^N be the input data (where we have a single conformer for every protein in the dataset). We shall consider a single data point (x_j, y_j) henceforth, where x_j = (V, Xs, Xp) is the input protein. We consider a supervised learning setting where y_j is the associated target; we consider both classification and regression tasks.
Let C_j = {c_j^i} be the set of viable conformations of protein x_j. We only consider viable conformations which differ only in the atomic coordinate matrix Xp. For a given protein x_j, we shall use x_j^i to denote protein x_j with the Xp of the original protein modified by c_j^i, and use S_j = {x_j^i} to denote the set of all viable x_j^i, i.e., the state space.
Let f : Ω → R^d, d > 0, be any function with learnable parameters θ^f (e.g., any neural network model such as a 3D CNN, GNN, Transformer, LSTM, etc.) which takes a protein (in the form (V, Xs, Xp)) as input and outputs protein representations. The function f need not necessarily output conformer invariant representations of the protein x_j. A simple way to obtain conformer invariant representations of protein x_j (apart from using trivial functions such as a constant function or a function independent of Xp) is to compute the expected value of f over all of its viable conformations. We use f̄ to denote this function, which outputs conformer invariant representations of a protein:
f̄(x_j; θ^f) = E_{X∼π_j}[f(X; θ^f)]    (2)
where π_j(·) is the steady state distribution of the Markov chain associated with the protein x_j. As such, f̄(x_j; θ^f) = f̄(x′_j; θ^f) for any viable protein conformation x′_j ∈ S_j. Subsequently, to learn an optimal f̄ (which we denote by f̄*), we wish to minimize the loss, defined as follows:
L(D; θ^f, θ^ρ) = (1/N) ∑_{j=1}^N L̂(y_j, ρ(f̄(x_j; θ^f); θ^ρ))
             = lim_{k→∞} (1/N) ∑_{j=1}^N L̂(y_j, ρ((1/k) ∑_{i=1}^k f(x_j^i; θ^f); θ^ρ))    (3)
where L̂ is a convex loss function (e.g., the cross entropy loss), ρ is some differentiable function with parameters θ^ρ (in practice, an MLP), and lim_{k→∞} (1/k) ∑_{i=1}^k f(x_j^i; θ^f) can be employed as an asymptotically unbiased and consistent estimate of f̄, where x_j^i is the i-th sample from the Markov chain corresponding to the protein x_j.
Since Equation (3) is computationally intractable, we employ a surrogate for the loss given by:
J(D; θ^f, θ^ρ) = lim_{k→∞} (1/N) ∑_{j=1}^N (1/k) ∑_{i=1}^k L̂(y_j, ρ(f(x_j^i; θ^f); θ^ρ))    (4)
Observe that in Equation (4) the expectation over the conformations is now outside the L̂ and ρ functions, while the objective still remains conformation invariant (however, the optimal parameters corresponding to minimizing J differ from those minimizing L). Following Murphy et al. (2019b;a), we note that when ρ is the identity function, Equation (4) serves as an upper bound for Equation (3) via Jensen’s inequality. However, learning representations by averaging over multiple conformations in training is still computationally expensive (back-propagation and high variance issues). Next, we show how MCGD (Sun et al., 2018) can be leveraged to make this tractable (more details in Appendix A.10).
For simplicity, we next show convergence properties in the full batch setting. Note that this procedure and the proofs are equally applicable in the mini-batch setting with minor modifications.
Full-batch training: We consider the data as a single batch B = {(x1, y1), ..., (xj , yj), ..., (xN , yN )} and for a data point (xj , yj), denote the corresponding steady state distributions of the corresponding Markov chains using πj and the state space as Sj .
We denote all learnable parameters of ρ ◦ f as θ ≡ (θ^f, θ^ρ) and consider θ ∈ Θ ⊆ R^m, m > 0, such that the function ρ ◦ f may be non-convex for some values of θ. We consider a sequence of step sizes (γ_k)_{k=0}^∞ which satisfy
∑_k γ_k = +∞,    ∑_k ln k · γ_k^2 < +∞.    (5)
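As a concrete illustration (our example, not taken from the paper), the classical schedule γ_k = γ_0/(k + 1) with any γ_0 > 0 satisfies (5): ∑_k γ_k behaves like the harmonic series and diverges, while ∑_k ln k · γ_k^2 = γ_0^2 ∑_k ln k/(k + 1)^2 converges. A constant step size, by contrast, violates the second condition.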
We then follow a gradient scheme where parameters are updated as
θ_k = θ_{k−1} − γ_k Z_k,   k > 0,    (6)
where Z_k = (1/N) ∑_{j=1}^N ∇_θ L̂(y_j, ρ(f(x_j^{k−1}; θ^f_{k−1}); θ^ρ_{k−1})) and x_j^{k−1} is the (k−1)-th sample from the Markov chain corresponding to the data point (x_j, y_j). We employ the Markov chain gradient descent procedure (Sun et al., 2018), where the loss in epoch k, k > 0, of training is obtained as
Ĵ_k = (1/N) ∑_{j=1}^N L̂(y_j, ρ(f(x_j^{k−1}; θ^f_{k−1}); θ^ρ_{k−1})),
which minimizes the surrogate loss given in Equation (4) when k is sufficiently large so that the Markov chain has converged to its unique steady state distribution.
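A simplified PyTorch-style sketch of this full-batch MCGD loop is given below (our illustration, not the authors' implementation). Here `model` plays the role of ρ ◦ f, `loss_fn` of L̂, `advance_chain` runs one step of Algorithm 1, and `gamma` is a step-size schedule satisfying (5); all of these names are assumptions made for exposition.

```python
# Illustrative full-batch MCGD sketch under the assumptions stated above.
import torch

def mcgd_train(model, proteins, targets, loss_fn, advance_chain, gamma, epochs):
    chain_states = list(proteins)                     # x_j^0: one conformation per protein
    for k in range(1, epochs + 1):
        # advance every protein's conformer Markov chain by one step (Algorithm 1)
        chain_states = [advance_chain(c) for c in chain_states]
        losses = [loss_fn(model(c), y) for c, y in zip(chain_states, targets)]
        loss = torch.stack(losses).mean()             # J-hat_k, the epoch-k surrogate loss
        model.zero_grad()
        loss.backward()                               # gradients give Z_k
        with torch.no_grad():
            for p in model.parameters():              # theta_k = theta_{k-1} - gamma_k * Z_k
                if p.grad is not None:
                    p -= gamma(k) * p.grad
    return model
```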
Next, we list the set of assumptions we make to ensure convergence of our conformer invariant MCGD procedure. Assumption 4.1. We make the following assumptions:
1. For any θ ∈ Θ and x_j^k ∈ S_j, the function f is differentiable, ∀j.
2. sup_{θ∈Θ, x_j^k∈S_j} ‖∇_θ ρ ◦ f(x_j^k)‖ < +∞, i.e., the gradients are bounded.
3. ∀x_j^k ∈ S_j, ∀θ_1, θ_2 ∈ Θ: ‖∇_{θ_1} ρ ◦ f(x_j^k) − ∇_{θ_2} ρ ◦ f(x_j^k)‖ ≤ L‖θ_1 − θ_2‖ for some L ≥ 0, i.e., the gradients are L-Lipschitz.
4. E_{x_j^k∼π_j}[∇_θ ρ ◦ f(x_j^k)] = ∇_θ E_{x_j^k∼π_j}[ρ ◦ f(x_j^k)].
Next, we formally state the convergence of our optimization procedure to optimal parameters θ* which yield conformer invariant protein representations. Proposition 4.2. Let the step sizes satisfy (5), let the function parameters θ be updated as in (6), and let Assumption 4.1 hold; then the MCGD optimization enjoys almost sure convergence to the optimal θ.
The use of Markov chain gradient descent procedure to optimize the conformer invariant learning procedure optimizes the objective J in Equation (4), and thus has the following implication on how outputs should be calculated at inference time:
Inference time: We estimate f̄ using an empirical average of f evaluated over conformations (in practice, we average the final layer [pre-softmax] representations) visited by the Markov chain:
f̂_k(x_j; θ^f) = (1/k) ∑_{i=1}^k f(x_j^{(i)}; θ^f),    (7)
where x_j^{(i)} is the i-th state visited by the Markov chain started at x_j. Since the Markov chain has a unique steady state distribution, f̂_k(x_j; θ^f) is both asymptotically unbiased and consistent; that is,
lim_{k→∞} f̂_k(x_j; θ^f) = f̄(x_j; θ^f)  almost surely.
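In practice, the estimator in Equation (7) amounts to averaging representations over a few chain states at test time. A minimal sketch (assuming the same `advance_chain` helper as above, and that `f` returns a final-layer representation tensor) could look as follows.

```python
# Illustrative sketch of the inference-time estimator of Equation (7).
import torch

@torch.no_grad()
def conformer_invariant_representation(f, protein, advance_chain, k=32):
    conf, acc = protein, None
    for _ in range(k):
        conf = advance_chain(conf)          # next state x_j^{(i)} of the Markov chain
        rep = f(conf)                       # representation of this conformation
        acc = rep if acc is None else acc + rep
    return acc / k                          # \hat{f}_k(x_j; theta^f)
```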
5 RESULTS
We evaluate our proposed augmentation procedure on multiple different tasks from the ATOM3D dataset (Townshend et al., 2020). Results are provided in Table 1.
TASKS ON ATOM3D DATASETS
ATOM3D (Townshend et al., 2020) is a unified collection of datasets concerning the 3D structure of proteins. These datasets are specifically designed to provide a benchmark for ML methods which operate on 3D molecular structure, and represent a variety of important structural, functional, and engineering tasks. As part of our evaluation, we perform the (a) Protein Structure Ranking (PSR) (b) Ligand Efficacy Prediction (LEP) (c) Protein Mutation Stability Prediction (MSP) (d) Ligand Binding Affinity (LBA). We describe each of the tasks in detail in Appendix A.7. For all the datasets and tasks we report the same metrics as proposed by the authors in Townshend et al. (2020).
MODEL, BASELINES AND DISCUSSION
We endow the vector-gated GVP-GNN model (Jing et al., 2021), a GNN using GCN layers (Kipf & Welling, 2016), and the E(n) GNN (Satorras et al., 2021) with conditional transformations from our proposed MCMC method. It is important to note that when the positions of the atoms are altered, the protein graphs which are input to the base encoders are updated appropriately in every epoch. As an ablation study, we also provide in the Appendix a strategy where all the transformations are created from the “gold standard" Xp rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training.
Looking at Table 1, we note that models augmented with our proposed conformer invariance framework in general outperform the baseline models on multiple tasks for which conformer invariance is a requirement. On the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here: a GNN which does not explicitly incorporate any atomic positional information outperforms all other models with information about atomic coordinates.
Additional results and ablation studies are presented in Appendix A.9.
COMPUTATIONAL COMPLEXITY & SCALABILITY:
The directed forest for every protein is computed only once as a preprocessing step (a fixed ≈ 5-10 minutes per dataset). At every epoch, for a given protein, we select only one among all its constituent amino acids & perform a sequence of 3×3 matrix multiplications to obtain a new conformer. If k denotes the number of atoms in the side chain of an amino acid, and the height of a directed tree is on the order of log(k), the computational complexity to obtain a conformation is O(k log k) (where k ≈ 15 on average in our datasets), & this can be run in parallel for all proteins in a mini-batch. Molprobity (run in parallel again; it currently runs externally on a web server via a command line call) adds a ≈ 2s delay per minibatch (we could reduce this to milliseconds if we integrated Molprobity into our code). The rejection rate from Molprobity is also very small (less than 1 reject on average across 100 Molprobity calls). As an example, training (100 epochs) GVP-GNN on MSP requires ≈ 45 min, and on LBA requires ≈ 34 min; in both cases our method requires an additional ≈ 4 min overhead. A complete training time table (on an Nvidia Tesla V100 GPU) for all models & datasets can be found in Appendix A.11.
6 RELATED WORK
Here, we present a high level summary of works related to ours. A more detailed comparison to the works mentioned below (and others) is presented in Appendix A.6.
Group Equivariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous symmetries of elements (e.g. images) by introducing group theoretic constraints as inductive biases in the neural network. While group equivariant neural network capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and applications of group equivariant neural networks.
Graph Neural Networks: Graph Neural Networks (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs.
Group Equivariant Graph Neural Networks: These networks combine continuous symmetries (Lie groups) with permutation equivariances and have found applications, with resounding success, on small molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021) which exhibit rigid body characteristics. Employing Farina & Slade (2021) would make the neural network excessively invariant and allow the protein to be more flexible (allowing unviable conformations) than it truly is. More recently, such networks have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work – (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) – which samples protein conformations by either defining or inheriting a distribution over the conformations (largely based on the properties of the bonds, etc.). These methods can seamlessly be plugged into our framework as long as the MCMC is ergodic.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021).
Generative Models: Conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also gained attention recently, with the goal of predicting the 3D structure of molecules given their 2D structure; our objective in this work is very different, but our framework can be used to improve predictions of the aforementioned models.
7 CONCLUSIONS
This work addresses the limitations of current protein representation learning methods which are unable to learn conformer invariant representations — and hence unable to capture the inherent flexibility present in protein side chains pertinent to many downstream tasks. To address these, we introduced conditional transformations to capture protein structure, while respecting the restrictions posed by constraints on dihedral (torsion) angles and steric repulsions between atoms. Subsequently, we introduced a Markov chain Monte Carlo based framework to learn representations that are invariant to these conditional transformations.
A APPENDIX
A.1 COMMONLY USED ACRONYMS IN THE PAPER
1. G - group
2. Sx - Set of transformations unique to element x
3. tx - Transformation action which transforms the element x
4. m - Number of atoms in the protein under consideration
5. n - Number of amino acids in the protein under consideration
6. Xs - Matrix of Scalar features associated with atoms in the protein p
7. Xp - Matrix of atomic coordinates associated with atoms in the protein p
8. Cp - Set of conformations of protein p
9. SO(3) - Group of all rotations in 3D space
A.2 GROUP THEORY PRELIMINARIES
Definition A.1 (Group). A group is a setG equipped with a binary operation · : G×G→ G obeying the following axioms:
• for all g1, g2 ∈ G, g1 · g2 ∈ G (closure).
• for all g1, g2, g3 ∈ G, g1 · (g2 · g3) = (g1 · g2) · g3 (associativity).
• there is a unique e ∈ G such that e · g = g · e = g for all g ∈ G (identity).
• for all g ∈ G there exists g−1 ∈ G such that g · g−1 = g−1 · g = e (inverse).
Definition A.2 (Group invariant functions). Let G be a group acting on a vector space V . We say that a function f : V → R is G-invariant if f(g · x) = f(x) ∀x ∈ V, g ∈ G.
Definition A.3 ((Left) Group Action). For a group G with identity element e and a set X, a (left) group action α of G on X is a function α : G×X → X that satisfies the following two conditions:
1. Identity: α(e, x) = x, ∀x ∈ X
2. Compatibility: α(g, α(h, x)) = α(gh, x)
We will use a short hand of g · x for α(g, x) when the action being considered is clear from context.
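As a simple illustration (a standard example, not specific to proteins), G = SO(3) acts on X = R^3 by matrix-vector multiplication: the identity rotation fixes every point, and for rotations g, h and any x ∈ R^3 we have g · (h · x) = (gh) · x, so both conditions of Definition A.3 are satisfied.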
A.3 EXAMPLE DEMONSTRATING NON GROUP STRUCTURE OF A SET OF PROTEIN CONFORMATIONS
Consider X to be a protein with m atoms and n amino acids, and let the set of viable conformations of X be {X_1, X_2, . . . , X_i, . . . , X_p}. Let X_1 be the conformation available in our dataset.
In each step of the transition, only a few atoms (<< m) are subjected to an action from the SO(3) group here, namely the atoms in a single amino acid of the protein (where the protein is in general made up of multiple hundreds of amino acids, n). The positions of all other atoms (outside that amino acid) in the side chain remain unaltered. So, in the R^{m×3×3} matrix, most of the 3×3 entries are the 3-dimensional identity matrix.
Let T_1^i ∈ R^{m×3×3}, where i ∈ {1, 2, . . . , p}, be the transformation which yields conformation X_i from X_1.
Now consider T_1^2 (subscript 1 since we start from conformation X_1), which takes X_1 to X_2, and T_1^3, which takes X_1 to X_3, where T_1^2 and T_1^3 do not act on the same amino acid in the protein. Now, however, consider the case where performing T_1^2 and T_1^3 sequentially (in either order) would result in atoms in two different amino acids overlapping (or coming too close to each other, causing steric repulsion), therefore resulting in a non-viable conformation, i.e., a composition of T_1^2 and T_1^3 acting on X_1 would not be present in the set of all allowed transformations T_1 = {T_1^1, . . . , T_1^i, . . . , T_1^p}. Therefore the set T_1 is not closed and does not form a group.
Also, for two different conformations X_1 and X_2, their allowed transformation sets T_1, T_2 will not be identical (in the above example, T_1^3 ∉ T_2). It is also easy to construct a case where ⋃_{i=1}^p T_i is not a group, and all actions from this set acting on any given X_i will not necessarily result in a viable/valid conformation.
A.4 FLEXIBILITY ALLOWED BY OUR PROPOSED MODEL
Our proposed model allows every node in the directed tree to be rotated about its parent. For example, for the side chain shown in Figure 1 (Main Paper), the allowed flexibility is illustrated in Figure 3.
A.5 PROOFS OF PROPOSITIONS
First, we restate and prove Proposition 3.3.
Proposition A.4. Given the CSMC Φp from Definition 3.2, whose transitions are governed by the kernel κ implicitly defined by Algorithm 1 as described above: for any pair of conformers cp, c′p ∈ Cp, there exists τ_p < ∞, independent of cp, such that P^{τ_p}_{Φp}(cp, c′p) > 0, where P^{τ_p}_{Φp} is the τ_p-step transition probability.
Proof. Proof by construction. We prove the proposition by showing that one can construct a path (c_p^{(1)} = cp, . . . , c_p^{(t)} = c′p) such that c_p^{(i)} ∈ Cp and κ(c_p^{(i+1)}|c_p^{(i)}) > 0 for all 0 < i < t, and t ≤ T_p. The trivial case where cp ≡ c′p is proved since every group contains the identity element — sampling the identity element for every node in the directed tree yields the same conformation. Since we consider only non backbone transforming conformations, for the non trivial case a maximum of m − 4n atoms can differ in position between any two conformations, where m is the number of atoms in the protein and n is the number of amino acids in the protein. Both m, n are finite and we are dealing with continuous conformers (and continuous group actions about every node — groups are closed under their associated binary action, and SO(3) is path connected). So we can traverse between conformers (until we reach the desired conformer) sequentially in a finite number of steps, by using the constructed directed forest: selecting an amino acid (which does not violate viability), fixing the positions of all other amino acids in the protein, and rotating the side chain atoms in a single conformer to the final desired state. While this process may result in some side chains being visited multiple times (due to viability constraints), considering continuous conformers and the SO(3) group (which is path connected) ensures we will never reach a state of deadlock. The second condition is satisfied because every group action has an inverse and we only use transformations from SO(3) for every node in the directed trees.
Next, we restate and prove Proposition 3.4.
Proposition A.5. The CSMC Φp defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution πp such that for all cp ∈ Cp, ‖P^n_{Φp}(cp, ·) − πp(·)‖ ≤ C R^n, where C < ∞ and R < 1 are constants that depend on Φp, P^n_{Φp} is the n-step transition probability, and ‖ · ‖ is the ℓ1 norm.
Proof. By Proposition 3.3, Φp satisfies Doeblin’s condition as defined on page 396 of Meyn & Tweedie (2012), which states that for cp, c′p ∈ Cp, P^{T_p}_{Φp}(cp, c′p) > ε for some ε > 0. (We note that this is a simplified version of the actual statement, which is defined on the σ-algebra over Cp, denoted σ(Cp); our proof holds when c′p ∈ σ(Cp).) The uniform ergodicity then holds due to Theorems 16.2.3 and 16.2.1 from Meyn & Tweedie (2012).
Next, we restate and prove Proposition 4.2. We also restate the required assumptions for the proposition. Assumption A.6. We make the following assumptions:
1. For any θ ∈ Θ and x_j^k ∈ S_j, the function f is differentiable, ∀j.
2. sup_{θ∈Θ, x_j^k∈S_j} ‖∇_θ ρ ◦ f(x_j^k)‖ < +∞, i.e., the gradients are bounded.
3. ∀x_j^k ∈ S_j, ∀θ_1, θ_2 ∈ Θ: ‖∇_{θ_1} ρ ◦ f(x_j^k) − ∇_{θ_2} ρ ◦ f(x_j^k)‖ ≤ L‖θ_1 − θ_2‖ for some L ≥ 0, i.e., the gradients are L-Lipschitz.
4. E_{x_j^k∼π_j}[∇_θ ρ ◦ f(x_j^k)] = ∇_θ E_{x_j^k∼π_j}[ρ ◦ f(x_j^k)].
Proposition A.7. Let the step sizes satisfy (5) and the function parameters θ be updated as (6) and Assumption 4.1 hold, then the MCGD optimization enjoys properties of almost sure convergence to the optimal θ.
Proof. Given that each protein has an associated time-homogeneous Markov chain with a unique steady state, independent of other proteins, the set of proteins in a mini-batch also forms a Markov chain with a unique steady state. We then leverage Corollary 2 (Page 12) of Sun et al. (2018) along with Assumption 4.1 to ensure almost sure convergence to the optimal θ.
A.6 EXTENDED RELATED WORK
Here, we elaborate on the related work discussed in Section 6.
Group Equivariant and Invariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous group symmetries of elements (e.g. images, point clouds). In this work, we learn representations which are invariant not just to groups, but to input dependent sets of transformations. To the best of our knowledge, our work is the first to consider conditional (input dependent) invariances. Prior works on learning invariant models have leveraged Monte Carlo procedures (Finzi et al., 2020; Murphy et al., 2019b) to learn to be invariant to transformations of the input. Alternatively, our work constructs a Markov chain with a unique steady state and leverages MCGD (Sun et al., 2018) to make it computationally tractable. Secondly, the aforementioned approaches are limited to being invariant to transformations from a specified Lie group with a surjective exponential map or a permutation group, while our work is not limited to groups. Thirdly, our theory ensures that every example/object in the dataset can have a different set of input dependent, conditional transformations
- and in fact can be seen as a generalization of the above works. In fact, the ablation study that we perform (results provided in Appendix A.9 - Table 4) uses a Monte Carlo estimator, and our MCMC procedure yields better performance than the Monte Carlo estimator. While group equivariant neural networks capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). Gauge symmetries require the manifold to be smooth – which is not the case for proteins. Moreover, different proteins have different sets of viable conformations, which would not be captured by standard gauge equivariant neural networks. (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and a wide variety of applications of group equivariant neural networks. A more comprehensive theoretical analysis of input dependent conditionally invariant neural networks is planned for future work.
Graph Neural Networks: Graph Neural Networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (Murphy et al., 2019a) (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs. Both molecular graphs (both small molecules and macromolecules) as well as graphs based on intra molecular distances have been used with GNNs, to achieve state of the art for many molecular datasets and tasks (Hu et al., 2020; Morris et al., 2020). Here, we leverage three graph based neural networks as baselines for our model.
Group Equivariant Graph Neural Networks: Group equivariant GNNs combine continuous symmetries (Lie groups such as SE(3), E(3), SO(3)) with permutation equivariances and have found applications, with resounding success, on small and large molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021). However, these methods only capture rigid body characteristics of molecules, whereas our framework, while capturing the above Lie group symmetries, is also able to capture input dependent transformations. Employing Farina & Slade (2021) would make the neural network excessively invariant and allow the protein to be more flexible (allowing unviable conformations) than it truly is. More recently, such networks have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work – (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) – which samples protein conformations by internal coordinate transformations. However, the goals of the existing MCMC methods are significantly different compared to ours. The existing methods define or inherit a distribution over the conformations (largely based on the properties of the bonds, etc.) and then aim to sample highly probable conformations from this distribution. Our method is much simpler and only requires that the chain being used is ergodic; it is invariant to the actual form of the distribution. We note that the existing Markov chains can seamlessly be used as drop-in replacements to sample conformations as part of our framework, as long as the MCMC is ergodic. We consider studying the impact of different MCMC methods (which sample from different distributions) and their influence on performance in different tasks as important future work.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021). In this work, while we use GNNs and Group Invariant GNNs as a part of the model, we note that we can equally replace them with CNNs, LSTMs, Transformers and other models used for proteins without any change in the underlying theory.
Tree Construction Method for Molecules: Jin et al. (2018), in their work JTVAE, propose a procedure to construct trees for molecules. We note that JTVAE is a more general purpose approach for constructing trees from molecules and may not be the best suited for proteins, where the backbone atoms are largely observed (experimentally) to be rigid.
Generative Models: Protein conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also recently gained attention; the goal of these models is to predict the 3D structure of molecules given their 2D structure. Our objective in this work is completely different, but our framework can be used to improve predictions of the aforementioned models. While our model is explicitly not a generative model, our framework can be leveraged towards generative modeling with the help of tools such as noise outsourcing (Chapter 6 of Kallenberg (2006)), and we see this as important future work.
Non-Rigid Body Dynamics: Non-Rigid Body Dynamics of objects has long been studied both by physicists and in the fields of computer vision to understand and capture the geometric deformations of objects (Taylor et al., 2010; Masci et al., 2015). To the best of our knowledge, there exists no prior work in deep learning which captures the non rigidity of protein molecules (which cannot be modeled as Ck manifolds). As important future work, we would like to study the impact of leveraging input dependent conditional invariances for modeling other geometric objects (which are Ck manifolds) as well as images and robotics (e.g. the symmetries for a humanoid is different from that of a tractor).
Unrelated Work with Similar Names: Non-classical and conditional symmetries of solutions to ODEs and PDEs have been discussed in the past; these works, while they share a similar title, have very little in common with ours, as we are not dealing with jet spaces or manifolds (Joseph, 1968; Fushchich & Zhdanov, 1992; Olver & Vorob’ev, 1996).
A.7 DETAILS ABOUT DATASETS AND TASKS
In this section, we describe briefly each of the datasets (and their associated tasks). Information about the splits and license information is provided in Table 2.
PSR: This task utilizes data from the structural models submitted to the Critical Assessment of Structure Prediction competition (CASP - Kryshtafovych et al. (2019) - a blind protein structure prediction competition) to rank protein structures from the experimentally determined structure of the protein. The problem is formulated as a regression task, where we predict the global distance test of each structural model from the experimentally determined structure. As prescribed by the dataset authors, the dataset is split by competition years.
MSP: The goal of this task is to identify mutations that stabilize a protein’s interactions, which forms an important step towards the design of new proteins. This task is significant as experimental techniques for probing mutations are labor-intensive. ATOM3D (Townshend et al., 2020) derives this dataset by collecting single-point mutations from the SKEMPI database (Jankauskaitė et al., 2019) and modeling each mutation into the structure to produce a mutated structure. The learning problem is then formulated as a binary classification task where the goal is to predict whether the stability of the complex increases as a result of the mutation. We employ the same splits as suggested by the dataset authors, wherein the protein complexes are split such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LBA: This task deals with the problem of predicting the strength (affinity) of a candidate drug molecule’s interaction with a target protein. The dataset is constructed using the PDBBind database (Wang et al., 2004; Liu et al., 2015), a curated database containing protein-ligand complexes from the PDB and their corresponding binding strengths (affinities). The task is formulated as a regression task with the goal to predict pK = − log10(K), where K is the binding affinity in Molar units. The splits are created such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
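As a quick illustration of the target quantity, a ligand with binding affinity K = 10^{-6} M (1 µM) has pK = 6, while a tighter nanomolar binder with K = 10^{-9} M has pK = 9; larger pK values thus correspond to stronger binding.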
LEP: The shape of a protein impacts whether it is in an on or off state, which plays an important role in predicting the shape a protein will favor during drug design. This dataset is obtained by curating proteins from several families with both “active" and “inactive" state structures, and modeling in 527 small molecules with known activating or inactivating function using the program Glide (Friesner et al., 2004). The task is formulated as a binary classification task where the goal is to predict whether a molecule bound to the structures will be an activator of the protein’s function or not. We use the same split as recommended by the ATOM3D authors.
A.8 EXPERIMENTAL SETUP
The code for the baseline models (GVP-GNN (Jing et al., 2021), E(n) GNN (Satorras et al., 2021) and GNN(GCN) (Townshend et al., 2020; Kipf & Welling, 2016)) was used as provided by the authors (with licenses as dictated by the code authors). Our conformer invariance implementation is in PyTorch using Python 3.8. We also leverage networkx to create the directed forests. For all three models we tune the hyperparameters – learning rate (∈ {0.1, 0.01, 0.001, 0.0001}) and mini batch size (∈ {4, 8, 16, 32, 64}). For the E(n) GNN model – since there have been no previous models for the aforementioned protein tasks – we also tune the number of GNN layers (∈ {4, 5, 6, 7}) as a hyperparameter. The experiments were all performed on Tesla V100 GPUs. For more details, refer to the code provided.
A.9 ADDITIONAL RESULTS
In Table 3, we present results on additional datasets and tasks beyond those presented in the main paper. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here, where a GNN which does not incorporate any rigid body transformations outperforms all other models.
In Table 4, we present an ablation study, where we provide a strategy where all the transformations are created from “gold standard" Xp rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training. From the table, we note that the MCMC method tends to outperform the non MCMC method, which can be attributed to the guarantees it provides to the learning framework.
A.10 CASE AGAINST USING AVERAGE REPRESENTATIONS IN TRAINING AND FOR REQUIRING CONFORMER INVARIANT REPRESENTATIONS
The case for requiring a single conformer invariant representation can be seen from the fact that different protein conformations, say X1 and X2 of the same protein X, may be seen during the train and test phases. Without conformer invariant representations, this may lead to different representations and therefore different predictions for the same protein (bad).
One may argue that an approximately conformer invariant neural network can be learned by averaging representations of multiple Monte Carlo conformations during training. However, this is computationally expensive, as it would require back-propagating and updating parameters for multiple conformations for every protein in every epoch, leading to a prohibitive overhead. On the other hand, our procedure with MCGD uses only one conformation from our Markov chain in every epoch and still ensures convergence to optimal parameters. It is important to note that during inference we do average representations over multiple conformations (from our Markov chain) to output conformer invariant representations, which, again due to the MCGD training procedure and the unique steady state of our Markov chain, ensures that conformer invariant representations are achieved and yields better performance than the current state of the art on multiple datasets and tasks. As stated in the main paper, our framework can work well with other Markov chains as well. We chose the proposed chain purely because it is simpler than the existing methods (in that it does not require a distribution over conformations to be assumed) and as such is easier to sample from.
A.11 TRAINING TIME
In Table 5, we present the training time (for 100 epochs) for each of the baseline models, as well as for models with our proposed conformer invariance framework addition. From the table, we note that on average the increase in training time over 100 epochs is ≈ 3-4 minutes, which is negligible in comparison to the baseline training time (without the proposed framework addition).
2. What are the strengths and weaknesses of the proposed MCMC-based framework?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any minor corrections or suggestions for improving the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces an MCMC-based framework that introduces conditional transformations that respect certain 3d structural properties of proteins with the motivation to learn conformation invariant representation.
Strengths And Weaknesses
Strengths:
MCMC-based strategy for learning representations that are invariant to conditional transformations
Weaknesses:
Most technical details are moved to the Appendix to save space, and the freed space is used to explain secondary details of the material moved to the Appendix, which makes it difficult to follow certain sections of the paper.
Experimental results are included without much insight. It was not clear whether the improvement over the baseline techniques for the four tasks was due to the proposed conditional sampling or to some other aspect of the learning changed by the MCGD algorithm.
Clarity, Quality, Novelty And Reproducibility
The paper was too heavy in introducing concepts and very light in explaining or illustrating these concepts. Key details were moved to Appendix and the extra space is used to explain details of details, which makes it harder to follow the ideas in the paper. For example in Page 6 MCGD algorithm is not explained at all and the reader was referred to Appendix A.9 and that extra space is used to explain the convergence properties of MCGD. Even the Appendix A.9 does not discuss what MCGD does and only discusses other details about the algorithm. The reader now needs to check the paper from 2018 to see the key ideas of the MCGD algorithm. Only half a page is used for experiments due to page limitation and yet again the reader is referred to Appendix for additional experiments.
There is some novelty in the MCMC based conformation invariant sampling strategy, but it is hard to judge the significance of this novelty given limited empirical evidence.
Minor Corrections:
Please correct cyro-EM as cryo-EM
Remove a in "using directed a forest"
Remove extra to in "converges to to a unique stationary"
How true it is to say that V is the "set" of atoms given that this set does not contain distinct elements, so it can't be a set? V is the set of nodes (not atoms).
"n" was first used to denote the number of amino acids but later on it was used to denote the number of nodes in V, and thus the number of non-distinct atoms. If m is the number of atoms, then what is n? |
ICLR | Title
Conditional Invariances for Conformer Invariant Protein Representations
Abstract
Representation learning for proteins is an emerging area in geometric deep learning. Recent works have factored in both the relational (atomic bonds) and the geometric aspects (atomic positions) of the task, notably bringing together graph neural networks (GNNs) with neural networks for point clouds. The equivariances and invariances to geometric transformations (group actions such as rotations and translations) so far treat large molecules as rigid structures. However, in many important settings, proteins can co-exist as an ensemble of multiple stable conformations. The conformations of a protein, however, cannot be described as input-independent transformations of the protein: Two proteins may require different sets of transformations in order to describe their set of viable conformations. To address this limitation, we introduce the concept of conditional transformations (CT). CT can capture protein structure, while respecting the constraints on dihedral (torsion) angles and steric repulsions between atoms. We then introduce a Markov chain Monte Carlo framework to learn representations that are invariant to these conditional transformations. Our results show that endowing existing baseline models with these conditional transformations helps improve their performance without sacrificing computational efficiency.
1 INTRODUCTION
The literature on geometric deep learning has achieved much success with neural networks that explicitly model equivariances (or invariances) to group transformations (Cohen & Welling, 2016; Maron et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020). Among applications to physical sciences, group equivariant graph neural networks and transformers have specifically found applications to small molecules, as well as large molecules (e.g. proteins) with tremendous success (Klicpera et al., 2020; Anderson et al., 2019; Fuchs et al., 2020; Hutchinson et al., 2021; Satorras et al., 2021; Batzner et al., 2021). Specifically, machine learning for proteins (and 3D macromolecular structures in general) is a rapidly growing application area in geometric deep learning, (Bronstein et al., 2021; Gerken et al., 2021). Traditionally, proteins have been modeled using standard 3D CNNs (Karimi et al., 2019; Pagès et al., 2019), graph neural networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017), and transformers (Vaswani et al., 2017). More recently, several works Jing et al. (2020; 2021); Jumper et al. (2021); Hermosilla et al. (2021) have enriched the above models with neural networks that are equivariant (invariant) to transformations from the Euclidean and rotation groups. While equivariance (invariance) to the transformations in these groups are necessary properties for the model, unfortunately, they are limited to only capture rigid transformations of the input object.
However, these models may not yet account for all invariances of the input pertinent to the downstream task, and the transformations they are invariant to do not depend on the input. For instance, invariance to the Euclidean group restricts the protein representation to act as if the protein were a rigid structure, regardless of the protein under consideration. However, treating proteins as rigid structures may not be optimal for many downstream tasks, and different proteins may have different types of conformations (protein 3D structures with flexible side chains) (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016). The existing rigid body assumption in protein representations may hurt these methods in datasets & downstream tasks that require protein representations to be invariant to a specific set of protein conformations. For example, for most proteins, regardless of the side chain conformation (as long as it is viable) under consideration, their protein fold class and other scalar properties remain the same, their mutation (in)stability remains unaltered, and protein-ligand binding affinity (apart from changes at the ligand binding site) remains the same.
In light of this limitation of current methods, a question naturally arises: Is it possible to learn conformer-invariant protein representations?
Our Approach: We propose a representation method where the set of symmetries in the task are input-dependent, which we denote conditional invariances. For our specific application, we model every protein as a directed forest, where every amino acid forms a directed tree. We leverage the constructed directed forest of the protein to sample viable protein conformations (where viability is checked with Protein structure validation tools such as Molprobity (Davis et al., 2007; Chen et al., 2010)), which is additionally coupled with a Markov chain Monte Carlo (MCMC) framework to create data augmentations to train the neural network to learn conformer invariant representations.
Our contributions can be summarized as follows:
• We provide guiding principles for defining conditional invariant representations as inductive biases, which serve as a generalization of group invariant neural networks.
• We then provide a principled strategy to sample conformations for any given protein from the support of its protein conformer distribution (which consists of all its viable protein conformations), capturing their true flexibility. Viable conformations respect domain specific constraints such as those on dihedral angles and steric repulsions, among others. This is particularly useful in the scenario where the set of viable transformations differs across proteins.
• Further, we develop an MCMC-based learning framework which guides the sampling of protein conformations such that the (asymptotic empirical) average representation obtained by all viable conformations of a protein are identical.
• Finally, we perform experimental evaluation of our proposal, where endowing baseline models with our proposed strategy (via an implicit data augmentation scheme) shows noticeable improvements on multiple different classification and regression tasks on proteins.
2 CONDITIONAL INVARIANCES FOR PROTEINS
The symmetry of an object is the set of transformations that leaves the object invariant. The notion of symmetries, expressed through the action of a group on functions defined on some domain, has served as a fundamental concept of geometric deep learning. In this section, we start by defining the concept of input-dependent conditional symmetries and then proceed to specifically tailor these conditional symmetries that are both computationally efficient and useful for representing protein conformations. Some group theoretic preliminaries are presented in the Appendix A.2. Definition 2.1 (Conditionally symmetric-invariant functions). A function f : Ω→ R, is said to be conditionally symmetric-invariant if
f(tx · x) = f(x) ,∀tx ∈ Sx, ∀x ∈ Ω, where Sx is a set of transformations unique to element x and tx : Ω→ Ω.
It is easy to see that conditionally invariant functions are a generalization of group invariant functions where Sx ≡ G ∀x ∈ Ω where G is a group. The above definition is motivated by the fact that representations for proteins may not necessarily be limited to be invariant just to group actions, but to a more general set of protein specific transformations. We detail this motivation next.
A protein molecule is composed of amino acids (say n amino acids), where each atom in the protein belongs to one amino acid ∈ {1, . . . , n}. Excluding the hydrogen atoms, every amino acid contains four atoms known as the backbone atoms (see Figure 1), and other atoms which belong to the side chain. Traditionally, protein structures are solved by X-ray crystallography or cryo-EM and the obtained structure is normally considered a unique 3D conformation of the molecule. Molecules available via the Protein Data Bank (Berman et al., 2000) generally include only unique sets of coordinates to demonstrate the 3D structures, which lead to the unique answer as ‘gold standard’ in structure prediction, such as protein structure prediction competitions - CASP (Kryshtafovych et al., 2019). Under these assumptions protein side-chain conformations are usually assumed to be clustered into rotamers, which are rigid conformations represented by discrete side-chain dihedral angles.
However, works over the past decade (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016) have observed that, while the backbone atoms of all n amino acids in a protein molecule together (i.e. 4n atoms) form a rigid structure, its side chains can exhibit flexible (continuous and discrete) conformations beyond clustered rotamers. The underlying goal of our work is to capture this inherent flexibility of proteins and to ensure that different conformers of the same protein get identical representations (more details in Appendix A.10). The problem of capturing protein conformations is further compounded by the fact that protein conformations are oftentimes unique to the protein, i.e., the conformations exhibited by a protein are input (protein) specific. The desideratum, therefore, is a model which outputs conformation invariant representations even when the conformations (symmetries) exhibited by a protein vary across proteins and only a single protein conformer is available.
We denote a protein as a tuple p = (V,Xs,Xp) (data from pdb files) where V is the set of nodes (every atom is a node) in the protein, Xs ∈ Rm×d, d ≥ 1 is the scalar atom feature matrix associated with the atoms, Xp ∈ Rm×3 (where m denotes the number of atoms in the protein) are the positional coordinates associated with the atoms in the protein. Without loss of generality, we number the nodes in V = {1, . . . ,m} following the same ordering of the rows in Xs,Xp.
Definition 2.2 (Rigid Backbone Protein Conformations). For an m-atom protein p with atom positional coordinates Xp ∈ R^{m×3} (recall that by definition p contains information about Xp), let Cp ⊂ R^{m×3} denote the set of viable conformations of p which keep the positional coordinates associated with the backbone fixed. We use Tp ⊂ R^{m×3×3} (where a 3×3 matrix is associated with every single atom) to denote the set of non-isometric transformations of p which yield the set Cp, i.e., ∀cp ∈ Cp, ∃tp ∈ Tp s.t. for an atom with index i ∈ {1, . . . ,m}, the new position of atom i is Xp[i] tp[i]^T = cp[i], with Xp[i] ∈ R^{1×3}, tp[i] ∈ R^{3×3}.
Tp forms a set of transformations which acts on the protein atomic coordinates via matrix multiplication. Two elements of Tp (with corresponding matrices of individual atoms and then aggregating for the protein as a whole) may or may not combine via matrix multiplication (as the binary operation) to result in another element of Tp i.e. matrix multiplication is not necessarily closed in Tp. A concrete example of the non group structure of protein conformations is provided in Appendix A.3.
Unfortunately, sampling transformations from Tp to obtain viable protein conformations is an unattainable task, since this would entail first sampling a transformation from Rm×3×3 and then verifying whether the transformation is viable (and the vast majority of transformations are not viable). We address this issue using an MCMC method, which is then combined with MCGD training of the neural network (Sun et al., 2018) to learn conformation invariant protein representations, where different viable conformations obtained via MCMC are used in every batch. Markov Chain Gradient Descent (MCGD) is a variant of Stochastic Gradient Descent (SGD) in which the samples are drawn from the trajectory of a Markov chain. We note that, while there have been prior Monte Carlo and MCMC based techniques for sampling protein conformations (Boomsma et al., 2013; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009), the overarching goals of the existing MCMC methods are significantly different from ours. The existing methods define a distribution over the conformations (based on the properties of the bonds, etc.) and then sample highly probable conformations from this distribution. Our method is much simpler and only requires that the chain being used is ergodic; it is invariant to the actual form of the distribution. Therefore, we devise a simple chain whose transitions are easy to sample and do not require us to compute conditionals of an energy based model. However, we also note that the existing Markov chains can be used as drop-in replacements to sample conformations as part of our framework. Before specifying our MCMC procedure, we specify the invariance requirements for learning protein representations.
A symmetric-invariant function for proteins (per Definition 2.1), should be invariant to (i) transformations which change its atomic coordinates to any of its rigid backbone conformations in Cp (ii) transformations which change the atomic coordinates of all its atoms via rigid body transformations — group actions such as rotations/ translations (iii) a combination of the two above which transforms it into one of its viable conformations and is then further acted upon by rigid body transformations.
3 OBTAINING VIABLE CONFORMATIONS
Before we describe our MCMC procedure to sample atomic coordinates from the set of rigid backbone conformations, we need a way to sample from the local neighborhood of a conformation (in Figure 2, the groups acting at the non-root nodes are G_i = SO(3) ∀i ≠ 1). We note that in protein molecules not all transformations about a node would be allowed, due to steric repulsions between atoms as well as potential overlaps of atoms.
3.1 A SIMPLE PROTEIN CONFORMATION SAMPLING STRATEGY
Akin to Irbäck & Mohanty (2006), to efficiently sample a candidate conformation from Cp, we will follow a multi-step approach: First, we (i) construct a directed tree for every amino acid in the protein to capture the inherent flexibility in protein structures. Then, we (ii) leverage a directed forest of the protein (described next) to transform its atomic coordinates and check for viability.
Part 1. Directed forest construction. Using the input protein molecule, we construct a directed forest using the following three step procedure: 1. Each amino acid in the protein gets its own directed tree. The atoms in the amino acid’s backbone
form the base (root) nodes (i.e., there can be multiple base nodes in every amino acid). This is illustrated as node 1 in Figure 2’s tree. The set of base nodes of all the amino acids in the protein are jointly referred to as the base nodes of the protein (or equivalently of the directed forest).
2. The atoms adjacent to the amino acid's backbone via a covalent bond become its immediate children in the tree. For example, the node SC1 forms the child of αC in Figure 1.
3. The other covalent bonds of the children establish the grandchildren of the root, and so on, until all the atoms in the amino acid are a part of the tree. For example, SC2 and SC3 become the children of the node SC1 in the directed tree constructed for Figure 1.
Figure 2 is an illustration of a directed tree constructed with the above procedure, where node 1 is the root node, with nodes 2, 4 as its children and so on. It is important to note that the above procedure ensures that, in the constructed tree, each (non-root) node has a single parent and may have arbitrarily many children. Further, cycles/rings present in the molecule are broken on the basis of bond length, i.e., longer bonds are chosen before shorter bonds; ties between same-length bonds are broken arbitrarily. While multiple directed forests could be created for a given protein since bonds of equal length are broken arbitrarily, in our implementation the forest for a given protein is fixed, because the directed forest for every protein is created exactly once (in the preprocessing step), with ties broken deterministically based on the ordering of atoms in the PDB file.
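A minimal sketch of this construction, using networkx (which our implementation also relies on), is given below. The argument names and data layout are assumptions made for illustration and do not reflect the exact interface of our code.

```python
import networkx as nx

def build_directed_forest(bond_graph, amino_acid_of, is_backbone):
    """One directed tree per amino acid, rooted at its backbone (base) atoms.

    bond_graph    : undirected nx.Graph of covalent bonds with a 'length' edge attribute
    amino_acid_of : dict atom index -> amino acid id
    is_backbone   : dict atom index -> bool
    """
    forest = {}
    for aa in set(amino_acid_of.values()):
        members = {v for v in bond_graph if amino_acid_of[v] == aa}
        roots = [v for v in members if is_backbone[v]]          # base nodes of this tree
        tree = nx.DiGraph()
        tree.add_nodes_from(members)
        visited, frontier = set(roots), list(roots)
        while frontier:                                          # BFS outwards from the backbone
            parent = frontier.pop(0)
            children = sorted(
                (c for c in bond_graph.neighbors(parent) if c in members and c not in visited),
                key=lambda c: -bond_graph[parent][c]["length"],  # longer bonds first; remaining ties by input order
            )
            for child in children:
                tree.add_edge(parent, child)                     # rings break here: each non-root node keeps one parent
                visited.add(child)
                frontier.append(child)
        forest[aa] = tree
    return forest
```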
Next, we shall look at how the atomic coordinates are transformed using the directed tree.
Part 2. Conditional transformations of atomic coordinates. Let Xp be the available input atomic coordinates of the protein. To obtain a new candidate protein conformation (say X′p ∈ Rm×3), we sample, uniformly, one of the directed trees from the forest. Next, we transform the atomic coordinates of the subset of atoms in this directed tree; this set is denoted by A ⊂ V. That is, in the candidate conformation, X′p[i] = Xp[i] ∀i ∈ V \A, or equivalently tp[i] = I ∀i ∈ V \A. Starting with the root node, we use breadth first search (BFS) to traverse the directed tree and update all the descendants of the node currently being considered. For the backbone atoms, we leave X′p[i] = Xp[i] (or equivalently tp[i] = I), i.e., atomic coordinates are unchanged for backbone atoms. Next, we use pointed sets to describe the appropriate transformations on a given protein through a directed tree.
Definition 3.1 (Pointed sets). A pointed set is an ordered pair (X,x0), where X is a set and x0 ∈ X is called the basepoint.
For instance, let Mi = {Xp[j] : j ∈ Ni}, where Ni is the set containing i and all its descendants in its directed tree. Then, (Mi,Xp[i]) is a pointed set.
Let the parent of node i be denoted by ◦, and let G_i^{(g_◦·Xp[◦])} (in practice, SO(3)) be the group from which actions g_i ∈ G_i^{(g_◦·Xp[◦])} are sampled uniformly — actions that transform the positional coordinates of node i and its descendants about its parent in the directed tree, where g_◦ · Xp[◦] denotes the (already transformed) atomic coordinates of the parent of node i (we use BFS, i.e., a top-down approach). In the case of the root node (e.g., node 1 in Figure 2), the actions are simply drawn from G_root (G_1 in Figure 2).
The associated transformation of the atomic coordinates of the atoms in N_i is given by the mapping h_i : (M_i, Xp[i]) → (R^{|N_i|×3}, g_◦ · Xp[◦] + g_i · (Xp[i] − Xp[◦])), where the coordinates of node j ∈ N_i are updated as:

X′p[j] ← g_◦ · Xp[◦] + g_i · (Xp[j] − Xp[◦])     (1)

If node j ≠ i, its coordinates X′p[j] can be further updated as we traverse the tree. When G_i^{(g_◦·Xp[◦])} = SO(3) ∀i ∈ V, every node in the directed tree can be rotated about its parent, as shown in Figure 3 (Appendix A.4), which depicts the side chain of the amino acid shown in Figure 1.
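To make Equation (1) concrete, the sketch below applies the BFS update to one directed tree: a rotation g_i is drawn uniformly from SO(3) for every non-root node and applied to that node and its descendants about the (already updated) parent position, while the backbone (root) atoms stay fixed. The function and argument names are illustrative, assuming the forest produced by the earlier sketch.

```python
import networkx as nx
import numpy as np
from scipy.spatial.transform import Rotation

def transform_tree(tree, roots, X):
    """Apply Equation (1) along one directed tree.

    tree  : nx.DiGraph for one amino acid; roots: its backbone atoms (left unchanged)
    X     : (m, 3) array of atomic coordinates for the whole protein."""
    X_new = X.copy()
    frontier = list(roots)
    while frontier:
        parent = frontier.pop(0)
        for child in tree.successors(parent):
            g = Rotation.random().as_matrix()                    # g_i ~ UNIF(SO(3))
            subtree = {child} | nx.descendants(tree, child)      # N_i: node i and all its descendants
            for j in subtree:
                # X'_p[j] <- g_o . X_p[o] + g_i . (X_p[j] - X_p[o]), with the parent already updated
                X_new[j] = X_new[parent] + g @ (X_new[j] - X_new[parent])
            frontier.append(child)
    return X_new
```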
Algorithm 1 Sampling a viable conformation of cp
1: Input: Conformation cp of protein p
2: Obtain the directed forest corresponding to the protein, with n directed trees, where n is the number of amino acids in the protein
3: repeat
4:    Sample y ∼ UNIF({1, 2, · · · , n}) and obtain the corresponding directed tree
5:    Traverse the directed tree via BFS and update the atomic coordinates via Equation (1) to obtain c′p, a candidate conformation of cp — where the group actions are always sampled uniformly from SO(3)
6:    Check whether c′p is a valid protein conformation via protein structure validation tools
7: until c′p is a valid conformation
8: Return: c′p, a valid conformation of cp
However, each candidate X′p obtained via the above transformation process may not belong to Cp. To that end, protein structure validation tools may be used to check the validity of these candidates. Molprobity (Davis et al., 2007; Chen et al., 2010; Williams et al., 2018) is one such tool, which outputs scores for different metrics (such as the number of dihedral (torsion) angles outside the allowed threshold, the number of atom-atom clashes arising from steric repulsions, etc.) that can be compared with those of the originally provided conformation Xp. Note that while Molprobity is not perfect (so we may end up learning models that are invariant to slightly more than just the viable conformations), alternative (more precise) tools, including those which allow for the evaluation of molecular force fields, can be used to check validity. Additionally, the goal of this work is not to use Molprobity or other protein structure validation tools as part of a generative model, and a model that is somewhat more invariant than necessary does not necessarily affect performance on unseen but valid protein test data.
Our algorithm to sample viable conformations is summarized in Algorithm 1. The repeat-loop proposes a conformation in the neighborhood of the current conformation cp which is only accepted when the proposed conformation c′p is a valid conformation. As such it is an acceptance-rejection sampling algorithm. We will see later that the actual sampling probability distribution is not required to be known by our procedure. Our MCMC procedure defined next only requires Algorithm 1 to follow certain conditions which we argue it follows.
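Read together with the earlier sketches, Algorithm 1 reduces to the short acceptance-rejection loop below. Here is_valid stands in for an external structure-validation call (e.g., Molprobity), and the max_tries guard is a practical convenience rather than part of the algorithm as stated.

```python
import random

def sample_viable_conformation(X, forest, backbone_roots, is_valid, max_tries=100):
    """Acceptance-rejection sketch of Algorithm 1 (names are assumptions for this sketch)."""
    for _ in range(max_tries):
        aa = random.choice(list(forest))                                  # y ~ UNIF({1, ..., n})
        candidate = transform_tree(forest[aa], backbone_roots[aa], X)     # Equation (1) via BFS
        if is_valid(candidate):                                           # keep only viable conformations
            return candidate
    return X                                                              # fall back to the current conformer
```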
3.2 SAMPLING CONFORMERS VIA MCMC
We now describe the MCMC procedure that, starting at any conformer configuration c_p^{(0)} ∈ Cp, in steady state samples conformations according to a unique stationary distribution which depends on the protein p, where Cp is the set of all viable conformations of p as defined in Definition 2.2.
To that end, we first define the transition kernel of the Markov chain. Algorithm 1 samples a conformation c′p in the neighborhood of a given conformation cp using a directed forest. The neighborhood N(cp) of the conformation cp is loosely defined as the set of all possible conformations c′p which may result from running Algorithm 1 with cp as input. We denote the probability distribution induced by Algorithm 1 as the transition kernel κ(c′p|cp), which has support N(cp).
Definition 3.2 (Conformer Sampling Markov Chain (CSMC) Φp). We define the conformer sampling Markov chain Φp as a time-homogeneous Markov chain over the state space Cp with transition kernel κ as defined above, where Cp is the set of valid conformations associated with the given protein p as defined in Definition 2.2.
The consistency of our learning procedure does not depend on the precise definition of κ. However, our procedure relies on the fact that any conformer c′p ∈ Cp can be sampled by κ in a finite number of steps, starting from a conformer cp. We use this fact to show that the chain Φp converges to a unique stationary distribution regardless of the initial conformer. (All proofs are in Appendix A.5.)
Proposition 3.3. Given the CSMC Φp from Definition 3.2, whose transitions are governed by κ which is implicitly defined by Algorithm 1, for any pair of conformers cp, c′p ∈ Cp there exists τp < ∞, independent of cp, such that P^{τp}_{Φp}(cp, c′p) > 0, where P^{τp}_{Φp} is the τp-step transition probability.
Next, we show the existence of a unique steady state distribution of our Markov chain.
Proposition 3.4. The CSMC Φp defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution πp such that for all cp ∈ Cp, ‖P^n_{Φp}(cp, ·) − πp(·)‖ ≤ C R^n, where C < ∞ and R < 1 are constants that depend on Φp, P^n_{Φp} is the n-step transition probability, and ‖ · ‖ is the ℓ1 norm.
Given that we now have the ability to draw samples from a Markov chain which achieves a unique stationary distribution πp on Cp, we shall leverage this next, in our learning framework to learn conformer invariant representations of proteins.
4 LEARNING FRAMEWORK
We shall employ a learning strategy, where we use the viable conformations obtained via the Markov chain to learn a function which outputs conformer invariant representations.
Let D = {(xj , yj)}Nj=1 be the input data (where we have a single conformer for every protein in the dataset). We shall consider a single data point (xj , yj) henceforth, where xj = (V,Xs,Xp) is the input protein. We consider a supervised learning setting where yj is the associated target. We consider both classification and regression tasks.
Let Cj = {c_j^i} be the set of viable conformations of protein xj. We only consider viable conformations which differ only in the atomic coordinate matrix Xp. For a given protein xj, we use x_j^i to denote protein xj with the Xp of the original protein replaced by c_j^i, and use Sj = {x_j^i} to denote the set of all viable x_j^i, i.e., the state space.
Let f : Ω → Rd, d > 0, be any function with learnable parameters θf (e.g., any neural network model such as a 3D CNN, GNN, Transformer, LSTM, etc.) which takes a protein (in the form (V,Xs,Xp)) as input and outputs protein representations. The function f need not necessarily output conformer invariant representations of the protein xj. Then, a simple way to obtain a conformer invariant representation of protein xj (apart from using trivial functions such as a constant function or a function independent of Xp) is to compute its expected value over all its viable conformations. We use f̄ to denote a function which outputs conformer invariant representations of a protein:
f̄(xj; θf) = E_{X∼πj}[f(X; θf)]     (2)
where πj(·) is the steady state distribution of the Markov chain associated with the protein xj. As such, f̄(xj; θf) = f̄(x′j; θf) for any viable protein conformation x′j ∈ Sj. Subsequently, to learn an optimal f̄ (which we denote by f̄⋆), we wish to minimize the loss, defined as follows:
L(D; θf, θρ) = (1/N) ∑_{j=1}^{N} L̂(yj, ρ(f̄(xj; θf); θρ))
             = lim_{k→∞} (1/N) ∑_{j=1}^{N} L̂(yj, ρ((1/k) ∑_{i=1}^{k} f(x_j^i; θf); θρ))     (3)
where L̂ is a convex loss function (e.g., the cross entropy loss), ρ is some differentiable function with parameters θρ (in practice, an MLP), and lim_{k→∞} (1/k) ∑_{i=1}^{k} f(x_j^i; θf) can be employed as an asymptotically unbiased and consistent estimate of f̄, where x_j^i is the i-th sample from the Markov chain corresponding to the protein xj.
Since Equation (3) is computationally intractable, we employ a surrogate for the loss given by:
J(D; θf, θρ) = lim_{k→∞} (1/N) ∑_{j=1}^{N} (1/k) ∑_{i=1}^{k} L̂(yj, ρ(f(x_j^i; θf); θρ))     (4)
Observe that in Equation (4), the expectation over the conformations is now outside the L̂ and ρ functions while still remaining conformation invariant (however, the optimal parameters corresponding to minimizing J are different from minimizing L). Following (Murphy et al., 2019b;a), we note that, when ρ is the identity function, Equation (4) serves as an upper bound for Equation (3) via Jensen’s inequality. However, learning representations by averaging over multiple conformations (in training) is still computationally expensive (back-prop and high variance issues). Next, we show how MCGD (Sun et al., 2018) can be leveraged to make this tractable (more details in Appendix A.10).
For simplicity, we next show convergence properties in the full batch setting. Note that this procedure and the proofs are equally applicable in the mini-batch setting with minor modifications.
Full-batch training: We consider the data as a single batch B = {(x1, y1), ..., (xj , yj), ..., (xN , yN )} and for a data point (xj , yj), denote the corresponding steady state distributions of the corresponding Markov chains using πj and the state space as Sj .
We denote all learnable parameters of ρ ◦ f as θ ≡ (θf, θρ) and consider θ ∈ Θ ⊆ Rm, m > 0, such that the function ρ ◦ f may be non-convex for some values of θ. We consider a sequence of step sizes (γk)_{k=0}^{∞} which satisfy:
∑_k γk = +∞,  ∑_k ln k · γk^2 < +∞     (5)
and follow a gradient scheme where parameters are updated as:
θk = θ_{k−1} − γk Zk,  k > 0     (6)
where Zk = (1/N) ∑_{j=1}^{N} ∇θ L̂(yj, ρ(f(x_j^{k−1}; θf_{k−1}); θρ_{k−1})) and x_j^{k−1} is the (k−1)-th sample from the Markov chain corresponding to the data point (xj, yj). We employ the Markov chain gradient descent procedure (Sun et al., 2018), where the loss in epoch k, k > 0, of training is
Ĵk = (1/N) ∑_{j=1}^{N} L̂(yj, ρ(f(x_j^{k−1}; θf_{k−1}); θρ_{k−1})),
which minimizes the surrogate loss given in Equation (4) when k is sufficiently large, so that the Markov chain has converged to its unique steady state distribution.
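A minimal PyTorch sketch of this full-batch MCGD scheme is shown below; chains[j].step() is an assumed interface that advances protein j's Markov chain by one state, and γ_k = c/(k+1) is one step-size schedule satisfying condition (5).

```python
import torch

def mcgd_train(model, loss_fn, dataset, chains, epochs=100, c=0.1):
    """In epoch k, each protein contributes the k-th state of its own Markov chain."""
    params = list(model.parameters())
    for k in range(1, epochs + 1):
        gamma_k = c / (k + 1)                        # sum(gamma_k) = inf, sum(ln k * gamma_k^2) < inf
        model.zero_grad()
        total = 0.0
        for j, (x_j, y_j) in enumerate(dataset):
            conf = chains[j].step()                  # x_j^{k-1}: next state of protein j's chain
            total = total + loss_fn(model(conf), y_j)
        (total / len(dataset)).backward()            # gradient of the epoch loss, i.e. Z_k
        with torch.no_grad():
            for p in params:
                p -= gamma_k * p.grad                # theta_k = theta_{k-1} - gamma_k Z_k
    return model
```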
Next, we list the set of assumptions we make to ensure convergence of our conformer invariant MCGD procedure. Assumption 4.1. We make the following assumptions:
1. For any θ ∈ Θ and x_j^k ∈ Sj, the function f is differentiable, ∀j.
2. sup_{θ∈Θ, x_j^k∈Sj} ||∇θ ρ ◦ f(x_j^k)|| < +∞, i.e., the gradients are bounded.
3. ∀x_j^k ∈ Sj, ∀θ1, θ2 ∈ Θ, ||∇_{θ1} ρ ◦ f(x_j^k) − ∇_{θ2} ρ ◦ f(x_j^k)|| < L||θ1 − θ2|| for some L ≥ 0, i.e., the gradients are L-Lipschitz.
4. E_{x_j^k∼πj}[∇θ ρ ◦ f(x_j^k)] = ∇θ E_{x_j^k∼πj}[ρ ◦ f(x_j^k)].
Next, we formally state the convergence of our optimization procedure to optimal parameters θ⋆ which yield conformer invariant protein representations.
Proposition 4.2. Let the step sizes satisfy (5), let the function parameters θ be updated as in (6), and let Assumption 4.1 hold; then the MCGD optimization enjoys almost sure convergence to the optimal θ.
Using the Markov chain gradient descent procedure to train the conformer invariant learning framework optimizes the objective J in Equation (4), and thus has the following implication for how outputs should be computed at inference time:
Inference time: We estimate f̄ using an empirical average of f evaluated over conformations visited by the Markov chain (in practice, we average the final layer [pre-softmax] representations):
f̂_k(xj; θf) = (1/k) ∑_{i=1}^{k} f(x_j^{(i)}; θf),     (7)
where x_j^{(i)} is the i-th state visited by the Markov chain started at xj. Since the Markov chain has a unique steady state distribution, f̂_k(xj; θf) is both asymptotically unbiased and consistent. That is, lim_{k→∞} f̂_k(xj; θf) = f̄(xj; θf) almost surely.
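In code, the estimator in Equation (7) is a simple average over states visited by the chain, as in the sketch below; model.representation and chain.step are assumed interfaces (with the chain assumed to have been started at xj), not the actual API of our implementation.

```python
import torch

@torch.no_grad()
def conformer_invariant_prediction(model, chain, k=32):
    """Average final-layer (pre-softmax) representations over k sampled conformations."""
    reps = []
    for _ in range(k):
        conf = chain.step()                       # x^{(i)}: i-th state visited by the Markov chain
        reps.append(model.representation(conf))
    return torch.stack(reps).mean(dim=0)          # asymptotically unbiased, consistent estimate of f_bar
```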
5 RESULTS
We evaluate our proposed augmentation procedure on multiple different tasks from the ATOM3D dataset (Townshend et al., 2020). Results are provided in Table 1.
TASKS ON ATOM3D DATASETS
ATOM3D (Townshend et al., 2020) is a unified collection of datasets concerning the 3D structure of proteins. These datasets are specifically designed to provide a benchmark for ML methods which operate on 3D molecular structure, and represent a variety of important structural, functional, and engineering tasks. As part of our evaluation, we perform the (a) Protein Structure Ranking (PSR) (b) Ligand Efficacy Prediction (LEP) (c) Protein Mutation Stability Prediction (MSP) (d) Ligand Binding Affinity (LBA). We describe each of the tasks in detail in Appendix A.7. For all the datasets and tasks we report the same metrics as proposed by the authors in Townshend et al. (2020).
MODEL, BASELINES AND DISCUSSION
We endow the vector-gated GVP-GNN model (Jing et al., 2021), a GNN using GCN layers (Kipf & Welling, 2016), and the E(n) GNN (Satorras et al., 2021) with conditional transformations from our proposed MCMC method. It is important to note that when the positions of the atoms are altered, the protein graphs which serve as inputs to the base encoders are changed appropriately in every epoch. As an ablation study, we also provide (in the Appendix) a strategy where all the transformations are created from the “gold standard" Xp rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training.
Looking at Table 1, we note that models augmented with our proposed conformer invariance framework, in general, outperform the baseline models on multiple tasks for which conformer invariance is a requirement. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here: a GNN which does not explicitly incorporate any atomic positional information outperforms all other models that do have access to atomic coordinates.
Additional results and ablation studies are presented in Appendix A.9.
COMPUTATIONAL COMPLEXITY & SCALABILITY:
The directed forest for every protein is computed only once as a preprocessing step (a fixed ≈ 5-10 minutes per dataset). At every epoch, for a given protein, we select only one among all its constituent amino acids and perform a sequence of 3×3 matrix multiplications to obtain a new conformer. If k denotes the number of atoms in the side chain of an amino acid, and the height of a directed tree is on the order of log(k), the computational complexity to obtain a conformation is O(k log k) (where k ≈ 15 on average in our datasets), and this can be run in parallel for all proteins in a mini-batch. Molprobity (run in parallel again; it currently runs externally on a web server via a command line call) adds a 2s delay per minibatch (this could be reduced to milliseconds if Molprobity were integrated into our code). The rejection rate from Molprobity is also very small (fewer than 1 rejection on average across 100 Molprobity calls). As an example, training (100 epochs) GVP-GNN on MSP requires ≈ 45 min, and on LBA requires ≈ 34 min; in both cases our method adds an overhead of about 4 min. A complete training time table (on an Nvidia Tesla V100 GPU) for all models and datasets can be found in Appendix A.11.

6 RELATED WORK

Here, we present a high level summary of works related to ours. A more detailed comparison to the works mentioned below (and others) is presented in Appendix A.6.
Group Equivariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous symmetries of elements (e.g. images) by introducing group theoretic constraints as inductive biases in the neural network. While group equivariant neural network capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and applications of group equivariant neural networks.
Graph Neural Networks: Graph Neural Networks (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs.
Group Equivariant Graph Neural Networks: These networks combine continuous symmetries (Lie groups) with permutation equivariances and have found applications with resounding success on small molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021), which exhibit rigid body characteristics. Employing (Farina & Slade, 2021) would make the neural network excessively invariant and allow the protein to be more flexible (allowing unviable conformations) than it truly is. More recently, these networks have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) which samples protein conformations by defining or inheriting a distribution over the conformations (largely based on the properties of the bonds, etc.) and then sampling from it. These methods can seamlessly be plugged into our framework as long as the MCMC is ergodic.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021).
Generative Models: Conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also gained attention recently with the goal to predict 3d structure of molecules given their 2d structure - our objective in this work is very different, but can be used to improve predictions of the aforementioned models.
7 CONCLUSIONS
This work addresses the limitations of current protein representation learning methods which are unable to learn conformer invariant representations — and hence unable to capture the inherent flexibility present in protein side chains pertinent to many downstream tasks. To address these, we introduced conditional transformations to capture protein structure, while respecting the restrictions posed by constraints on dihedral (torsion) angles and steric repulsions between atoms. Subsequently, we introduced a Markov chain Monte Carlo based framework to learn representations that are invariant to these conditional transformations.
A APPENDIX
A.1 COMMONLY USED ACRONYMS IN THE PAPER
1. G - group
2. Sx - Set of transformations unique to element x
3. tx - Transformation action which transforms the element x
4. m - Number of atoms in the protein under consideration
5. n - Number of amino acids in the protein under consideration
6. Xs - Matrix of Scalar features associated with atoms in the protein p
7. Xp - Matrix of atomic coordinates associated with atoms in the protein p
8. Cp - Set of conformations of protein p
9. SO(3) - Group of all rotations in 3D space
A.2 GROUP THEORY PRELIMINARIES
Definition A.1 (Group). A group is a setG equipped with a binary operation · : G×G→ G obeying the following axioms:
• for all g1, g2 ∈ G, g1 · g2 ∈ G (closure).
• for all g1, g2, g3 ∈ G, g1 · (g2 · g3) = (g1 · g2) · g3 (associativity).
• there is a unique e ∈ G such that e · g = g · e = g for all g ∈ G (identity).
• for all g ∈ G there exists g−1 ∈ G such that g · g−1 = g−1 · g = e (inverse).
Definition A.2 (Group invariant functions). Let G be a group acting on a vector space V. We say that a function f : V → R is G-invariant if f(g · x) = f(x) ∀x ∈ V, g ∈ G.
Definition A.3 ((Left) Group Action). For a group G with identity element e and a set X, a (left) group action α of G on X is a function α : G×X → X that satisfies the following two conditions:
1. Identity: α(e, x) = x, ∀x ∈ X
2. Compatibility: α(g, α(h, x)) = α(gh, x)
We will use a short hand of g · x for α(g, x) when the action being considered is clear from context.
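As a quick numerical illustration of Definitions A.1-A.3 (purely for intuition), the snippet below checks the group axioms and the two action conditions for G = SO(3) acting on R^3 by matrix-vector multiplication.

```python
import numpy as np
from scipy.spatial.transform import Rotation

g, h = Rotation.random().as_matrix(), Rotation.random().as_matrix()
x = np.random.randn(3)

gh = g @ h
assert np.allclose(gh @ gh.T, np.eye(3)) and np.isclose(np.linalg.det(gh), 1.0)  # closure in SO(3)
assert np.allclose(np.eye(3) @ x, x)                                             # identity action
assert np.allclose(g @ (h @ x), gh @ x)                                          # compatibility
assert np.allclose(g.T @ (g @ x), x)                                             # inverse (g^{-1} = g^T)
```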
A.3 EXAMPLE DEMONSTRATING NON GROUP STRUCTURE OF A SET OF PROTEIN CONFORMATIONS
Consider X to be a protein with m atoms and n amino acids, and let the set of viable conformations of X be {X1, X2, . . . , Xi, . . . , Xp}. Let X1 be the conformation available in our dataset.
In each step of the transition, only a few atoms (≪ m) are subjected to an action from the SO(3) group: the atoms in a single amino acid of the protein, where the protein is in general made up of many hundreds of amino acids (n). The positions of all other atoms (outside that amino acid) in the side chain remain unaltered. So, in the Rm×3×3 matrix, most of the 3×3 entries are the 3-dimensional identity matrix.
Let T_1^i ∈ Rm×3×3, where i ∈ {1, 2, . . . , p}, be the transformation which yields conformation Xi from X1.
Now consider T_1^2 (subscript 1 since we start from conformation X1), which takes X1 to X2, and T_1^3, which takes X1 to X3, where T_1^2 and T_1^3 do not act on the same amino acid in the protein. Now, however, consider the case where performing T_1^2 and T_1^3 (in either order) sequentially would result in atoms in two different amino acids overlapping (or coming too close to each other, causing steric repulsion), therefore resulting in a non-viable conformation, i.e., the composition of T_1^2 and T_1^3 acting on X1 would not be present in the set of all allowed transformations T1 = {T_1^1, . . . , T_1^i, . . . , T_1^p}. Therefore the set T1 is not closed and does not form a group.
Also, for two different conformations X1 and X2, their allowed transformation sets T1, T2 will not be identical (in the above example, T_1^3 ∉ T2). It is also easy to construct a case where ⋃_{i=1}^{p} Ti is not a group, and actions from this union acting on any given Xi will not necessarily result in a viable/valid conformation.
A.4 FLEXIBILITY ALLOWED BY OUR PROPOSED MODEL
Our proposed model allows every node in the directed tree to be rotated about its parents. For example, for the side chain shown in Figure 1 (Main Paper), the allowed flexibility is Figure 3.
A.5 PROOFS OF PROPOSITIONS
First, we restate and prove Proposition 3.3.
Proposition A.4. Given the CSMC Φp from Definition 3.2, whose transitions are governed by κ which is implicitly defined by Algorithm 1 as described above, for any pair of conformers cp, c′p ∈ Cp there exists τp < ∞, independent of cp, such that P^{τp}_{Φp}(cp, c′p) > 0, where P^{τp}_{Φp} is the τp-step transition probability.
Proof. Proof by construction. We prove the proposition by showing that one can construct a path (c_p^{(1)} = cp, . . . , c_p^{(t)} = c′p) such that c_p^{(i)} ∈ Cp and κ(c_p^{(i+1)} | c_p^{(i)}) > 0 for all 0 < i < t, and t ≤ τp. The trivial case where cp ≡ c′p is proved since every group contains the identity element — sampling the identity element for every node in the directed tree yields the same conformation. Since we consider only non-backbone-transforming conformations, for the non-trivial case a maximum of m − 4n atoms can differ in position between any two conformations, where m is the number of atoms in the protein and n is the number of amino acids in the protein. Both m, n are finite and we are dealing with continuous conformers (and continuous group actions about every node — groups are closed under their associated binary action, and SO(3) is path connected). So we can traverse between conformers (until we reach the desired conformer) sequentially in a finite number of steps by using the constructed directed forest: selecting an amino acid (which does not violate viability), fixing the positions of all other amino acids in the protein, and rotating the side chain atoms in a single conformer to the final desired state. While this process may result in some side chains being visited multiple times (due to viability constraints), considering continuous conformers and the SO(3) group (which is path connected) ensures we will never reach a state of deadlock. The second condition is satisfied because every group action has an inverse and we only use transformations from SO(3) for every node in the directed trees.
Next, we restate and prove Proposition 3.4.
Proposition A.5. The CSMC Φp defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution πp such that for all cp ∈ Cp, ‖P^n_{Φp}(cp, ·) − πp(·)‖ ≤ C R^n, where C < ∞ and R < 1 are constants that depend on Φp, P^n_{Φp} is the n-step transition probability, and ‖ · ‖ is the ℓ1 norm.
Proof. By Proposition 3.3, Φp satisfies Doeblin's condition as defined on page 396 of Meyn & Tweedie (2012), which states that for cp, c′p ∈ Cp, P^{τp}_{Φp}(cp, c′p) > ε for some ε > 0.¹ The uniform ergodicity then holds due to Theorems 16.2.3 and 16.2.1 from Meyn & Tweedie (2012).
Next, we restate and prove Proposition 4.2. We also restate the required assumptions for the proposition. Assumption A.6. We make the following assumptions:
1. For any θ ∈ Θ and x_j^k ∈ Sj, the function f is differentiable, ∀j.
2. sup_{θ∈Θ, x_j^k∈Sj} ||∇θ ρ ◦ f(x_j^k)|| < +∞, i.e., the gradients are bounded.
3. ∀x_j^k ∈ Sj, ∀θ1, θ2 ∈ Θ, ||∇_{θ1} ρ ◦ f(x_j^k) − ∇_{θ2} ρ ◦ f(x_j^k)|| < L||θ1 − θ2|| for some L ≥ 0, i.e., the gradients are L-Lipschitz.
4. E_{x_j^k∼πj}[∇θ ρ ◦ f(x_j^k)] = ∇θ E_{x_j^k∼πj}[ρ ◦ f(x_j^k)].
Proposition A.7. Let the step sizes satisfy (5) and the function parameters θ be updated as (6) and Assumption 4.1 hold, then the MCGD optimization enjoys properties of almost sure convergence to the optimal θ.
Proof. Given that each protein has an associated time homogeneous Markov Chain with a unique steady state, independent of other proteins, the set of proteins in a mini-batch also form a Markov chain with a unique steady state. We then leverage Corollary 2 (Page 12) of Sun et al. (2018) along with Proposition 4.2 to ensure almost sure convergence to the optimal θ.
A.6 EXTENDED RELATED WORK
Here, we elaborate on the related works section in Section 6
Group Equivariant and Invariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous group symmetries of elements (e.g., images, point clouds). In this work, we learn representations which are invariant not just to group symmetries but to more general, input dependent sets of transformations. To the best of our knowledge, our work is the first to consider conditional (input dependent) invariances. Prior works on learning invariant models have leveraged Monte Carlo procedures (Finzi et al., 2020; Murphy et al., 2019b) to learn to be invariant to transformations of the input. Alternatively, our work constructs a Markov chain with a unique steady state and leverages MCGD (Sun et al., 2018) to make it computationally tractable. Secondly, the aforementioned approaches are limited to being invariant to transformations from a specified Lie group with a surjective exponential map or a permutation group, while our work is not limited to groups. Thirdly, our theory ensures that every example/object in the dataset can have a different set of input dependent, conditional transformations
¹We note that this is a simplified version of the actual statement, which is defined on the σ-algebra over Cp denoted by σ(Cp). Our proof holds when c′p ∈ σ(Cp).
- and in fact can be seen as a generalization of the above works. In fact, the ablation study that we perform (Results provided in Appendix A.9 - Table 4) uses a Monte Carlo estimator and our MCMC procedure yields better performance than the Monte Carlo estimator. While group equivariant neural network capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). Gauge symmetries require the manifold to be smooth – which is not the case for proteins. Moreover, different proteins have different sets of viable conformations, which would not be able to captured by standard gauge equivariant neural networks. (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and a wide variety of applications of group equivariant neural networks. A more comprehensive theoretical analysis of input dependent conditionally invariant neural networks is planned for future work.
Graph Neural Networks: Graph Neural Networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (Murphy et al., 2019a) (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs. Both molecular graphs (both small molecules and macromolecules) as well as graphs based on intra molecular distances have been used with GNNs, to achieve state of the art for many molecular datasets and tasks (Hu et al., 2020; Morris et al., 2020). Here, we leverage three graph based neural networks as baselines for our model.
Group Equivariant Graph Neural Networks: Group equivariant GNNs combine continuous symmetries (Lie groups such as SE(3), E(3), SO(3)) with permutation equivariances and have found applications with resounding success on small and large molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021). However, these methods are only able to capture rigid body characteristics of molecules; our approach, while capturing the above Lie group symmetries, is also able to capture input dependent transformations. Employing (Farina & Slade, 2021) would make the neural network excessively invariant and allow the protein to be more flexible (allowing unviable conformations) than it truly is. More recently, these networks have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) which samples protein conformations via internal coordinate transformations. However, the goals of the existing MCMC methods are significantly different from ours. The existing methods define or inherit a distribution over the conformations (largely based on the properties of the bonds, etc.) and then aim to sample highly probable conformations from this distribution. Our method is much simpler and only requires that the chain being used is ergodic; it is invariant to the actual form of the distribution. We note that the existing Markov chains can seamlessly be used as drop-in replacements to sample conformations as part of our framework as long as the MCMC is ergodic. We consider studying the impact of different MCMC methods (which sample from different distributions) and their influence on performance in different tasks as important future work.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021). In this work, while we use GNNs and Group Invariant GNNs as a part of the model, we note that we can equally replace them with CNNs, LSTMs, Transformers and other models used for proteins without any change in the underlying theory.
Tree Construction Method for Molecules: Jin et al. (2018), in their work JTVAE, propose a procedure to construct trees from molecules. We note that JTVAE is a more general purpose approach for constructing trees from molecules and may not be best suited for proteins, where the backbone atoms are largely observed (experimentally) to be rigid.
Generative Models: Protein conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also recently gained attention, where the goal of the model is to predict the 3D structure of molecules given their 2D structure; our objective in this work is completely different, but can be used to improve predictions of the aforementioned models. While our model is explicitly not a generative model, our framework can be leveraged towards generative modeling with the help of tools such as noise outsourcing (Chapter 6) (Kallenberg, 2006), and we see this as important future work.
Non-Rigid Body Dynamics: Non-Rigid Body Dynamics of objects has long been studied both by physicists and in the fields of computer vision to understand and capture the geometric deformations of objects (Taylor et al., 2010; Masci et al., 2015). To the best of our knowledge, there exists no prior work in deep learning which captures the non rigidity of protein molecules (which cannot be modeled as Ck manifolds). As important future work, we would like to study the impact of leveraging input dependent conditional invariances for modeling other geometric objects (which are Ck manifolds) as well as images and robotics (e.g. the symmetries for a humanoid is different from that of a tractor).
Unrelated Work with similar names: Non classical and conditional symmetries of solutions to ODE’s, PDE’s have been discussed in the past - these works while they share a similar title, have very little in common as we are not dealing with jet spaces or manifolds (Joseph, 1968; Fushchich & Zhdanov, 1992; Olver & Vorob’ev, 1996).
A.7 DETAILS ABOUT DATASETS AND TASKS
In this section, we describe briefly each of the datasets (and their associated tasks). Information about the splits and license information is provided in Table 2.
PSR: This task utilizes data from the structural models submitted to the Critical Assessment of Structure Prediction competition (CASP - Kryshtafovych et al. (2019) - a blind protein structure prediction competition) to rank protein structures from the experimentally determined structure of the protein. The problem is formulated as a regression task, where we predict the global distance test of each structural model from the experimentally determined structure. As prescribed by the dataset authors, the dataset is split by competition years.
MSP: The goal of this task is to identify mutations that stabilize a protein's interactions, which forms an important step towards the design of new proteins. This task is significant as experimental techniques for probing mutations are labor-intensive. ATOM3D (Townshend et al., 2020) derives this dataset by collecting single-point mutations from the SKEMPI database (Jankauskaitė et al., 2019) and modeling each mutation into the structure to produce a mutated structure. The learning problem is then formulated as a binary classification task where the goal is to predict whether the stability of the complex increases as a result of the mutation. We employ the same splits as suggested by the dataset authors, wherein the protein complexes are split such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LBA: This task deals with the problem of predicting the strength (affinity) of a candidate drug molecule’s interaction with a target protein. The dataset is constructed using the PDBBind database (Wang et al., 2004; Liu et al., 2015), a curated database containing protein-ligand complexes from the PDB and their corresponding binding strengths (affinities). The task is formulated as a regression task with the goal to predict pK = − log10(K), where K is the binding affinity in Molar units. The splits are created such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LEP: The shape of a protein impacts whether it is in an on or off state, which plays an important role in predicting the shape a protein will favor during drug design. This dataset is obtained by curating proteins from several families with both “active" and “inactive" state structures, and modeling in 527 small molecules with known activating or inactivating function using the program Glide (Friesner
et al., 2004). The task is formulated as a binary classification task where the goal is to predict whether a molecule bound to the structures will be an activator of the protein’s function or not. We use the same split as recommended by the ATOM3D authors.
A.8 EXPERIMENTAL SETUP
The code for the baseline models (GVP-GNN (Jing et al., 2021), E(n) GNN (Satorras et al., 2021) and GNN(GCN) (Townshend et al., 2020; Kipf & Welling, 2016)) was used as provided by the authors (licenses as dictated by the code authors). Our conformer invariance implementation is in PyTorch using Python 3.8. We also leverage networkx to create the directed forests. For all three models we tune the hyperparameters – learning rate (∈ {0.1, 0.01, 0.001, 0.0001}) and mini batch size (∈ {4, 8, 16, 32, 64}). For the E(n) GNN model – since there have been no previous models for the aforementioned protein tasks – we also tune the number of GNN layers (∈ {4, 5, 6, 7}) as a hyperparameter. The experiments were all performed on Tesla V100 GPUs. For more details, refer to the code provided.
A.9 ADDITIONAL RESULTS
In Table 3, we present results on additional datasets and tasks beyond those presented in the main paper. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here, where a GNN which does not incorporate any rigid body transformations outperforms all other models.
In Table 4, we present an ablation study, where we provide a strategy where all the transformations are created from “gold standard" Xp rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training. From the table, we note that the MCMC method tends to outperform the non MCMC method, which can be attributed to the guarantees it provides to the learning framework.
A.10 CASE AGAINST USING AVERAGE REPRESENTATIONS IN TRAINING AND FOR REQUIRING CONFORMER INVARIANT REPRESENTATIONS
The case for requiring a single conformer invariant representation can be seen from the fact that different protein conformations, say X1, X2, of the same protein X may be seen during the train and test phases. Without conformer invariant representations, this may lead to different representations and therefore different predictions for the same protein, which is undesirable.
One may argue that an approximately conformer invariant neural network can be learned by averaging representations of multiple Monte Carlo conformations during training. However, this is computationally expensive, as it requires back-propagating and updating parameters for multiple conformations of every protein in every epoch, leading to a prohibitive overhead. On the other hand, our procedure with MCGD uses only one conformation from our Markov chain per protein in every epoch and still ensures convergence to optimal parameters. It is important to note that during inference we do average representations over multiple conformations (from our Markov chain) to output conformer invariant representations, which, again due to the MCGD training procedure and the unique steady state of our Markov chain, ensures that conformer invariant representations are achieved and yields better performance than the current state of the art on multiple datasets and tasks. As stated in the main paper, our framework can work well with other Markov chains as well. We chose the proposed chain purely because it is simpler than the existing methods (in that it does not require a distribution over conformations to be assumed) and as such is easier to sample from.
A.11 TRAINING TIME
In Table 5, we present the training time (for 100 epochs) for each of the baseline models as well as for models with our proposed conformer invariance framework addition. From the table, we note that on average the increase in training time over 100 epochs is ≈ 3-4 minutes, which is negligible in comparison to the training time without the proposed framework addition. | 1. What is the focus and contribution of the paper regarding protein property prediction?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of its ability to consider non-rigid transformations?
3. Do you have any concerns about the theoretical analyses and their limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the presentation quality and consistency of notations and acronyms used throughout the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors proposed a "conformer invariant representation learning" for protein property prediction in this submission. The basic idea is to sample viable conformation based on the given input protein and conformation. MCMC for sampling conformer distribution is integrated to representation learning via the existing Monte-Carlo Gradient Descent (MCGD) algorithm. The authors have also reported empirical results for four protein property prediction tasks on the ATOM3D data set.
Strengths And Weaknesses
The authors developed a learning framework that may help protein representation learning by considering possible conformation transformations for given input proteins. The procedure for generating and sampling viable conformations is integrated into protein representation learning via MCGD, which may help achieve better performance on protein property prediction tasks.
This reviewer has some concerns:
The theoretical analyses for the ergodicity and the convergence to the steady-state conformer distributions seems to be problematic. Based on the description of the procedure, the sampled conformers will be validated by "structure validation tools such as Molprobity". Introducing such procedures may violate the desired ergodicity critical in propositions 3.2 and 3.3. Based on the presentation, this reviewer was not sure whether the proposed directed forest construction procedure will always guarantee that all the viable conformers can indeed be included. If not, the procedure does not really have the guarantee to be "conformer invariant" exactly.
The theoretical analysis appears to be dependent on Definition 2.2 of "rigid backbone protein conformations" while in the main text and supplement, the authors emphasized that the proposed work is for non-rigid transformations. The authors may clearly state the limitations of the proposed procedure.
The presented empirical results are limited, with the proposed procedure under-performing the baseline that does not consider "conformer invariance" on some tasks. Also, the proposed procedure appears to share some similarity with augmentation tricks. The authors may want to move the results in the appendix with augmentations into the performance comparison experiments of the main text to check whether the proposed procedure is meaningful or significant. Also, if the authors emphasize that considering more flexible conformers is critical, a performance comparison with other models considering rigid transformations may need to be included.
Clarity, Quality, Novelty And Reproducibility
The efforts of considering possibly non-rigidity of protein conformations besides other group invariance in protein representation learning can be important. However, there are several major concerns on the presented theoretical results. More comprehensive empirical results may be needed to show the significance of the developed procedure. There are also problems in presentation quality as detailed below:
The authors should pay careful attention to math notations. For example, on page 3, when introducing the math representation of protein conformers, it is not clear whether the node set is indexed by the number of amino acids or by the atoms in each amino acid. What did the authors mean by "m-atom" proteins? If the authors directly modeled m atoms, why V={1, ..., n} before Definition 2.2?
The notations in the last paragraph and Figure 2 on page 4 are not consistent either. For example, the group of actions is denoted with one notation for G in the last paragraph but with a different notation for G in Figure 2 and its caption. Some notations and acronyms in the paper were not defined clearly, which could be improved for better readability.
ICLR | Title
Conditional Invariances for Conformer Invariant Protein Representations
Abstract
Representation learning for proteins is an emerging area in geometric deep learning. Recent works have factored in both the relational (atomic bonds) and the geometric aspects (atomic positions) of the task, notably bringing together graph neural networks (GNNs) with neural networks for point clouds. The equivariances and invariances to geometric transformations (group actions such as rotations and translations) so far treat large molecules as rigid structures. However, in many important settings, proteins can co-exist as an ensemble of multiple stable conformations. The conformations of a protein, however, cannot be described as input-independent transformations of the protein: Two proteins may require different sets of transformations in order to describe their set of viable conformations. To address this limitation, we introduce the concept of conditional transformations (CT). CT can capture protein structure, while respecting the constraints on dihedral (torsion) angles and steric repulsions between atoms. We then introduce a Markov chain Monte Carlo framework to learn representations that are invariant to these conditional transformations. Our results show that endowing existing baseline models with these conditional transformations helps improve their performance without sacrificing computational efficiency.
1 INTRODUCTION
The literature on geometric deep learning has achieved much success with neural networks that explicitly model equivariances (or invariances) to group transformations (Cohen & Welling, 2016; Maron et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020). Among applications to physical sciences, group equivariant graph neural networks and transformers have specifically found applications to small molecules, as well as large molecules (e.g. proteins) with tremendous success (Klicpera et al., 2020; Anderson et al., 2019; Fuchs et al., 2020; Hutchinson et al., 2021; Satorras et al., 2021; Batzner et al., 2021). Specifically, machine learning for proteins (and 3D macromolecular structures in general) is a rapidly growing application area in geometric deep learning, (Bronstein et al., 2021; Gerken et al., 2021). Traditionally, proteins have been modeled using standard 3D CNNs (Karimi et al., 2019; Pagès et al., 2019), graph neural networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017), and transformers (Vaswani et al., 2017). More recently, several works Jing et al. (2020; 2021); Jumper et al. (2021); Hermosilla et al. (2021) have enriched the above models with neural networks that are equivariant (invariant) to transformations from the Euclidean and rotation groups. While equivariance (invariance) to the transformations in these groups are necessary properties for the model, unfortunately, they are limited to only capture rigid transformations of the input object.
However, these models may not yet account for all invariances of the input pertinent to the downstream task. And the transformations they are invariant to do not depend on the input. For instance, invariance to the Euclidean group restricts the protein representation to act as if the protein were a rigid structure, regardless of the protein under consideration. However, treating proteins as rigid structures may not be optimal for many downstream tasks. And different proteins may have different types of conformations (protein 3D structures with flexible side chains) (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016) . The existing rigid body assumption in protein representations may hurt these methods in datasets & downstream tasks that require protein representations to be invariant to a specific set of protein conformations. For example, for most proteins, regardless of their side chain conformation (as long as viable) under consideration – their protein fold class/ other scalar properties remain the same, their mutation (in)stability remains unaltered, protein ligand binding affinity (apart from changes at the ligand binding site) remain the same, etc.
In light of this limitation of current methods, a question naturally arises: Is it possible to learn conformer-invariant protein representations?
Our Approach: We propose a representation method where the set of symmetries in the task are input-dependent, which we denote conditional invariances. For our specific application, we model every protein as a directed forest, where every amino acid forms a directed tree. We leverage the constructed directed forest of the protein to sample viable protein conformations (where viability is checked with Protein structure validation tools such as Molprobity (Davis et al., 2007; Chen et al., 2010)), which is additionally coupled with a Markov chain Monte Carlo (MCMC) framework to create data augmentations to train the neural network to learn conformer invariant representations.
Our contributions can be summarized as follows:
• We provide guiding principles for defining conditionally invariant representations as inductive biases, which serve as a generalization of group invariant neural networks.
• We provide a principled strategy to sample conformations for any given protein from the support of its protein conformer distribution (which consists of all its viable protein conformations), capturing their true flexibility. Viable conformations respect domain-specific constraints such as those on dihedral angles and steric repulsions, among others. This is particularly useful when the set of viable transformations differs across proteins.
• Further, we develop an MCMC-based learning framework which guides the sampling of protein conformations such that the (asymptotic) empirical average representation is identical across all viable conformations of a protein.
• Finally, we perform experimental evaluation of our proposal, where endowing baseline models with our proposed strategy (via an implicit data augmentation scheme) shows noticeable improvements on multiple different classification and regression tasks on proteins.
2 CONDITIONAL INVARIANCES FOR PROTEINS
The symmetry of an object is the set of transformations that leaves the object invariant. The notion of symmetries, expressed through the action of a group on functions defined on some domain, has served as a fundamental concept of geometric deep learning. In this section, we start by defining the concept of input-dependent conditional symmetries and then proceed to specifically tailor these conditional symmetries so that they are both computationally efficient and useful for representing protein conformations. Some group theoretic preliminaries are presented in Appendix A.2.
Definition 2.1 (Conditionally symmetric-invariant functions). A function f : Ω → R is said to be conditionally symmetric-invariant if
f(t_x · x) = f(x), ∀ t_x ∈ S_x, ∀ x ∈ Ω,
where S_x is a set of transformations unique to element x and t_x : Ω → Ω.
It is easy to see that conditionally invariant functions are a generalization of group invariant functions, which are recovered when S_x ≡ G, ∀ x ∈ Ω, where G is a group. The above definition is motivated by the fact that representations for proteins may not necessarily be limited to be invariant just to group actions, but rather to a more general set of protein specific transformations. We detail this motivation next.
A protein molecule is composed of amino acids (say n amino acids), where each atom in the protein belongs to one amino acid ∈ {1, . . . , n}. Excluding the hydrogen atoms, every amino acid contains four atoms known as the backbone atoms (see Figure 1); the other atoms belong to the side chain. Traditionally, protein structures are solved by X-ray crystallography or cryo-EM, and the obtained structure is normally considered a unique 3D conformation of the molecule. Molecules available via the Protein Data Bank (Berman et al., 2000) generally include only unique sets of coordinates to represent the 3D structures, which leads to a unique answer being treated as the ‘gold standard’ in structure prediction, e.g., in protein structure prediction competitions such as CASP (Kryshtafovych et al., 2019). Under these assumptions, protein side-chain conformations are usually assumed to be clustered into rotamers, which are rigid conformations represented by discrete side-chain dihedral angles.
However, works over the past decade (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016) have observed that — while the backbone atoms of all n amino acids in a protein molecule together (i.e. 4n atoms) form a rigid structure — its side chains can exhibit flexible (continuous and discrete) conformations beyond clustered rotamers. The underlying goal of our work is to capture this inherent flexibility of proteins and to ensure that different conformers of the same protein get identical representations (more details in Appendix A.10). The problem of capturing protein conformations is further compounded by the fact that protein conformations are oftentimes unique to the protein, i.e., the conformations exhibited by a protein are input (protein) specific. The desideratum, therefore, is a model which outputs conformation invariant representations even though the conformations (symmetries) exhibited by a protein vary across proteins and only a single protein conformer is available.
We denote a protein as a tuple p = (V, X_s, X_p) (data from pdb files), where V is the set of nodes (every atom is a node) in the protein, X_s ∈ R^{m×d}, d ≥ 1, is the scalar atom feature matrix associated with the atoms, and X_p ∈ R^{m×3} (where m denotes the number of atoms in the protein) are the positional coordinates associated with the atoms in the protein. Without loss of generality, we number the nodes in V = {1, . . . , m} following the same ordering as the rows in X_s, X_p.
Definition 2.2 (Rigid Backbone Protein Conformations). Consider an m-atom protein p with atomic coordinates X_p ∈ R^{m×3} (recall that by definition p contains the information in X_p), and let C_p ⊂ R^{m×3} denote the set of viable conformations of p which keep the positional coordinates associated with the backbone fixed. We use T_p ⊂ R^{m×3×3} (where a 3×3 matrix is associated with every single atom) to denote the set of non-isometric transformations of p which yield the set C_p, i.e., ∀ c_p ∈ C_p, ∃ t_p ∈ T_p such that, for an atom with index i ∈ {1, . . . , m}, the new position of atom i is X_p[i] t_p[i]^T = c_p[i], with X_p[i] ∈ R^{1×3} and t_p[i] ∈ R^{3×3}.
Tp forms a set of transformations which acts on the protein atomic coordinates via matrix multiplication. Two elements of Tp (with corresponding matrices of individual atoms and then aggregating for the protein as a whole) may or may not combine via matrix multiplication (as the binary operation) to result in another element of Tp i.e. matrix multiplication is not necessarily closed in Tp. A concrete example of the non group structure of protein conformations is provided in Appendix A.3.
Unfortunately, sampling transformations from T_p to obtain viable protein conformations is an unattainable task, since this would entail first sampling a transformation from R^{m×3×3} and then verifying whether the transformation is viable (and the vast majority of transformations are not viable). We address this issue using an MCMC method, which is then combined with MCGD training of the neural network (Sun et al., 2018) to learn conformation invariant protein representations, where the different viable conformations obtained via MCMC are used in every batch. Markov chain gradient descent (MCGD) is a variant of stochastic gradient descent (SGD) in which the samples are drawn from the trajectory of a Markov chain. We note that, while there have been prior Monte Carlo and MCMC based techniques for sampling protein conformations (Boomsma et al., 2013; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009), the overarching goals of the existing MCMC methods are significantly different from ours. The existing methods define a distribution over the conformations (based on the properties of the bonds, etc.) and then sample highly probable conformations from this distribution. Our method is much simpler: it only requires that the chain being used is ergodic, and it is invariant to the actual form of the distribution. Therefore, we devise a simple chain whose transitions are easy to sample and do not require computing conditionals of an energy based model. However, we also note that existing Markov chains can be used as drop-in replacements to sample conformations as part of our framework. Before specifying our MCMC procedure, we specify the invariance requirements for learning protein representations.
A symmetric-invariant function for proteins (per Definition 2.1), should be invariant to (i) transformations which change its atomic coordinates to any of its rigid backbone conformations in Cp (ii) transformations which change the atomic coordinates of all its atoms via rigid body transformations — group actions such as rotations/ translations (iii) a combination of the two above which transforms it into one of its viable conformations and is then further acted upon by rigid body transformations.
3 OBTAINING VIABLE CONFORMATIONS
Before we describe our MCMC procedure to sample atomic coordinates from the set of rigid backbone conformations, we need a way to sample from the local neighborhood of a conformation. A naive choice would be to allow arbitrary rotations about every non-root node, i.e., G_i = SO(3) ∀ i ≠ 1. We note, however, that in protein molecules not all transformations about a node would be allowed, due to steric repulsions between atoms as well as potential overlaps of atoms.
3.1 A SIMPLE PROTEIN CONFORMATION SAMPLING STRATEGY
Akin to Irbäck & Mohanty (2006), to efficiently sample a candidate conformation from Cp, we will follow a multi-step approach: First, we (i) construct a directed tree for every amino acid in the protein to capture the inherent flexibility in protein structures. Then, we (ii) leverage a directed forest of the protein (described next) to transform its atomic coordinates and check for viability.
Part 1. Directed forest construction. Using the input protein molecule, we construct a directed forest using the following three-step procedure:
1. Each amino acid in the protein gets its own directed tree. The atoms in the amino acid’s backbone form the base (root) nodes (i.e., there can be multiple base nodes in every amino acid). This is illustrated as node 1 in Figure 2’s tree. The set of base nodes of all the amino acids in the protein are jointly referred to as the base nodes of the protein (or equivalently of the directed forest).
2. The atoms adjacent to the amino acid’s backbone via a covalent bond become their immediate children in the tree. For example, the node SC1 forms the child of αC in the Figure 1.
3. The other covalent bonds of the children establish the grandchildren of the root, and so on, until all the atoms in the amino acid are a part of the tree. For example, SC2 and SC3 become the children of the node SC1 in the directed tree constructed for Figure 1.
Figure 2 is an illustration of a directed tree constructed with the above procedure, where node 1 is the root node, with nodes 2, 4 as its children, and so on. It is important to note that the above procedure ensures that, in the constructed tree, each (non-root) node has a single parent and may have arbitrarily many children. Further, cycles/rings which are present in the molecule are broken on the basis of bond length, i.e., larger bonds are chosen before smaller bonds. Ties between same-length bonds are broken arbitrarily. While multiple directed forests can be created for a given protein (since bonds of equal length are broken arbitrarily), in our implementation the forest for a given protein is fixed: it is created exactly once, in the preprocessing step, with ties broken deterministically based on the atom ordering in the “pdb file".
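To make the construction concrete, the following is a minimal sketch (not the paper's actual implementation) of how such a directed forest could be built with networkx, assuming the covalent bonds, bond lengths, backbone atom ids, and an atom-to-amino-acid map are available; all names are illustrative.

```python
import networkx as nx
from collections import deque

def build_directed_forest(covalent_graph, backbone_atoms, amino_acid_of):
    """Sketch: one directed tree per amino acid, rooted at its backbone atoms.

    covalent_graph : nx.Graph over atoms; edge attribute 'length' is the bond length
    backbone_atoms : iterable of backbone atom ids (the roots)
    amino_acid_of  : dict atom_id -> amino acid index (trees never cross amino acids)
    """
    forest = nx.DiGraph()
    forest.add_nodes_from(covalent_graph.nodes)
    visited = set(backbone_atoms)
    queue = deque(backbone_atoms)              # BFS frontier, starting from the roots
    while queue:
        parent = queue.popleft()
        # Process longer bonds first so that rings end up broken at the shorter bond.
        children = sorted(covalent_graph[parent],
                          key=lambda c: -covalent_graph[parent][c]["length"])
        for child in children:
            if child in visited or amino_acid_of[child] != amino_acid_of[parent]:
                continue                       # each non-root atom gets exactly one parent
            forest.add_edge(parent, child)
            visited.add(child)
            queue.append(child)
    return forest
```

Each amino acid's atoms thus form their own small tree(s) rooted at its backbone atoms, matching the structure used by the conformation sampling step described next.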
Next, we shall look at how the atomic coordinates are transformed using the directed tree.
Part 2. Conditional transformations of atomic coordinates. Let X_p be the available input atomic coordinates of the protein. To obtain a new candidate protein conformation (say X′_p ∈ R^{m×3}), we sample, uniformly, one of the directed trees from the forest. Next, we transform the atomic coordinates of the subset of atoms in this directed tree; this set is denoted by A ⊂ V. That is, in the candidate conformation, X′_p[i] = X_p[i], ∀ i ∈ V \ A — or equivalently t_p[i] = I, ∀ i ∈ V \ A. Starting with the root node, we use breadth first search to traverse the directed tree and update all the descendants of the node currently being considered. For the backbone atoms, we leave X′_p[i] = X_p[i] (or equivalently t_p[i] = I), i.e., the atomic coordinates of backbone atoms are unchanged. Next, we use pointed sets to describe the transformations applied to a given protein through a directed tree.
Definition 3.1 (Pointed sets). A pointed set is an ordered pair (X,x0), where X is a set and x0 ∈ X is called the basepoint.
For instance, let Mi = {Xp[j] : j ∈ Ni}, where Ni is the set containing i and all its descendants in its directed tree. Then, (Mi,Xp[i]) is a pointed set.
Let the parent of node i be denoted by ◦, and let G_i^{(g_◦ · X_p[◦])} (in practice, SO(3)) be the group from which actions g_i ∈ G_i^{(g_◦ · X_p[◦])} are sampled uniformly — actions that transform the positional coordinates of node i and its descendants about its parent in the directed tree, where g_◦ · X_p[◦] is the transformed atomic coordinate of the parent of node i (we use BFS, i.e., a top-down traversal). In the case of the root node (for example, node 1 in Figure 2), the actions are simply drawn from G_root (or G_1 in Figure 2).
The associated transformation of the atomic coordinates of the atoms in N_i is given by the mapping h_i : (M_i, X_p[i]) → (R^{|N_i|×3}, g_◦ · X_p[◦] + g_i · (X_p[i] − X_p[◦])), where the coordinates of node j ∈ N_i are updated as:
X′_p[j] ← g_◦ · X_p[◦] + g_i · (X_p[j] − X_p[◦])    (1)
If node j ≠ i, its coordinates X′_p[j] can be further updated as we traverse the tree. When G_i^{(g_◦ · X_p[◦])} = SO(3) ∀ i ∈ V, every node in the directed tree can be rotated about its parent, as shown in Figure 3 (Appendix A.4), which depicts the side chain of the amino acid shown in Figure 1.
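As a concrete illustration of Equation (1), the sketch below applies one such update for a node i and its descendants N_i, with the rotation drawn uniformly from SO(3); the helper names and the two-array convention (coordinates before the update vs. the candidate being built) are assumptions for illustration, not the paper's implementation.

```python
import networkx as nx
from scipy.spatial.transform import Rotation

def apply_eq1(tree, X_p, X_new, i, parent):
    """One application of Eq. (1): move node i and its descendants about i's parent.

    X_p   : (m, 3) numpy array of coordinates appearing on the right-hand side of Eq. (1)
    X_new : (m, 3) candidate coordinates being built; X_new[parent] (= g_o . X_p[o])
            is assumed to have been set already by the BFS traversal
    """
    g_i = Rotation.random().as_matrix()          # uniform sample from SO(3)
    N_i = nx.descendants(tree, i) | {i}          # node i together with all its descendants
    for j in N_i:
        # X'_p[j] <- g_o . X_p[o] + g_i . (X_p[j] - X_p[o])
        X_new[j] = X_new[parent] + g_i @ (X_p[j] - X_p[parent])
    return X_new
```

Because the BFS later visits the children of i as well, the coordinates of deeper nodes can be further updated, exactly as noted after Equation (1).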
Algorithm 1 Sampling a viable conformation of c_p
1: Input: Conformation c_p of protein p
2: Obtain the directed forest corresponding to the protein, with n directed trees, where n is the number of amino acids in the protein
3: repeat
4:   Sample y ∼ UNIF({1, 2, · · · , n}) and obtain the corresponding directed tree
5:   Traverse the directed tree via BFS and update the atomic coordinates via Equation (1) to obtain c′_p, a candidate conformation of c_p — where group actions are always sampled uniformly from SO(3)
6:   Check if c′_p is a valid protein conformation via protein structure validation tools
7: until c′_p is a valid conformation
8: Return: c′_p, a valid conformation of c_p
However, each candidate X′_p obtained via the above transformation process may not belong to C_p. To that end, protein structural validation tools may be used to check the validity of these candidates. MolProbity (Davis et al., 2007; Chen et al., 2010; Williams et al., 2018) is such a tool, which outputs scores corresponding to different metrics (such as the number of dihedral (torsion) angles outside the allowed threshold, the number of atom-atom clashes arising due to steric repulsions, etc.) which can be compared with those of the originally provided conformation X_p. Note that while MolProbity is not perfect and we may end up developing models invariant to more than just the viable conformations, alternative (more precise) tools, including those which allow for the evaluation of molecular force fields, can be used to check validity. Additionally, the goal of this work is not to use MolProbity or other protein structure validation tools as part of a generative model, and a more invariant model does not necessarily affect performance on unseen but valid protein test data.
Our algorithm to sample viable conformations is summarized in Algorithm 1. The repeat-loop proposes a conformation in the neighborhood of the current conformation c_p, which is accepted only when the proposed conformation c′_p is a valid conformation; as such, it is an acceptance-rejection sampling algorithm. We will see later that the actual sampling probability distribution is not required to be known by our procedure. Our MCMC procedure, defined next, only requires Algorithm 1 to satisfy certain conditions, which we argue it does.
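A minimal Python sketch of Algorithm 1 is given below, reusing the illustrative `apply_eq1` helper from the earlier sketch, assuming `directed_trees` holds the per-amino-acid trees of the forest and that `is_valid_conformation` wraps a structure validation tool such as MolProbity; none of these names correspond to the paper's actual code.

```python
import random
import networkx as nx

def sample_viable_conformation(directed_trees, X_p, is_valid_conformation):
    """Sketch of Algorithm 1: rejection-sample a viable conformation starting from X_p."""
    while True:
        tree = random.choice(directed_trees)      # step 4: pick one amino acid's tree uniformly
        X_new = X_p.copy()                        # backbone atoms keep their coordinates
        roots = [v for v in tree if tree.in_degree(v) == 0]
        # step 5: BFS over the tree, applying Eq. (1) at every non-root node
        for root in roots:
            for parent, child in nx.bfs_edges(tree, root):
                X_new = apply_eq1(tree, X_p, X_new, child, parent)
        # step 6: accept only if the candidate passes structure validation (e.g. MolProbity)
        if is_valid_conformation(X_new):
            return X_new
```

In practice the validation call dominates the cost of a single proposal, consistent with the per-minibatch MolProbity overhead discussed in the computational complexity section.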
3.2 SAMPLING CONFORMERS VIA MCMC
We now describe the MCMC procedure which, starting at any conformer configuration c_p^{(0)} ∈ C_p, will in steady state sample conformations according to a unique stationary distribution that depends on the protein p, where C_p is the set of all viable conformations of p as defined in Definition 2.2.
To that end we first define the transition kernel of the Markov chain. Algorithm 1 samples a conformation c′_p in the neighborhood of a given conformation c_p using a directed forest. The neighborhood N(c_p) of the conformation c_p is loosely defined as the set of all possible conformations c′_p which may result from running Algorithm 1 with c_p as input. We denote the probability distribution induced by Algorithm 1 as the transition kernel κ(c′_p | c_p), which has support N(c_p).
Definition 3.2 (Conformer Sampling Markov Chain (CSMC) Φ_p). We define the conformer sampling Markov chain Φ_p as a time-homogeneous Markov chain over the state space C_p with transition kernel κ as defined above, where C_p is the set of valid conformations associated with a given protein p as defined in Definition 2.2.
The consistency of our learning procedure does not depend on the precise definition of κ. However, our procedure relies on the fact that any conformer c′_p ∈ C_p can be sampled by κ in a finite number of steps, starting from a conformer c_p. We use this fact to show that the chain Φ_p converges to a unique stationary distribution regardless of the initial conformer. (All proofs are in Appendix A.5.)
Proposition 3.3. Given the CSMC Φ_p from Definition 3.2, whose transitions are governed by κ as implicitly defined by Algorithm 1, for any pair of conformers c_p, c′_p ∈ C_p there exists τ_p < ∞, independent of c_p, such that P^{τ_p}_{Φ_p}(c_p, c′_p) > 0, where P^{τ_p}_{Φ_p} is the τ_p-step transition probability.
Next, we show the existence of a unique steady state distribution of our Markov chain.
Proposition 3.4. The CSMC Φ_p defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution π_p such that for all c_p ∈ C_p, ‖P^n_{Φ_p}(c_p, ·) − π_p(·)‖ ≤ C R^n, where C < ∞ and R < 1 are constants that depend on Φ_p, P^n_{Φ_p} is the n-step transition probability, and ‖ · ‖ is the ℓ_1 norm.
Given that we now have the ability to draw samples from a Markov chain which achieves a unique stationary distribution πp on Cp, we shall leverage this next, in our learning framework to learn conformer invariant representations of proteins.
4 LEARNING FRAMEWORK
We shall employ a learning strategy, where we use the viable conformations obtained via the Markov chain to learn a function which outputs conformer invariant representations.
Let D = {(x_j, y_j)}_{j=1}^N be the input data (where we have a single conformer for every protein in the dataset). We shall consider a single data point (x_j, y_j) henceforth, where x_j = (V, X_s, X_p) is the input protein. We consider a supervised learning setting where y_j is the associated target. We consider both classification and regression tasks.
Let C_j = {c_j^i} be the set of viable conformations of protein x_j. We only consider viable conformations which differ only in the atomic coordinate matrix X_p. For a given protein x_j, we shall use x_j^i to denote protein x_j with the X_p of the original protein replaced by c_j^i, and use S_j = {x_j^i} to denote the set of all viable x_j^i, i.e., the state space.
Let f : Ω → R^d, d > 0, be any function with learnable parameters θ^f (e.g., any neural network model such as a 3D CNN, GNN, transformer, LSTM, etc.) which takes a protein (in the form (V, X_s, X_p)) as input and outputs protein representations. The function f need not necessarily output conformer invariant representations of the protein x_j. A simple way to obtain conformer invariant representations of protein x_j (apart from using trivial functions such as a constant function or a function independent of X_p) is to compute the expected value of f over all viable conformations. We shall use f̄ to denote a function which outputs conformer invariant representations of a protein:
f̄(x_j; θ^f) = E_{X∼π_j}[f(X; θ^f)]    (2)
where π_j(·) is the steady state distribution of the Markov chain associated with the protein x_j. As such, f̄(x_j; θ^f) = f̄(x′_j; θ^f) for any viable protein conformation x′_j ∈ S_j. Subsequently, to learn an optimal f̄ (which we denote by f̄^⋆), we wish to minimize the loss, defined as follows:
L(D; θ^f, θ^ρ) = (1/N) Σ_{j=1}^N L̂(y_j, ρ(f̄(x_j; θ^f); θ^ρ))
             = lim_{k→∞} (1/N) Σ_{j=1}^N L̂(y_j, ρ((1/k) Σ_{i=1}^k f(x_j^i; θ^f); θ^ρ))    (3)
where L̂ is a convex loss function (e.g., the cross entropy loss), ρ is some differentiable function with parameters θ^ρ (in practice, an MLP), and lim_{k→∞} (1/k) Σ_{i=1}^k f(x_j^i; θ^f) can be employed as an asymptotically unbiased and consistent estimate of f̄, where x_j^i is the i-th sample from the Markov chain corresponding to the protein x_j.
Since Equation (3) is computationally intractable, we employ a surrogate for the loss given by:
J(D; θ^f, θ^ρ) = lim_{k→∞} (1/N) Σ_{j=1}^N (1/k) Σ_{i=1}^k L̂(y_j, ρ(f(x_j^i; θ^f); θ^ρ))    (4)
Observe that in Equation (4), the expectation over the conformations is now outside the L̂ and ρ functions while still remaining conformation invariant (however, the optimal parameters corresponding to minimizing J are different from minimizing L). Following (Murphy et al., 2019b;a), we note that, when ρ is the identity function, Equation (4) serves as an upper bound for Equation (3) via Jensen’s inequality. However, learning representations by averaging over multiple conformations (in training) is still computationally expensive (back-prop and high variance issues). Next, we show how MCGD (Sun et al., 2018) can be leveraged to make this tractable (more details in Appendix A.10).
For simplicity, we next show convergence properties in the full-batch setting. Note that this procedure and the proofs are equally applicable in the mini-batch setting with minor modifications.
Full-batch training: We consider the data as a single batch B = {(x1, y1), ..., (xj , yj), ..., (xN , yN )} and for a data point (xj , yj), denote the corresponding steady state distributions of the corresponding Markov chains using πj and the state space as Sj .
We denote all learnable parameters of ρ ◦ f as θ ≡ (θ^f, θ^ρ) and consider θ ∈ Θ ⊆ R^m, m > 0, such that the function ρ ◦ f may be non-convex for some values of θ. We consider a sequence of step sizes (γ_k)_{k=0}^∞ which satisfy:
Σ_k γ_k = +∞,   Σ_k ln k · γ_k^2 < +∞    (5)
and follow a gradient scheme where parameters are updated as:
θ_k = θ_{k−1} − γ_k Z_k,   k > 0    (6)
where Z_k = (1/N) Σ_{j=1}^N ∇_θ L̂(y_j, ρ(f(x_j^{k−1}; θ^f_{k−1}); θ^ρ_{k−1})) and x_j^{k−1} is the (k−1)-th sample from the Markov chain corresponding to the data point (x_j, y_j). We employ the Markov chain gradient descent procedure (Sun et al., 2018), where the loss in epoch k, k > 0, of training is obtained by
Ĵ_k = (1/N) Σ_{j=1}^N L̂(y_j, ρ(f(x_j^{k−1}; θ^f_{k−1}); θ^ρ_{k−1})).
This procedure minimizes the surrogate loss given in Equation (4) when k is sufficiently large that the Markov chain has converged to its unique steady state distribution.
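The resulting training loop is simple: each protein keeps its own chain state, and each epoch advances that chain by one step before computing the gradient. Below is a schematic PyTorch-style sketch under stated assumptions — `sample_viable_conformation` is the Algorithm 1 helper sketched earlier, `loss_fn`, `is_valid_conformation`, and the `x.with_coordinates(...)` accessor are illustrative names, and the step sizes follow a simple schedule satisfying (5) — this is not the paper's actual implementation.

```python
import torch

def mcgd_train(model, dataset, loss_fn, is_valid_conformation, epochs, gamma0=1e-3):
    """Schematic MCGD loop: one Markov-chain step per protein per epoch."""
    # Each protein carries its own chain state, initialised at its pdb conformer.
    chain_state = {j: x.X_p.copy() for j, (x, _) in enumerate(dataset)}
    params = list(model.parameters())
    for k in range(1, epochs + 1):
        gamma_k = gamma0 / k                      # e.g. gamma_k = gamma_0 / k satisfies Eq. (5)
        losses = []
        for j, (x, y) in enumerate(dataset):
            # Advance protein j's chain by one step (Algorithm 1).
            chain_state[j] = sample_viable_conformation(
                x.directed_trees, chain_state[j], is_valid_conformation)
            pred = model(x.with_coordinates(chain_state[j]))
            losses.append(loss_fn(pred, y))
        J_k = torch.stack(losses).mean()          # epoch-k loss, cf. the expression above
        grads = torch.autograd.grad(J_k, params)  # Z_k in Eq. (6) (full-batch setting)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= gamma_k * g                  # theta_k = theta_{k-1} - gamma_k * Z_k
    return model
```

The same loop carries over to the mini-batch setting by replacing the full sum over j with a sum over the proteins in the current batch.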
Next, we list the set of assumptions we make to ensure convergence of our conformer invariant MCGD procedure.
Assumption 4.1. We make the following assumptions:
1. For any θ ∈ Θ and x_j^k ∈ S_j, the function f is differentiable, ∀ j.
2. sup_{θ∈Θ, x_j^k∈S_j} {‖∇_θ ρ ◦ f(x_j^k)‖} < +∞, i.e., the gradients are bounded.
3. ∀ x_j^k ∈ S_j, ∀ θ_1, θ_2 ∈ Θ: ‖∇_{θ_1} ρ ◦ f(x_j^k) − ∇_{θ_2} ρ ◦ f(x_j^k)‖ < L ‖θ_1 − θ_2‖ for some L ≥ 0, i.e., the gradients are L-Lipschitz.
4. E_{x_j^k ∼ π_j}[∇_θ ρ ◦ f(x_j^k)] = ∇_θ E_{x_j^k ∼ π_j}[ρ ◦ f(x_j^k)].
Next, we formally state the convergence of our optimization procedure to optimal parameters θ^⋆ which yield conformer invariant protein representations.
Proposition 4.2. Let the step sizes satisfy (5), let the function parameters θ be updated as in (6), and let Assumption 4.1 hold; then the MCGD optimization converges almost surely to the optimal θ.
The use of Markov chain gradient descent procedure to optimize the conformer invariant learning procedure optimizes the objective J in Equation (4), and thus has the following implication on how outputs should be calculated at inference time:
Inference time: We estimate f̄ using an empirical average of f evaluated over conformations (in practice, we average the final-layer [before softmax] representations) visited by the Markov chain:
f̂_k(x_j; θ^f) = (1/k) Σ_{i=1}^k f(x_j^{(i)}; θ^f),    (7)
where x_j^{(i)} is the i-th state visited by the Markov chain started at x_j. Since the Markov chain has a unique steady state distribution, f̂_k(x_j; θ^f) is both asymptotically unbiased and consistent. That is,
lim_{k→∞} f̂_k(x_j; θ^f) = f̄(x_j; θ^f)  almost surely.
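A sketch of this inference-time estimator, again reusing the illustrative `sample_viable_conformation` helper and an assumed `model.encode(...)` method that returns the final-layer representation:

```python
import torch

@torch.no_grad()
def conformer_invariant_representation(model, x, is_valid_conformation, k=32):
    """Estimate f-bar(x) as in Eq. (7) by averaging f over k states of the conformer chain."""
    coords = x.X_p.copy()                         # start the chain at the given conformer
    reps = []
    for _ in range(k):
        coords = sample_viable_conformation(
            x.directed_trees, coords, is_valid_conformation)
        reps.append(model.encode(x.with_coordinates(coords)))   # final-layer representation
    return torch.stack(reps).mean(dim=0)          # empirical average over the chain
```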
5 RESULTS
We evaluate our proposed augmentation procedure on multiple different tasks from the ATOM3D dataset (Townshend et al., 2020). Results are provided in Table 1.
TASKS ON ATOM3D DATASETS
ATOM3D (Townshend et al., 2020) is a unified collection of datasets concerning the 3D structure of proteins. These datasets are specifically designed to provide a benchmark for ML methods which operate on 3D molecular structure, and represent a variety of important structural, functional, and engineering tasks. As part of our evaluation, we perform the (a) Protein Structure Ranking (PSR) (b) Ligand Efficacy Prediction (LEP) (c) Protein Mutation Stability Prediction (MSP) (d) Ligand Binding Affinity (LBA). We describe each of the tasks in detail in Appendix A.7. For all the datasets and tasks we report the same metrics as proposed by the authors in Townshend et al. (2020).
MODEL, BASELINES AND DISCUSSION
We endow the vector-gated GVP-GNN model (Jing et al., 2021), a GNN using GCN layers (Kipf & Welling, 2016), and the E(n) GNN (Satorras et al., 2021) with conditional transformations from our proposed MCMC method. It is important to note that when the positions of the atoms are altered, the protein graphs which are input to the base encoders are updated accordingly in every epoch. As an ablation study, we also provide in the Appendix a strategy where all the transformations are created from the “gold standard" X_p rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training.
Looking at Table 1, we note that models augmented with our proposed conformer invariance framework in general outperform the baseline models on multiple tasks for which conformer invariance is a requirement. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here, where a GNN which does not explicitly incorporate any atomic positional information outperforms all other models that do have information about atomic coordinates.
Additional results and ablation studies are presented in Appendix A.9.
COMPUTATIONAL COMPLEXITY & SCALABILITY:
The directed forest for every protein is computed only once, as a preprocessing step (a fixed ≈ 5-10 minutes per dataset). At every epoch, for a given protein, we select only one among all its constituent amino acids and perform a sequence of 3×3 matrix multiplications to obtain a new conformer. If k denotes the number of atoms in the side chain of an amino acid, and the height of a directed tree is on the order of log(k), the computational complexity of obtaining a conformation is O(k log k) (where k ≈ 15 on average in our datasets), and this can be run in parallel for all proteins in a mini-batch. MolProbity (also run in parallel; it currently runs externally on a web server via a command line call) adds a 2s delay per minibatch (we could reduce this to milliseconds if we integrated MolProbity into our code). The rejection rate from MolProbity is also very small (less than 1 reject on average across 100 MolProbity calls). As an example, training (100 epochs) GVP-GNN on MSP requires ≈ 45 min, and LBA requires ≈ 34 min; in both cases our method requires an additional 4 min of overhead. A complete training time table (on an Nvidia Tesla V100 GPU) for all models & datasets can be found in Appendix A.11.
6 RELATED WORK
Here, we present a high level summary of works related to ours. A more detailed comparison to the works mentioned below (and others) is presented in Appendix A.6.
Group Equivariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous symmetries of elements (e.g. images) by introducing group theoretic constraints as inductive biases in the neural network. While group equivariant neural network capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and applications of group equivariant neural networks.
Graph Neural Networks: Graph Neural Networks (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs.
Group Equivariant Graph Neural Networks: These networks combine continuous symmetries (Lie groups) with permutation equivariance and have found applications with resounding success on small molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021), which exhibit rigid body characteristics. Employing (Farina & Slade, 2021) would make the neural network excessively invariant and allow the protein to be more flexible (allowing unviable conformations) than it truly is. More recently, such networks have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There is a lot of prior work (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) which samples protein conformations by defining or inheriting a distribution over the conformations (largely based on the properties of the bonds, etc.). These methods can seamlessly be plugged into our framework as long as the MCMC is ergodic.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021).
Generative Models: Conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also gained attention recently with the goal to predict 3d structure of molecules given their 2d structure - our objective in this work is very different, but can be used to improve predictions of the aforementioned models.
7 CONCLUSIONS
This work addresses the limitations of current protein representation learning methods which are unable to learn conformer invariant representations — and hence unable to capture the inherent flexibility present in protein side chains pertinent to many downstream tasks. To address these, we introduced conditional transformations to capture protein structure, while respecting the restrictions posed by constraints on dihedral (torsion) angles and steric repulsions between atoms. Subsequently, we introduced a Markov chain Monte Carlo based framework to learn representations that are invariant to these conditional transformations.
A APPENDIX
A.1 COMMONLY USED ACRONYMS IN THE PAPER
1. G - group
2. Sx - Set of transformations unique to element x
3. tx - Transformation action which transforms the element x
4. m - Number of atoms in the protein under consideration
5. n - Number of amino acids in the protein under consideration
6. Xs - Matrix of Scalar features associated with atoms in the protein p
7. Xp - Matrix of atomic coordinates associated with atoms in the protein p
8. Cp - Set of conformations of protein p
9. SO(3) - Group of all rotations in 3D space
A.2 GROUP THEORY PRELIMINARIES
Definition A.1 (Group). A group is a setG equipped with a binary operation · : G×G→ G obeying the following axioms:
• for all g1, g2 ∈ G, g1 · g2 ∈ G (closure).
• for all g1, g2, g3 ∈ G, g1 · (g2 · g3) = (g1 · g2) · g3 (associativity).
• there is a unique e ∈ G such that e · g = g · e = g for all g ∈ G (identity).
• for all g ∈ G there exists g−1 ∈ G such that g · g−1 = g−1 · g = e (inverse). Definition A.2 (Group invariant functions). Let G be a group acting on vector space V . We say that a function f : V → R is G-invariant if f(g · x) = f(x) ∀x ∈ V, g ∈ G. Definition A.3 ((Left) Group Action). For a group G with identity element e, and X is a set, a (left) group action α of G on X is a function α : G×X → X that satisfies the following two conditions:
1. Identity: α(e, x) = x, ∀x ∈ X
2. Compatibility: α(g, α(h, x)) = α(gh, x)
We will use a short hand of g · x for α(g, x) when the action being considered is clear from context.
A.3 EXAMPLE DEMONSTRATING NON GROUP STRUCTURE OF A SET OF PROTEIN CONFORMATIONS
Consider X to be a protein with m atoms and n amino acids, and let the set of viable conformations of X be {X_1, X_2, . . . , X_i, . . . , X_p}. Let X_1 be the conformation available in our dataset.
In each step of the transition, only a few atoms (≪ m) are subjected to an action from the SO(3) group, namely the atoms in a single amino acid of the protein — where the protein is in general made up of multiple hundreds of amino acids (n). The positions of all other atoms (outside that amino acid) remain unaltered. So, in the R^{m×3×3} matrix, most of the 3×3 entries are the 3-dimensional identity matrix.
Let T_1^i ∈ R^{m×3×3}, where i ∈ {1, 2, . . . , p}, be the transformation which yields conformation X_i from X_1.
Now consider T_1^2 (subscript 1 since we start from conformation X_1), which takes X_1 to X_2, and T_1^3, which takes X_1 to X_3, where T_1^2 and T_1^3 do not act on the same amino acid of the protein. Now consider the case where performing T_1^2 and T_1^3 (or the other way around) sequentially would result in atoms in two different amino acids overlapping (or coming too close to each other, causing steric repulsion), therefore resulting in a non-viable conformation; i.e., a composition of T_1^2 and T_1^3 acting on X_1 would not be present in the set of all allowed transformations T_1 = {T_1^1, . . . , T_1^i, . . . , T_1^p}. Therefore the set T_1 is not closed and does not form a group.
Also, for two different conformations X_1 and X_2, their allowed transformation sets T_1, T_2 will not be identical (in the above example T_1^3 ∉ T_2). It is also easy to construct a case where ⋃_{i=1}^p T_i is not a group, and actions from this set acting on any given X_i will not necessarily result in a viable/valid conformation.
A.4 FLEXIBILITY ALLOWED BY OUR PROPOSED MODEL
Our proposed model allows every node in the directed tree to be rotated about its parents. For example, for the side chain shown in Figure 1 (Main Paper), the allowed flexibility is Figure 3.
A.5 PROOFS OF PROPOSITIONS
First, we restate and prove Proposition 3.3.
Proposition A.4. Given the CSMC Φ_p from Definition 3.2, whose transitions are governed by κ as implicitly defined by Algorithm 1 and described above: for any pair of conformers c_p, c′_p ∈ C_p, there exists τ_p < ∞, independent of c_p, such that P^{τ_p}_{Φ_p}(c_p, c′_p) > 0, where P^{τ_p}_{Φ_p} is the τ_p-step transition probability.
Proof. Proof by construction. We prove the proposition by showing that one can construct a path (c_p^{(1)} = c_p, . . . , c_p^{(t)} = c′_p) such that c_p^{(i)} ∈ C_p and κ(c_p^{(i+1)} | c_p^{(i)}) > 0 for all 0 < i < t, with t ≤ τ_p. The trivial case where c_p ≡ c′_p is proved since every group contains the identity element — sampling the identity element for every node in the directed tree yields the same conformation. Since we consider only conformations that leave the backbone unchanged, in the non-trivial case a maximum of m − 4n atoms can differ in position between any two conformations, where m is the number of atoms in the protein and n is the number of amino acids in the protein. Both m, n are finite and we are dealing with continuous conformers (and continuous group actions about every node — groups are closed under their associated binary action, and SO(3) is path connected). So we can traverse between conformers (until we reach the desired conformer) sequentially in a finite number of steps, by using the constructed directed forest: selecting an amino acid (which does not violate viability), fixing the positions of all other amino acids in the protein, and rotating the side chain atoms in a single conformer to the final desired state. While this process may result in some side chains
being visited multiple times (due to viability constraints), considering continuous conformers and the SO(3) group (which is path connected) ensures we will never reach a state of deadlock. The second condition is satisfied because every group action has an inverse and we only use transformations from SO(3) for every node in the directed trees.
Next, we restate and prove Proposition 3.4.
Proposition A.5. The CSMC Φ_p defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution π_p such that for all c_p ∈ C_p, ‖P^n_{Φ_p}(c_p, ·) − π_p(·)‖ ≤ C R^n, where C < ∞ and R < 1 are constants that depend on Φ_p, P^n_{Φ_p} is the n-step transition probability, and ‖ · ‖ is the ℓ_1 norm.
Proof. By Proposition 3.3, Φ_p satisfies Doeblin’s condition as defined on page 396 of Meyn & Tweedie (2012), which states that for c_p, c′_p ∈ C_p, P^{τ_p}_{Φ_p}(c_p, c′_p) > ε for some ε > 0.¹ The uniform ergodicity then holds due to Theorems 16.2.3 and 16.2.1 from Meyn & Tweedie (2012).
Next, we restate and prove Proposition 4.2. We also restate the required assumptions for the proposition. Assumption A.6. We make the following assumptions:
1. For any θ ∈ Θ and x_j^k ∈ S_j, the function f is differentiable, ∀ j.
2. sup_{θ∈Θ, x_j^k∈S_j} {‖∇_θ ρ ◦ f(x_j^k)‖} < +∞, i.e., the gradients are bounded.
3. ∀ x_j^k ∈ S_j, ∀ θ_1, θ_2 ∈ Θ: ‖∇_{θ_1} ρ ◦ f(x_j^k) − ∇_{θ_2} ρ ◦ f(x_j^k)‖ < L ‖θ_1 − θ_2‖ for some L ≥ 0, i.e., the gradients are L-Lipschitz.
4. E_{x_j^k ∼ π_j}[∇_θ ρ ◦ f(x_j^k)] = ∇_θ E_{x_j^k ∼ π_j}[ρ ◦ f(x_j^k)].
Proposition A.7. Let the step sizes satisfy (5) and the function parameters θ be updated as (6) and Assumption 4.1 hold, then the MCGD optimization enjoys properties of almost sure convergence to the optimal θ.
Proof. Given that each protein has an associated time homogeneous Markov Chain with a unique steady state, independent of other proteins, the set of proteins in a mini-batch also form a Markov chain with a unique steady state. We then leverage Corollary 2 (Page 12) of Sun et al. (2018) along with Proposition 4.2 to ensure almost sure convergence to the optimal θ.
A.6 EXTENDED RELATED WORK
Here, we elaborate on the related works section in Section 6
Group Equivariant and Invariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous group symmetries of elements (e.g. images, point clouds). In this work, we learn representations which are invariant not just to groups but to input dependent sets of transformations. To the best of our knowledge, our work is the first to consider conditional (input dependent) invariances. Prior works on learning invariant models have leveraged Monte Carlo procedures (Finzi et al., 2020; Murphy et al., 2019b) to learn to be invariant to transformations of the input. Alternatively, our work constructs a Markov chain with a unique steady state and leverages MCGD (Sun et al., 2018) to make it computationally tractable. Secondly, the aforementioned approaches are limited to being invariant to transformations from a specified Lie group with a surjective exponential map or a permutation group, while our work is not limited to groups. Thirdly, our theory ensures that every example/object in the dataset can have a different set of input dependent, conditional transformations
¹We note that this is a simplified version of the actual statement, which is defined on the σ-algebra over C_p, denoted σ(C_p). Our proof holds when c′_p ∈ σ(C_p).
- and in fact can be seen as a generalization of the above works. In fact, the ablation study that we perform (Results provided in Appendix A.9 - Table 4) uses a Monte Carlo estimator and our MCMC procedure yields better performance than the Monte Carlo estimator. While group equivariant neural network capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). Gauge symmetries require the manifold to be smooth – which is not the case for proteins. Moreover, different proteins have different sets of viable conformations, which would not be able to captured by standard gauge equivariant neural networks. (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and a wide variety of applications of group equivariant neural networks. A more comprehensive theoretical analysis of input dependent conditionally invariant neural networks is planned for future work.
Graph Neural Networks: Graph Neural Networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (Murphy et al., 2019a) (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs. Both molecular graphs (both small molecules and macromolecules) as well as graphs based on intra molecular distances have been used with GNNs, to achieve state of the art for many molecular datasets and tasks (Hu et al., 2020; Morris et al., 2020). Here, we leverage three graph based neural networks as baselines for our model.
Group Equivariant Graph Neural Networks: Group Equivariant GNNs combine continuous symmetries (lie groups such as SE(3), E(3), SO(3)) with permutation equivariances and has found applications with resounding success on small and large molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021). However, the methods are only able to capture rigid body characteristics of molecules and while capturing the above lie group symmetries is also able to capture input depedent transformations. Employing (Farina & Slade, 2021), would make the neural network excessively invariant and allow the protein to be more flexible (allows unviable conformations) than it truly is. More recently, they have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) which samples protein conformations by internal coordinate transformations. However, the goals of the existing MCMC methods are significantly different from ours. The existing methods define/inherit a distribution over the conformations (largely based on the properties of the bonds, etc.) and then aim to sample highly probable conformations from this distribution. Our method is much simpler and only requires that the chain being used is ergodic; it is invariant to the actual form of the distribution. We note that the existing Markov chains can seamlessly be used as drop-in replacements to sample conformations as part of our framework as long as the MCMC is ergodic. We consider studying the impact of different MCMC methods (which sample from different distributions) and their influence on the performance in different tasks as important future work.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021). In this work, while we use GNNs and Group Invariant GNNs as a part of the model, we note that we can equally replace them with CNNs, LSTMs, Transformers and other models used for proteins without any change in the underlying theory.
Tree Construction Method for Molecules: Jin et al. (2018), in their work JTVAE, propose
- a procedure to construct tree for molecules. We note that JTVAE is a more general purpose approach for constructing trees from molecules and may not be the best suited for proteins where the backbone atoms are largely observed (experimentally) to be rigid.
Generative Models: Protein conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also recently gained attention; their goal is to predict the 3D structure of molecules given the input 2D structure. Our objective in this work is completely different, but our approach can be used to improve predictions of the aforementioned models. While our model is explicitly not a generative model, our framework can be leveraged towards generative modeling with the help of tools such as noise outsourcing (Chapter 6) (Kallenberg, 2006), and we see this as important future work.
Non-Rigid Body Dynamics: Non-Rigid Body Dynamics of objects has long been studied both by physicists and in the fields of computer vision to understand and capture the geometric deformations of objects (Taylor et al., 2010; Masci et al., 2015). To the best of our knowledge, there exists no prior work in deep learning which captures the non rigidity of protein molecules (which cannot be modeled as Ck manifolds). As important future work, we would like to study the impact of leveraging input dependent conditional invariances for modeling other geometric objects (which are Ck manifolds) as well as images and robotics (e.g. the symmetries for a humanoid is different from that of a tractor).
Unrelated Work with similar names: Non classical and conditional symmetries of solutions to ODE’s, PDE’s have been discussed in the past - these works while they share a similar title, have very little in common as we are not dealing with jet spaces or manifolds (Joseph, 1968; Fushchich & Zhdanov, 1992; Olver & Vorob’ev, 1996).
A.7 DETAILS ABOUT DATASETS AND TASKS
In this section, we describe briefly each of the datasets (and their associated tasks). Information about the splits and license information is provided in Table 2.
PSR: This task utilizes data from the structural models submitted to the Critical Assessment of Structure Prediction competition (CASP - Kryshtafovych et al. (2019) - a blind protein structure prediction competition) to rank protein structures from the experimentally determined structure of the protein. The problem is formulated as a regression task, where we predict the global distance test of each structural model from the experimentally determined structure. As prescribed by the dataset authors, the dataset is split by competition years.
MSP: The goal of this task is to identify mutations that stabilize a protein’s interactions, which forms an important step towards the design of new proteins. This task is significant because experimental techniques for probing mutations are labor-intensive. ATOM3D (Townshend et al., 2020) derives this dataset by collecting single-point mutations from the SKEMPI database (Jankauskaitė et al., 2019) and modeling each mutation into the structure to produce a mutated structure. The learning problem is then formulated as a binary classification task where the goal is to predict whether the stability of the complex increases as a result of the mutation. We employ the same splits as suggested by the dataset authors, wherein the protein complexes are split such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LBA: This task deals with the problem of predicting the strength (affinity) of a candidate drug molecule’s interaction with a target protein. The dataset is constructed using the PDBBind database (Wang et al., 2004; Liu et al., 2015), a curated database containing protein-ligand complexes from the PDB and their corresponding binding strengths (affinities). The task is formulated as a regression task with the goal to predict pK = − log10(K), where K is the binding affinity in Molar units. The splits are created such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LEP: The shape of a protein impacts whether the protein is in an on or off state, which plays an important role in predicting the shape a protein will favor during drug design. This dataset is obtained by curating proteins from several families with both “active" and “inactive" state structures, and modeling in 527 small molecules with known activating or inactivating function using the program Glide (Friesner
et al., 2004). The task is formulated as a binary classification task where the goal is to predict whether a molecule bound to the structures will be an activator of the protein’s function or not. We use the same split as recommended by the ATOM3D authors.
A.8 EXPERIMENTAL SETUP
The code for the baseline models (GVP-GNN (Jing et al., 2021), E(N) GNN (Satorras et al., 2021) and GNN(GCN) (Townshend et al., 2020; Kipf & Welling, 2016)) were used as provided by the authors (licenses as dictated by the code authors). Our conformer invariance implementation is in PyTorch using Python 3.8. We also leverage networkx to create the directed forests. For all three models we tune the hyperparameters – learning rate (∈ {0.1, 0.01, 0.001, 0.0001}) and mini batch size (∈ {4, 8, 16, 32, 64}). For the E(n) GNN model - since there have been no previous models for the aforementioned protein tasks - we also tune the number of GNN layers (∈ {4, 5, 6, 7}) as a hyper parameter. The experiments were all performed on Tesla V100 GPU’s. For more details refer to the code provided.
A.9 ADDITIONAL RESULTS
In Table 3, we present results on additional datasets and tasks beyond those presented in the main paper. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here, where a GNN which does not incorporate any rigid body transformations outperforms all other models.
In Table 4, we present an ablation study, where we provide a strategy where all the transformations are created from “gold standard" Xp rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training. From the table, we note that the MCMC method tends to outperform the non MCMC method, which can be attributed to the guarantees it provides to the learning framework.
A.10 CASE AGAINST USING AVERAGE REPRESENTATIONS IN TRAINING AND FOR REQUIRING CONFORMER INVARIANT REPRESENTATIONS
The case for requiring a single conformer invariant representation can be seen from the fact that different protein conformations, say X_1, X_2, of the same protein X may be seen during the train and test phases. Without conformer invariant representations, this may lead to different representations, and therefore different predictions, for the same protein (bad).
One may argue that an approximately conformer invariant neural network can be learned by averaging representations of multiple Monte Carlo conformations during training. However, this is computationally expensive, as it would require back-propagating and updating parameters for multiple conformations of every protein in every epoch — which is bad, as this leads to an exponential overhead. On the other hand, our procedure with MCGD uses, in every epoch, only one conformation from our Markov chain and still ensures convergence to optimal parameters. It is important to note that during inference we do average representations over multiple conformations (from our Markov chain) to output conformer invariant representations — which, due to the MCGD training procedure and the unique steady state of our Markov chain, ensures conformer invariant representations are achieved — yielding better performance than the current state of the art on multiple datasets and tasks. As stated in the main paper, our framework can work well with other Markov chains as well. We chose the proposed chain purely because it is simpler than existing methods (in that it does not require a distribution over conformations to be assumed) and as such is easier to sample from.
A.11 TRAINING TIME
In Table 5, we present the training time (for 100 epochs) for each of the baseline models as well as for models with our proposed conformer invariance framework addition. From the table, we note that, on average, the increase in training time over 100 epochs is ≈ 3-4 minutes, which is negligible in comparison to the training time without the proposed framework addition.
1. What is the main contribution of the paper on protein modeling?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its motivation and limitation?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What minor comments or suggestions does the reviewer have regarding the paper's presentation and technical details?
Summary Of The Paper
The manuscript discusses the need to consider a broader class of invariances in protein modelling than the rotation and translational invariances typically considered. The paper introduces the concept of conditional invariance, defined by transformations that modify the sidechain degrees of freedom of a protein, while leaving the backbone degrees of freedom unaltered. The authors propose using a Markov chain Monte Carlo procedure to generate alternative sidechain conformations during training such that the internal representation in a supervised learning task becomes invariant to the sidechain conformation of the input structure.
Strengths And Weaknesses
The paper does a good job of formally describing the problem and the details of the proposed algorithm. The manuscript seems technically sound. However, I struggle with the motivation for the method. The authors introduce the conditional invariances to sidechain conformations as a fundamental property on par with rotational and translation invariances of the global structure of the molecule, but this equivalence seems shaky. The standard desire for invariance to rotation and translation of a molecular structure has a clear motivation: we wish any result to be independent on the arbitrarily chosen coordinate system of our 3D coordinate system. Or in other words, the protein structure that we care about has 6 degrees of freedom fewer than when we parameterize it using Cartesian coordinates of all its atoms. From a physical perspective, if we are considering intrinsic properties of a protein, for instance its stability, the position and rotation of a protein is fundamentally irrelevant. All physical interactions between atoms in the molecule are independent of the global orientation and translation.
In contrast, the invariance towards sidechain conformations is less clear cut. The authors state that "for most proteins, regardless of their side chain conformation (as long as viable) under consideration – their protein fold class/ other scalar properties remain the same, their mutation (in)stability remains unaltered, protein ligand binding affinity (apart from changes at the ligand binding site) remain the same, etc.". While it is true that the fold class stays the same because it is typically defined only based on the backbone conformation of a protein, the remaining properties (stability, binding affinity), are certainly dependent on side-chain conformations. The keyword here seems to be "as long as viable", which implicitly establishes the equivalence of different structures. But what is deemed "viable" will depend on the physical quantity we wish to predict, and the resolution at which we are modelling the system. The desired level of "invariance" will thus depend on the modelling task. For some tasks, the exact sidechain orientations will not be important - for other tasks, they will be critical.
Rather than consider the set of "equivalent" sidechain conformations, the standard approach in statistical physics would be to consider the distribution over sidechain structures. Any property of interest (e.g. prediction of a downstream task) could then be considered an expectation under this distribution. For instance, if we observe the physical system at some temperature in a fixed volume, the Boltzmann distribution will give us such a distribution. In the case of this manuscript, the authors are interested in keeping the backbone degrees of freedom fixed - and would thus consider this conditional distribution of sidechain given backbone. The above would be valid for a molecular forcefield parameterizing the energy of a system - but one could presumably make a similar argument for MolProbity: that it induces a probability distribution over valid sidechain conformations, over which you then calculate the expectation. As far as I can see, this would lead to a similar procedure as the one the authors are proposing, but where you use the validity scores of MolProbity as an energy rather than considering all sidechain conformations as equivalent. This approach would be easier to defend, because it does not rely on exact equivalences between structural states.
Alternatively, the method could be sold purely as a principled way to do data augmentation, for increased performance in a downstream task.
I would suggest that the authors rewrite "1. introduction" and parts of the "2 Conditional Invariances for Proteins" so that the rotational and translation equivariances of the rigid body case are not presented as equivalent to the conditional invariances that they discuss here. As sketched out above, it would probably be most fruitful if the authors get rid of the concept of "conditional invariance" altogether, because these invariances are task specific and only approximate - and instead present the results in terms of expectations over distributions over sidechain conformations. If the authors stick with "conditional invariance", they should make the limitations of this concept clear - and clarify that the underlying goal is to do meaningful data augmentation in protein systems, and perhaps let that be the driving motivation in the paper. In short, my main problem with the paper is that the conditional invariances are elevated to something fundamental about the physical system - when in reality it is merely a consequence of the fact that certain details are irrelevant for a particular downstream task.
Finally, for completeness, small fluctuations of the backbone structure will have similar properties as the sidechain "invariances" - that they will in many cases be "equivalent" with respect to a downstream prediction task. It would make sense that the authors mention that they make the simplifying choice of considering only the sidechain conformers in this paper, ignoring this other source of fluctuation.
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is well written with clear figures to support the story.
Quality
The manuscript seems technically sound. The method is supported with proofs in the appendix.
The result section should include a baseline where the MCMC is done offline, rather than as part of the training procedure. If a pool of samples could be generated prior to training and used as standard data augmentation (e.g. sampling randomly from the pool in each epoch), it would simplify the training procedure considerably. The Additional Result section has a "gold standard" baseline, which might actually be the baseline I request, but is not very clearly explained. The authors should describe how the "gold standard" is created - how long the offline MCMC has been run to create the gold standard (is it fully converged?), and what they mean when they write that the "MCMC is restarted in every epoch during training". If the method is indeed better than the offline method, it would be helpful if the authors could describe explicitly why they think that their approach is better than simple data augmentation (currently, it says "which can be attributed to the guarantees it provides to the learning framework" - which was not clear to me. Which guarantees are these?)
Reproducibility
As far as I could see, no code was shared as part of this submission, and there was also no Code Availability statement in the manuscript. The authors should state if they intend to share the code.
Minor comments
Caption, Figure 1. "Just the alpha carbon and the side chain atom". "atom" -> "atoms"
"Traditionally, protein structures are solved by X-ray crystallography or cyro-EM". "cyro-EM" -> "cryo-EM"
"and the obtained structure is normally considered a unique 3D conformation of the molecule." It is unclear what "unique" means here". Do you mean "representative"?
Proposition 3.3 and Proposition 3.4. The authors should state if these results are any different from the general results known for MCMC simulations of proteins. I would think that it is well known that transitions kernels such as the one described would lead to a Markov chain with a unique steady state distribution. If this is indeed the case, the authors should state that they are simply restating known results. If not, they should state how their result is different.
"Then, a simple way to obtain conformer invariant representations of protein x_j (apart from using trivial functions such as a constant function or function independent of X_v)" What is X_v here? |
ICLR | Title
Conditional Invariances for Conformer Invariant Protein Representations
Abstract
Representation learning for proteins is an emerging area in geometric deep learning. Recent works have factored in both the relational (atomic bonds) and the geometric aspects (atomic positions) of the task, notably bringing together graph neural networks (GNNs) with neural networks for point clouds. The equivariances and invariances to geometric transformations (group actions such as rotations and translations) so far treat large molecules as rigid structures. However, in many important settings, proteins can co-exist as an ensemble of multiple stable conformations. The conformations of a protein, however, cannot be described as input-independent transformations of the protein: Two proteins may require different sets of transformations in order to describe their set of viable conformations. To address this limitation, we introduce the concept of conditional transformations (CT). CT can capture protein structure, while respecting the constraints on dihedral (torsion) angles and steric repulsions between atoms. We then introduce a Markov chain Monte Carlo framework to learn representations that are invariant to these conditional transformations. Our results show that endowing existing baseline models with these conditional transformations helps improve their performance without sacrificing computational efficiency.
1 INTRODUCTION
The literature on geometric deep learning has achieved much success with neural networks that explicitly model equivariances (or invariances) to group transformations (Cohen & Welling, 2016; Maron et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020). Among applications to physical sciences, group equivariant graph neural networks and transformers have specifically found applications to small molecules, as well as large molecules (e.g. proteins) with tremendous success (Klicpera et al., 2020; Anderson et al., 2019; Fuchs et al., 2020; Hutchinson et al., 2021; Satorras et al., 2021; Batzner et al., 2021). Specifically, machine learning for proteins (and 3D macromolecular structures in general) is a rapidly growing application area in geometric deep learning, (Bronstein et al., 2021; Gerken et al., 2021). Traditionally, proteins have been modeled using standard 3D CNNs (Karimi et al., 2019; Pagès et al., 2019), graph neural networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017), and transformers (Vaswani et al., 2017). More recently, several works Jing et al. (2020; 2021); Jumper et al. (2021); Hermosilla et al. (2021) have enriched the above models with neural networks that are equivariant (invariant) to transformations from the Euclidean and rotation groups. While equivariance (invariance) to the transformations in these groups are necessary properties for the model, unfortunately, they are limited to only capture rigid transformations of the input object.
However, these models may not yet account for all invariances of the input pertinent to the downstream task. And the transformations they are invariant to do not depend on the input. For instance, invariance to the Euclidean group restricts the protein representation to act as if the protein were a rigid structure, regardless of the protein under consideration. However, treating proteins as rigid structures may not be optimal for many downstream tasks. And different proteins may have different types of conformations (protein 3D structures with flexible side chains) (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016). The existing rigid body assumption in protein representations may hurt these methods in datasets & downstream tasks that require protein representations to be invariant to a specific set of protein conformations. For example, for most proteins, regardless of their side chain conformation (as long as viable) under consideration – their protein fold class/ other scalar properties remain the same, their mutation (in)stability remains unaltered, protein ligand binding affinity (apart from changes at the ligand binding site) remains the same, etc.
In light of this limitation of current methods, a question naturally arises: Is it possible to learn conformer-invariant protein representations?
Our Approach: We propose a representation method where the set of symmetries in the task are input-dependent, which we denote conditional invariances. For our specific application, we model every protein as a directed forest, where every amino acid forms a directed tree. We leverage the constructed directed forest of the protein to sample viable protein conformations (where viability is checked with Protein structure validation tools such as Molprobity (Davis et al., 2007; Chen et al., 2010)), which is additionally coupled with a Markov chain Monte Carlo (MCMC) framework to create data augmentations to train the neural network to learn conformer invariant representations.
Our contributions can be summarized as follows:
• We provide guiding principles for defining conditional invariant representations as inductive biases, which serves as a generalization of group invariant neural networks.
• We then provide a principled strategy to sample conformations for any given protein from the support of its protein conformer distribution (which consists of all its viable protein conformations), which captures their true flexibility. Viable conformations respect domain-specific constraints such as those on dihedral angles and steric repulsions, among others. This is particularly useful in the scenario when the set of viable transformations is different across different proteins.
• Further, we develop an MCMC-based learning framework which guides the sampling of protein conformations such that the (asymptotic empirical) average representation obtained by all viable conformations of a protein is identical.
• Finally, we perform an experimental evaluation of our proposal, where endowing baseline models with our proposed strategy (via an implicit data augmentation scheme) shows noticeable improvements on multiple different classification and regression tasks on proteins.
2 CONDITIONAL INVARIANCES FOR PROTEINS
The symmetry of an object is the set of transformations that leaves the object invariant. The notion of symmetries, expressed through the action of a group on functions defined on some domain, has served as a fundamental concept of geometric deep learning. In this section, we start by defining the concept of input-dependent conditional symmetries and then proceed to specifically tailor these conditional symmetries so that they are both computationally efficient and useful for representing protein conformations. Some group theoretic preliminaries are presented in Appendix A.2. Definition 2.1 (Conditionally symmetric-invariant functions). A function f : Ω → R is said to be conditionally symmetric-invariant if
f(t_x · x) = f(x), ∀t_x ∈ S_x, ∀x ∈ Ω, where S_x is a set of transformations unique to element x and t_x : Ω → Ω.
It is easy to see that conditionally invariant functions are a generalization of group invariant functions, where S_x ≡ G ∀x ∈ Ω and G is a group. The above definition is motivated by the fact that representations for proteins may not necessarily be limited to be invariant just to group actions, but to a more general set of protein-specific transformations. We detail this motivation next.
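Before turning to that motivation, here is a small illustrative sketch (not an implementation detail of our method) of the flavor of invariance in Definition 2.1: a function that depends only on a fixed "backbone" subset of points is unchanged by any transformation that moves the remaining points. The coordinates and the backbone index set below are made up purely for illustration.

```python
import numpy as np

def backbone_descriptor(coords, backbone_idx):
    """Sum of pairwise distances among the backbone atoms only.

    Any transformation that leaves the backbone atoms fixed (e.g. one that
    only moves side-chain atoms) leaves this value unchanged -- a toy example
    of a conditionally invariant function in the sense of Definition 2.1.
    """
    bb = coords[backbone_idx]                       # (k, 3) backbone coordinates
    diffs = bb[:, None, :] - bb[None, :, :]         # pairwise difference vectors
    return np.sqrt((diffs ** 2).sum(-1)).sum() / 2  # sum of pairwise distances

rng = np.random.default_rng(0)
coords = rng.normal(size=(8, 3))                    # hypothetical 8-atom fragment
backbone_idx = np.array([0, 1, 2, 3])               # first four atoms play the backbone role

# A transformation t_x that perturbs only the non-backbone atoms.
perturbed = coords.copy()
perturbed[4:] += rng.normal(scale=0.5, size=(4, 3))

print(np.isclose(backbone_descriptor(coords, backbone_idx),
                 backbone_descriptor(perturbed, backbone_idx)))  # True
```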
A protein molecule is composed of amino acids (say n amino acids), where each atom in the protein belongs to one amino acid ∈ {1, . . . , n}. Excluding the hydrogen atoms, every amino acid contains four atoms known as the backbone atoms (see Figure 1), and other atoms which belong to the side chain. Traditionally, protein structures are solved by X-ray crystallography or cryo-EM and the obtained structure is normally considered a unique 3D conformation of the molecule. Molecules available via the Protein Data Bank (Berman et al., 2000) generally include only unique sets of coordinates to demonstrate the 3D structures, which lead to the unique answer as ‘gold standard’ in structure prediction, such as protein structure prediction competitions - CASP (Kryshtafovych et al., 2019). Under these assumptions protein side-chain conformations are usually assumed to be clustered into rotamers, which are rigid conformations represented by discrete side-chain dihedral angles.
However, works over the past decade (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016) have observed that — while the backbone atoms of all n amino acids in a protein molecule together (i.e. 4n atoms) form a rigid structure — its side chains can exhibit flexible (continuous and discrete) conformations beyond clustered rotamers. The underlying goal of our work is to capture this inherent flexibility of proteins and to ensure that different conformers of the same protein get identical representations (more details in Appendix A.10). The above problem of capturing protein conformations is further compounded by the fact that protein conformations are oftentimes unique to the protein - i.e. the conformations exhibited by a protein are input (protein) specific. The desideratum, therefore, is a model which outputs conformation invariant representations even when the conformations (symmetries) exhibited by a protein vary across proteins and only a single protein conformer is available.
We denote a protein as a tuple p = (V, X_s, X_p) (data from pdb files), where V is the set of nodes (every atom is a node) in the protein, X_s ∈ R^{m×d}, d ≥ 1, is the scalar atom feature matrix associated with the atoms, and X_p ∈ R^{m×3} (where m denotes the number of atoms in the protein) are the positional coordinates associated with the atoms in the protein. Without loss of generality, we number the nodes in V = {1, . . . , m} following the same ordering as the rows in X_s, X_p.
Definition 2.2 (Rigid Backbone Protein Conformations). Consider an m-atom protein p with atom positional coordinates X_p ∈ R^{m×3} (recall that by definition p contains information about X_p), and let C_p ⊂ R^{m×3} denote the set of viable conformations of p, which keep the positional coordinates associated with the backbone fixed. We use T_p ⊂ R^{m×3×3} (where a 3×3 matrix is associated with every single atom) to denote the set of non-isometric transformations of p which yield the set C_p, i.e., ∀c_p ∈ C_p, ∃t_p ∈ T_p, s.t. for an atom with index i ∈ {1, . . . , m}, the new position of atom i is X_p[i] t_p[i]^T = c_p[i], with X_p[i] ∈ R^{1×3}, t_p[i] ∈ R^{3×3}.
T_p forms a set of transformations which act on the protein atomic coordinates via matrix multiplication. Two elements of T_p (with corresponding matrices for individual atoms, aggregated over the protein as a whole) may or may not combine via matrix multiplication (as the binary operation) to result in another element of T_p, i.e., matrix multiplication is not necessarily closed in T_p. A concrete example of the non-group structure of protein conformations is provided in Appendix A.3.
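The per-atom transformation in Definition 2.2 is a batched matrix product, sketched below for illustration; the rotations are arbitrary stand-ins for a viable t_p, and the atom indices and coordinates are hypothetical.

```python
import numpy as np

def apply_conformer_transform(X_p, t_p):
    """Definition 2.2 sketch: c_p[i] = X_p[i] @ t_p[i]^T for every atom i."""
    return np.einsum('ij,ikj->ik', X_p, t_p)

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

m = 6
X_p = np.random.default_rng(1).normal(size=(m, 3))  # hypothetical 6-atom fragment
t_p = np.stack([np.eye(3)] * m)                      # identity matrices: atoms 0-3 (the "backbone") stay put
t_p[4], t_p[5] = rot_z(0.4), rot_z(-1.1)             # rotate two hypothetical side-chain atoms

c_p = apply_conformer_transform(X_p, t_p)
print(np.allclose(c_p[:4], X_p[:4]))                 # backbone coordinates unchanged -> True
```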
Unfortunately, sampling transformations from Tp to obtain viable protein conformations is an unattainable task, since this would entail first sampling a transformation from Rm×3×3 and then verifying if the transformation is viable (and the vast majority of transformations are not viable). We will address this issue using an MCMC method - which is then combined with MCGD training of the neural network (Sun et al., 2018) to learn conformation invariant protein representations where different viable conformations are obtained via MCMC are used in every batch. Markov Chain Gradient Descent (MCGD) is a variant of Stochastic Gradient Descent (SGD), where the samples are drawn from the trajectory of a Markov Chain. We note that, while there have been prior Monte Carlo and MCMC based techniques for sampling protein conformations (Boomsma et al., 2013; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009), the overarching goals of the existing MCMC methods are significantly different compared to ours. The existing methods define a distribution over the conformations (based on the properties of the bonds, etc.) and then sample highly probable conformations from this distribution. Our method, is much simpler and only requires that the chain being used is ergodic and is invariant to the actual form of the distribution. Therefore, we devise a simple chain whose transitions are easier to sample and don’t require us to compute conditionals of an energy based model. However, we also note that the existing Markov chains can be used as drop-in replacements to sample conformations as part of our framework. Before specifying our MCMC procedure, we specify the invariance requirements for learning protein representations.
A symmetric-invariant function for proteins (per Definition 2.1), should be invariant to (i) transformations which change its atomic coordinates to any of its rigid backbone conformations in Cp (ii) transformations which change the atomic coordinates of all its atoms via rigid body transformations — group actions such as rotations/ translations (iii) a combination of the two above which transforms it into one of its viable conformations and is then further acted upon by rigid body transformations.
3 OBTAINING VIABLE CONFORMATIONS
Before we describe our MCMC procedure to sample atomic coordinates from the set of rigid backbone conformations, we need a way to sample from the local neighborhood of a conformation, e.g., with G_i = SO(3) ∀i ≠ 1. We note that in protein molecules not all transformations about a node would be allowed, due to steric repulsions between atoms as well as potential overlaps of atoms.
3.1 A SIMPLE PROTEIN CONFORMATION SAMPLING STRATEGY
Akin to Irbäck & Mohanty (2006), to efficiently sample a candidate conformation from Cp, we will follow a multi-step approach: First, we (i) construct a directed tree for every amino acid in the protein to capture the inherent flexibility in protein structures. Then, we (ii) leverage a directed forest of the protein (described next) to transform its atomic coordinates and check for viability.
Part 1. Directed forest construction. Using the input protein molecule, we construct a directed forest using the following three-step procedure:
1. Each amino acid in the protein gets its own directed tree. The atoms in the amino acid's backbone form the base (root) nodes (i.e., there can be multiple base nodes in every amino acid). This is illustrated as node 1 in Figure 2's tree. The set of base nodes of all the amino acids in the protein are jointly referred to as the base nodes of the protein (or equivalently of the directed forest).
2. The atoms adjacent to the amino acid's backbone via a covalent bond become their immediate children in the tree. For example, the node SC1 forms the child of αC in Figure 1.
3. The other covalent bonds of the children establish the grandchildren of the root, and so on, until all the atoms in the amino acid are part of the tree. For example, SC2 and SC3 become the children of the node SC1 in the directed tree constructed for Figure 1.
Figure 2 is an illustration of a directed tree constructed with the above procedure, where node 1 is the root node, with nodes 2, 4 as its children, and so on. It is important to note that the above procedure ensures that, in the constructed tree, each (non-root) node has a single parent and may have arbitrarily many children. Further, cycles/rings which are present in the molecule are broken on the basis of bond length, i.e., longer bonds are chosen before shorter bonds. Ties between same-length bonds are broken arbitrarily. While multiple directed forests can be created for a given protein since bonds of equal length are broken arbitrarily, in our implementation, given a protein, the tree is fixed because the directed forest for every protein is created exactly once (in the preprocessing step) - where the ties are broken deterministically based on their ordering in the "pdb file". A code sketch of this construction is given below.
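The sketch below illustrates the tree construction for a single amino acid, assuming the covalent bonds and their lengths are available; the atom names and bond lengths are illustrative placeholders rather than values from our datasets.

```python
from collections import defaultdict

def build_directed_tree(backbone, bonds):
    """Sketch of the directed tree construction for one amino acid.

    backbone: set of backbone atom ids (the base/root nodes of the tree).
    bonds:    list of (atom_a, atom_b, bond_length) covalent bonds.
    Returns a dict mapping parent -> list of children.

    Cycles/rings are broken by always attaching the longest available bond
    first, mirroring the tie-breaking rule described in the text.
    """
    children = defaultdict(list)
    visited = set(backbone)
    remaining = list(bonds)
    while True:
        # bonds that connect an already-placed atom to a new one
        frontier = [e for e in remaining if (e[0] in visited) != (e[1] in visited)]
        if not frontier:
            break
        best = max(frontier, key=lambda e: e[2])   # prefer longer bonds
        a, b, _ = best
        parent, child = (a, b) if a in visited else (b, a)
        children[parent].append(child)
        visited.add(child)
        remaining.remove(best)
    return dict(children)

# Hypothetical amino acid: backbone atoms N, CA, C, O and a short side chain CB-CG-CD.
bonds = [('N', 'CA', 1.46), ('CA', 'C', 1.52), ('C', 'O', 1.23),
         ('CA', 'CB', 1.53), ('CB', 'CG', 1.52), ('CG', 'CD', 1.51)]
print(build_directed_tree({'N', 'CA', 'C', 'O'}, bonds))
# -> {'CA': ['CB'], 'CB': ['CG'], 'CG': ['CD']}
```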
Next, we shall look at how the atomic coordinates are transformed using the directed tree.
Part 2. Conditional transformations of atomic coordinates. Let X_p be the available input atomic coordinates of the protein. To obtain a new candidate protein conformation (say X'_p ∈ R^{m×3}), we sample, uniformly, one of the directed trees from the forest. Next, we transform the atomic coordinates of the subset of atoms in this directed tree. This set is denoted by A ⊂ V. That is, in the candidate conformation, X'_p[i] = X_p[i], ∀i ∈ V \ A — or equivalently t_p[i] = I, ∀i ∈ V \ A. Starting with the root node, we use breadth first search to traverse the directed tree and update all the descendants of the node currently being considered. For the backbone atoms, we leave X'_p[i] = X_p[i] (or equivalently t_p[i] = I), i.e., atomic coordinates are unchanged for backbone atoms. Next, we use pointed sets to describe approximate transformations on a given protein through a directed tree.
Definition 3.1 (Pointed sets). A pointed set is an ordered pair (X,x0), where X is a set and x0 ∈ X is called the basepoint.
For instance, let M_i = {X_p[j] : j ∈ N_i}, where N_i is the set containing i and all its descendants in its directed tree. Then, (M_i, X_p[i]) is a pointed set.
Let the parent of node i be denoted by ◦, and let G_i^{(g_◦ · X_p[◦])} (in practice, SO(3)) be the group from which actions g_i ∈ G_i^{(g_◦ · X_p[◦])} are sampled uniformly — actions that can transform the positional coordinates of node i and its descendants about its parent in the directed tree, where g_◦ · X_p[◦] is the transformed atomic coordinate of the parent of node i (we use BFS - so a top down approach). In the case of the root node (for example node 1 in Figure 2), the actions are just drawn from G_root (or G_1 in Figure 2).
The associated transformations of the atomic coordinates of the atoms in N_i can be given by the mapping h_i : (M_i, X_p[i]) → (R^{|N_i|×3}, g_◦ · X_p[◦] + g_i · (X_p[i] − X_p[◦])), where the coordinates of node j ∈ N_i are updated as:

X'_p[j] ← g_◦ · X_p[◦] + g_i · (X_p[j] − X_p[◦])    (1)

If node j ≠ i, its coordinates X'_p[j] can be further updated as we traverse the tree. When G_i^{(g_◦ · X_p[◦])} = SO(3) ∀i ∈ V, every node in the directed tree can be rotated about its parent, as shown in Figure 3 (Appendix A.4) - which depicts the side chain of the amino acid shown in Figure 1.
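A sketch of the traversal implied by Equation (1) is shown below: each non-root node's subtree is rotated about its (already updated) parent, with SO(3) actions sampled as random rotation matrices. Function and variable names are illustrative only, not the implementation used in our experiments.

```python
import numpy as np
from collections import deque

def random_rotation(rng):
    """Uniform random 3x3 rotation via QR decomposition (a standard trick)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))    # fix the sign ambiguity of the QR factors
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]         # ensure det(q) = +1, i.e. a proper rotation
    return q

def subtree(children, i):
    """Node i together with all of its descendants in the directed tree."""
    out, stack = [], [i]
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(children.get(n, []))
    return out

def transform_tree(coords, children, root, rng):
    """BFS update in the spirit of Equation (1): each non-root node's subtree is
    rotated about its (already updated) parent position; the root stays fixed."""
    new = coords.copy()
    queue = deque((c, root) for c in children.get(root, []))
    while queue:
        i, parent = queue.popleft()
        g, pivot = random_rotation(rng), new[parent]
        for j in subtree(children, i):
            new[j] = pivot + g @ (new[j] - pivot)
        queue.extend((c, i) for c in children.get(i, []))
    return new

# Toy usage: a 4-atom chain 0-1-2-3 rooted at atom 0 (playing the backbone role).
rng = np.random.default_rng(0)
coords = np.array([[0.0, 0, 0], [1.5, 0, 0], [2.5, 1, 0], [3.5, 1, 1]])
children = {0: [1], 1: [2], 2: [3]}
new = transform_tree(coords, children, 0, rng)
print(np.allclose(np.linalg.norm(new[1] - new[0]),
                  np.linalg.norm(coords[1] - coords[0])))  # bond length to the parent is preserved
```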
Algorithm 1 Sampling a viable conformation of c_p
1: Input: Conformation c_p of protein p
2: Obtain the directed forest corresponding to the protein, with n directed trees, where n is the number of amino acids in the protein
3: repeat
4:    Sample y ∼ UNIF({1, 2, · · · , n}) and obtain the corresponding directed tree
5:    Traverse the directed tree via BFS and update the atomic coordinates via Equation (1) to obtain c'_p, a candidate conformation of c_p — where group actions are always sampled uniformly from SO(3)
6:    Check if c'_p is a valid protein conformation via protein structure validation tools
7: until c'_p is a valid conformation
8: Return: c'_p, a valid conformation of c_p
However, each candidate X'_p obtained via the above transformation process may not belong to C_p. To that end, protein structural validation tools may be used to check the validity of these candidates. MolProbity (Davis et al., 2007; Chen et al., 2010; Williams et al., 2018) is such a tool, which outputs scores corresponding to different metrics (such as the number of dihedral (torsion) angles outside the allowed threshold, the number of atom-atom clashes arising due to steric repulsions, etc.) which can be compared with the originally provided conformation X_p. Note that, while MolProbity is not perfect and we may end up developing models more invariant than just to viable conformations, alternative (more precise) tools, including those which allow for the evaluation of molecular forcefields, can be used to check validity. Additionally, the goal of this work is not to use MolProbity/other protein structure validation tools as a part of generative models, and a more invariant model does not necessarily affect performance on unseen but valid protein test data.
Our algorithm to sample viable conformations is summarized in Algorithm 1. The repeat-loop proposes a conformation in the neighborhood of the current conformation c_p, which is only accepted when the proposed conformation c'_p is a valid conformation. As such, it is an acceptance-rejection sampling algorithm. We will see later that the actual sampling probability distribution is not required to be known by our procedure. Our MCMC procedure, defined next, only requires Algorithm 1 to satisfy certain conditions, which we argue it does.
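Viewed as code, Algorithm 1 is a short acceptance-rejection loop; the sketch below uses placeholder callables for the proposal (steps 4-5, e.g. the tree transformation above) and the validity check (step 6, e.g. MolProbity), and adds a retry cap for safety that the algorithm itself does not need. All names are illustrative.

```python
import numpy as np

def sample_viable_conformation(coords, propose_fn, is_valid, max_tries=1000):
    """Algorithm 1 (sketch): acceptance-rejection sampling of a viable conformer.

    propose_fn(coords): implements steps 4-5 (pick an amino acid's directed tree
                        and apply Equation (1) updates).
    is_valid(coords):   stands in for a structure validation tool such as MolProbity.
    """
    for _ in range(max_tries):
        candidate = propose_fn(coords)   # steps 4-5: propose a nearby conformation
        if is_valid(candidate):          # step 6: validity check
            return candidate             # step 8: return the accepted conformer
    raise RuntimeError("no valid conformation found within max_tries")

# Trivial usage with toy stand-ins (a real run would plug in the tree transform
# and an actual validation tool).
rng = np.random.default_rng(0)
coords = rng.normal(size=(5, 3))
jitter = lambda c: c + rng.normal(scale=0.05, size=c.shape)
keep_compact = lambda c: np.linalg.norm(c) < 10.0
print(sample_viable_conformation(coords, jitter, keep_compact).shape)  # (5, 3)
```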
3.2 SAMPLING CONFORMERS VIA MCMC
We now describe the MCMC procedure that, starting at any conformer configuration c_p^{(0)} ∈ C_p, will, in steady state, sample conformations according to a unique stationary distribution which depends on the protein p, where C_p is the set of all viable conformations of p as defined in Definition 2.2.
To that end, we first define the transition kernel of the Markov chain. Algorithm 1 samples a conformation c'_p in the neighborhood of a given conformation c_p using a directed forest. The neighborhood N(c_p) of the conformation c_p is loosely defined as the set of all possible conformations c'_p which may result from running Algorithm 1 with c_p as input. We denote the probability distribution induced by Algorithm 1 as the transition kernel κ(c'_p | c_p), which has support N(c_p).
Definition 3.2 (Conformer Sampling Markov Chain (CSMC) Φ_p). We define the conformer sampling Markov chain Φ_p as a time-homogeneous Markov chain over the state space C_p with transition kernel κ as defined above, where C_p is the set of valid conformations associated with a given protein p as defined in Definition 2.2.
The consistency of our learning procedure does not depend on the precise definition of κ. However, our procedure relies on the fact that any conformer c'_p ∈ C_p can be sampled by κ in a finite number of steps, starting from a conformer c_p. We use this fact to show that the chain Φ_p converges to a unique stationary distribution regardless of the initial conformer. (All proofs are in Appendix A.5.)
Proposition 3.3. Given the CSMC Φ_p from Definition 3.2, whose transitions are governed by κ which is implicitly defined by Algorithm 1: for any pair of conformers c_p, c'_p ∈ C_p, there exists τ_p < ∞, independent of c_p, such that P^{τ_p}_{Φ_p}(c_p, c'_p) > 0, where P^{τ_p}_{Φ_p} is the τ_p-step transition probability.
Next, we show the existence of a unique steady state distribution of our Markov chain.
Proposition 3.4. The CSMC Φ_p defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution π_p such that for all c_p ∈ C_p, ‖P^n_{Φ_p}(c_p, ·) − π_p(·)‖ ≤ C R^n, where C < ∞ and R < 1 are constants that depend on Φ_p, P^n_{Φ_p} is the n-step transition probability, and ‖·‖ is the ℓ_1 norm.
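As a quick numerical illustration of the geometric convergence guaranteed by Proposition 3.4, the toy chain below (an arbitrary 3-state ergodic kernel, not a protein chain) shows the worst-case ℓ_1 distance ‖P^n(c, ·) − π(·)‖ shrinking geometrically in n.

```python
import numpy as np

# A toy 3-state ergodic transition matrix standing in for the kernel of Phi_p.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

Pn = np.eye(3)
for n in range(1, 11):
    Pn = Pn @ P
    gap = np.abs(Pn - pi[None, :]).sum(axis=1).max()  # worst case over starting states
    print(f"n={n:2d}  max_c ||P^n(c,.) - pi||_1 = {gap:.2e}")
```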
Given that we now have the ability to draw samples from a Markov chain which achieves a unique stationary distribution π_p on C_p, we shall leverage this next in our learning framework to learn conformer invariant representations of proteins.
4 LEARNING FRAMEWORK
We shall employ a learning strategy, where we use the viable conformations obtained via the Markov chain to learn a function which outputs conformer invariant representations.
Let D = {(x_j, y_j)}_{j=1}^{N} be the input data (where we have a single conformer for every protein in the dataset). We shall consider a single data point (x_j, y_j) henceforth, where x_j = (V, X_s, X_p) is the input protein. We consider a supervised learning setting where y_j is the associated target. We consider both classification and regression tasks.
Let C_j = {c_j^i} be the set of viable conformations of protein x_j. We only consider viable conformations which differ only in the atomic coordinates matrix X_p. For a given protein x_j, we shall use x_j^i to denote protein x_j but with the X_p of the original protein modified by c_j^i, and use S_j = {x_j^i} to denote the set of all viable x_j^i, i.e., the state space.
Let f : Ω → R^d, d > 0, be any function with learnable parameters θ^f (e.g., any neural network model such as a 3D CNN, GNN, Transformer, LSTM, etc.) which takes a protein (in the form (V, X_s, X_p)) as input and outputs protein representations. The function f need not necessarily output conformer invariant representations of the protein x_j. Then, a simple way to obtain conformer invariant representations of protein x_j (apart from using trivial functions such as a constant function or a function independent of X_p) is to compute its expected value over all its viable conformations. We use f̄ to denote a function which outputs conformer invariant representations of a protein:

f̄(x_j; θ^f) = E_{X∼π_j}[f(X; θ^f)]    (2)

where π_j(·) is the steady state distribution of the Markov chain associated with the protein x_j. As such, f̄(x_j; θ^f) = f̄(x'_j; θ^f) for any viable protein conformation x'_j ∈ S_j. Subsequently, to learn an optimal f̄ (which we denote by f̄*), we wish to minimize the loss, defined as follows:
L(D; θ^f, θ^ρ) = (1/N) Σ_{j=1}^{N} L̂(y_j, ρ(f̄(x_j; θ^f); θ^ρ))
             = lim_{k→∞} (1/N) Σ_{j=1}^{N} L̂(y_j, ρ((1/k) Σ_{i=1}^{k} f(x_j^i; θ^f); θ^ρ))    (3)

where L̂ is a convex loss function (e.g., the cross entropy loss), ρ is some differentiable function with parameters θ^ρ (in practice, an MLP), and lim_{k→∞} (1/k) Σ_{i=1}^{k} f(x_j^i; θ^f) can be employed as an asymptotically unbiased and consistent estimate of f̄, where x_j^i is the i-th sample from the Markov chain corresponding to the protein x_j.
Since Equation (3) is computationally intractable, we employ a surrogate for the loss given by:
J(D; θ^f, θ^ρ) = lim_{k→∞} (1/N) Σ_{j=1}^{N} (1/k) Σ_{i=1}^{k} L̂(y_j, ρ(f(x_j^i; θ^f); θ^ρ))    (4)
Observe that in Equation (4), the expectation over the conformations is now outside the L̂ and ρ functions while still remaining conformation invariant (however, the optimal parameters corresponding to minimizing J are different from minimizing L). Following (Murphy et al., 2019b;a), we note that, when ρ is the identity function, Equation (4) serves as an upper bound for Equation (3) via Jensen’s inequality. However, learning representations by averaging over multiple conformations (in training) is still computationally expensive (back-prop and high variance issues). Next, we show how MCGD (Sun et al., 2018) can be leveraged to make this tractable (more details in Appendix A.10).
For simplicity, we next show the convergence properties in the full-batch setting. Note that this procedure and the proofs are equally applicable in the mini-batch setting with minor modifications.
Full-batch training: We consider the data as a single batch B = {(x_1, y_1), ..., (x_j, y_j), ..., (x_N, y_N)} and, for a data point (x_j, y_j), denote the steady state distribution of the corresponding Markov chain by π_j and its state space by S_j.
We denote all learnable parameters of ρ ∘ f as θ ≡ (θ^f, θ^ρ) and consider θ ∈ Θ ⊆ R^m, m > 0, such that the function ρ ∘ f may be non-convex for some values of θ. We consider a sequence of step sizes (γ_k)_{k=0}^{∞} which satisfy:

Σ_k γ_k = +∞,   Σ_k ln k · γ_k^2 < +∞    (5)
and follow a gradient scheme where parameters are updated as:
θ_k = θ_{k−1} − γ_k Z_k,   k > 0    (6)

where Z_k = (1/N) Σ_{j=1}^{N} ∇_θ L̂(y_j, ρ(f(x_j^{k−1}; θ^f_{k−1}); θ^ρ_{k−1})) and x_j^{k−1} is the (k−1)-th sample from the Markov chain corresponding to the data point (x_j, y_j), and employ the Markov chain gradient descent procedure (Sun et al., 2018), where the loss in epoch k, k > 0, of training is obtained by

Ĵ_k = (1/N) Σ_{j=1}^{N} L̂(y_j, ρ(f(x_j^{k−1}; θ^f_{k−1}); θ^ρ_{k−1})),
which minimizes the surrogate loss given in Equation (4) when k is sufficiently large so that the Markov chain has converged to its unique steady state distribution.
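A minimal sketch of the resulting training loop is given below: in epoch k, each protein contributes the k-th state of its own conformation chain, and the step sizes follow γ_k ∝ 1/k, which satisfies (5). The encoder, targets, and chain-step function are toy stand-ins, not the models or data used in our experiments.

```python
import numpy as np
import torch

rng = np.random.default_rng(0)
dataset = [(rng.normal(size=(12, 3)), rng.normal()) for _ in range(8)]   # toy (coords, target) pairs
next_conformer = lambda coords: coords + rng.normal(scale=0.02, size=coords.shape)  # one chain step (stand-in)

encoder = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
loss_fn = torch.nn.MSELoss()
chain_state = [coords.copy() for coords, _ in dataset]   # chains start at the observed conformers

for k in range(1, 51):                                    # epoch k uses the k-th chain state
    gamma_k = 0.05 / k                                    # step sizes satisfying Eq. (5)
    total = 0.0
    for j, (coords, y) in enumerate(dataset):
        chain_state[j] = next_conformer(chain_state[j])   # advance protein j's Markov chain
        x = torch.as_tensor(chain_state[j], dtype=torch.float32)
        pred = encoder(x).mean()                          # toy pooled "representation"/prediction
        total = total + loss_fn(pred, torch.as_tensor(y, dtype=torch.float32))
    encoder.zero_grad()
    (total / len(dataset)).backward()                     # gradient Z_k of the epoch-k loss J_hat_k
    with torch.no_grad():
        for p in encoder.parameters():
            p -= gamma_k * p.grad                         # theta_k = theta_{k-1} - gamma_k Z_k
```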
Next, we list the set of assumptions we make to ensure convergence of our conformer invariant MCGD procedure. Assumption 4.1. We make the following assumptions:
1. For any θ ∈ Θ and x_j^k ∈ S_j, the function f is differentiable, ∀j.
2. sup_{θ∈Θ, x_j^k∈S_j} ||∇_θ ρ ∘ f(x_j^k)|| < +∞, i.e., the gradients are bounded.
3. ∀x_j^k ∈ S_j, ∀θ_1, θ_2 ∈ Θ, ||∇_{θ_1} ρ ∘ f(x_j^k) − ∇_{θ_2} ρ ∘ f(x_j^k)|| < L ||θ_1 − θ_2|| for some L ≥ 0, i.e., the gradients are L-Lipschitz.
4. E_{x_j^k ∼ π_j}[∇_θ ρ ∘ f(x_j^k)] = ∇_θ E_{x_j^k ∼ π_j}[ρ ∘ f(x_j^k)].
Next, we formally state the convergence of our optimization procedure to optimal parameters θ* which yield conformer invariant protein representations. Proposition 4.2. Let the step sizes satisfy (5) and the function parameters θ be updated as (6) and Assumption 4.1 hold, then the MCGD optimization enjoys properties of almost sure convergence to the optimal θ.
The use of the Markov chain gradient descent procedure for the conformer invariant learning problem optimizes the objective J in Equation (4), and thus has the following implication for how outputs should be calculated at inference time:
Inference time: We estimate f̄ using an empirical average of f evaluated over conformations (in practice, we average the final layer [before softmax] representations) visited by the Markov chain:

f̂_k(x_j; θ^f) = (1/k) Σ_{i=1}^{k} f(x_j^{(i)}; θ^f),    (7)

where x_j^{(i)} is the i-th state visited by the Markov chain started at x_j. Since the Markov chain has a unique steady state distribution, f̂_k(x_j; θ^f) is both asymptotically unbiased and consistent. That is,

lim_{k→∞} f̂_k(x_j; θ^f) = f̄(x_j; θ^f) almost surely.
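A sketch of the inference-time estimator in Equation (7) is given below; the representation function and chain-step function are placeholders for a trained encoder and one step of Algorithm 1, respectively.

```python
import numpy as np

def conformer_invariant_representation(f, coords, next_conformer, k=32):
    """Eq. (7) sketch: estimate f_bar(x_j) by averaging the representation f
    over k states visited by the protein's conformation Markov chain."""
    state, acc = coords, None
    for _ in range(k):
        state = next_conformer(state)        # advance the chain by one step
        rep = f(state)                        # representation of this conformer
        acc = rep if acc is None else acc + rep
    return acc / k

# Toy usage with stand-ins for the encoder and the chain step.
rng = np.random.default_rng(0)
f = lambda c: c.mean(axis=0)                  # toy "representation": mean coordinate
next_conformer = lambda c: c + rng.normal(scale=0.01, size=c.shape)
print(conformer_invariant_representation(f, rng.normal(size=(10, 3)), next_conformer))
```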
5 RESULTS
We evaluate our proposed augmentation procedure on multiple different tasks from the ATOM3D dataset (Townshend et al., 2020). Results are provided in Table 1.
TASKS ON ATOM3D DATASETS
ATOM3D (Townshend et al., 2020) is a unified collection of datasets concerning the 3D structure of proteins. These datasets are specifically designed to provide a benchmark for ML methods which operate on 3D molecular structure, and represent a variety of important structural, functional, and engineering tasks. As part of our evaluation, we perform the following tasks: (a) Protein Structure Ranking (PSR), (b) Ligand Efficacy Prediction (LEP), (c) Protein Mutation Stability Prediction (MSP), and (d) Ligand Binding Affinity (LBA). We describe each of the tasks in detail in Appendix A.7. For all the datasets and tasks we report the same metrics as proposed by the authors in Townshend et al. (2020).
MODEL, BASELINES AND DISCUSSION
We endow the vector gated GVP-GNN model (Jing et al., 2021), a GNN using GCN (Kipf & Welling, 2016) layers, and the E(n) GNN (Satorras et al., 2021) with conditional transformations from our proposed MCMC method. It is important to note that when the positions of the atoms are altered, the protein graphs which are inputs to the base encoders are changed appropriately in every epoch. As an ablation study, in the Appendix we also provide a strategy where all the transformations are created from the "gold standard" X_p rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training.
Looking at Table 1, we note that models augmented with our proposed conformer invariance framework, in general, outperform the baseline models in multiple tasks for which conformer invariance is a requirement. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here, where a GNN which doesn't explicitly incorporate any atomic positional information outperforms all other models that do have information about atomic coordinates.
Additional results and ablation studies are presented in Appendix A.9.
COMPUTATIONAL COMPLEXITY & SCALABILITY:
The directed forest for every protein is computed only once as a preprocessing step (a fixed ≈ 5-10 minutes per dataset). At every epoch, for a given protein, we select only one among all its constituent amino acids & perform a sequence of 3×3 matrix multiplications to obtain a new conformer. If k denotes the number of atoms in the side chain of an amino acid, and the height of a directed tree is on the order of log(k), the computational complexity to obtain a conformation is O(k log k) (where k ≈ 15 on average in our datasets), & this can be run in parallel for all proteins in a mini-batch. MolProbity (run in parallel again; it currently runs externally on a web server via a command line call) adds a 2s delay per minibatch (we could reduce this to milliseconds if we integrated MolProbity into our code). The rejection rate from MolProbity is also very small (less than 1 reject on average across 100 MolProbity calls). As an example, training (100 epochs) GVP-GNN - MSP requires ≈ 45 min, and LBA requires ≈ 34 min. In both cases our method requires an additional 4 min overhead. A complete training time table (on an Nvidia Tesla V100 GPU) for all models & datasets can be found in Appendix A.11.
6 RELATED WORK
Here, we present a high level summary of works related to ours. More detailed comparisons to the works mentioned below (and others) are presented in Appendix A.6.
Group Equivariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous symmetries of elements (e.g. images) by introducing group theoretic constraints as inductive biases in the neural network. While group equivariant neural network capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and applications of group equivariant neural networks.
Graph Neural Networks: Graph Neural Networks (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs.
Group Equivariant Graph Neural Networks: These networks combine continuous symmetries (Lie groups) with permutation equivariances and have found applications with resounding success on small molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021), which exhibit rigid body characteristics. Employing (Farina & Slade, 2021) would make the neural network excessively invariant and allow the protein to be more flexible (allowing unviable conformations) than it truly is. More recently, they have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work – (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) which sample protein conformations by either defining/inheriting a distribution over the conformations (majorly based on the properties of the bonds, etc.) These methods can seamlessly be plugged into our framework as long as the MCMC is ergodic.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021).
Generative Models: Conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also gained attention recently with the goal to predict 3d structure of molecules given their 2d structure - our objective in this work is very different, but can be used to improve predictions of the aforementioned models.
7 CONCLUSIONS
This work addresses the limitations of current protein representation learning methods which are unable to learn conformer invariant representations — and hence unable to capture the inherent flexibility present in protein side chains pertinent to many downstream tasks. To address these, we introduced conditional transformations to capture protein structure, while respecting the restrictions posed by constraints on dihedral (torsion) angles and steric repulsions between atoms. Subsequently, we introduced a Markov chain Monte Carlo based framework to learn representations that are invariant to these conditional transformations.
A APPENDIX
A.1 COMMONLY USED ACRONYMS IN THE PAPER
1. G - group
2. S_x - Set of transformations unique to element x
3. t_x - Transformation action which transforms the element x
4. m - Number of atoms in the protein under consideration
5. n - Number of amino acids in the protein under consideration
6. X_s - Matrix of scalar features associated with atoms in the protein p
7. X_p - Matrix of atomic coordinates associated with atoms in the protein p
8. C_p - Set of conformations of protein p
9. SO(3) - Group of all rotations in 3D space
A.2 GROUP THEORY PRELIMINARIES
Definition A.1 (Group). A group is a set G equipped with a binary operation · : G × G → G obeying the following axioms:
• for all g1, g2 ∈ G, g1 · g2 ∈ G (closure).
• for all g1, g2, g3 ∈ G, g1 · (g2 · g3) = (g1 · g2) · g3 (associativity).
• there is a unique e ∈ G such that e · g = g · e = g for all g ∈ G (identity).
• for all g ∈ G there exists g^{-1} ∈ G such that g · g^{-1} = g^{-1} · g = e (inverse).
Definition A.2 (Group invariant functions). Let G be a group acting on a vector space V. We say that a function f : V → R is G-invariant if f(g · x) = f(x) ∀x ∈ V, g ∈ G.
Definition A.3 ((Left) Group Action). For a group G with identity element e and a set X, a (left) group action α of G on X is a function α : G × X → X that satisfies the following two conditions:
1. Identity: α(e, x) = x, ∀x ∈ X
2. Compatibility: α(g, α(h, x)) = α(gh, x)
We will use a short hand of g · x for α(g, x) when the action being considered is clear from context.
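To make the group axioms concrete, the snippet below numerically checks closure, identity, and inverses for 3×3 rotation matrices (elements of SO(3), the group used throughout this paper); it is purely illustrative.

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

g1, g2, e = rot_x(0.7), rot_z(-1.2), np.eye(3)
g3 = g1 @ g2                                          # closure: the product of two rotations
print(np.allclose(g3 @ g3.T, e), np.isclose(np.linalg.det(g3), 1.0))  # ...is again a rotation
print(np.allclose(e @ g1, g1) and np.allclose(g1 @ e, g1))            # identity element
print(np.allclose(g1 @ g1.T, e))                      # inverse of a rotation is its transpose
```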
A.3 EXAMPLE DEMONSTRATING NON GROUP STRUCTURE OF A SET OF PROTEIN CONFORMATIONS
Consider X to be a protein with n atoms and m amino acids, and let the set of viable conformations of X be {X_1, X_2, . . . , X_i, . . . , X_p}. Let X_1 be the conformation available in our dataset.
In each step of the transition, only a few atoms (<< n) (the atoms in a single amino acid of the entire protein - where the protein is in general made up of multiple hundreds of amino acids (m)) are subjected to an action from the SO(3) group. The positions of all other atoms (outside that amino acid) in the side chain remain unaltered. So, in the R^{n×3×3} matrix, most of the 3×3 entries are the 3-dimensional identity matrix.
Let T_1^i ∈ R^{n×3×3}, where i ∈ {1, 2, . . . , p}, be the transformation which yields conformation X_i from X_1.
Now consider T_1^2 (subscript of 1 since we start from conformation X_1) which takes X_1 to X_2, and T_1^3 which takes X_1 to X_3, where T_1^2 and T_1^3 do not act on the same amino acid in the protein. Now, however, consider the case where performing T_1^2 and T_1^3 (or the other way around) sequentially would result in a case where atoms in two different amino acids would overlap (or come too close to each other, causing steric repulsion) - therefore resulting in a non viable conformation, i.e., a composition of T_1^2 and T_1^3 acting on X_1 would not be present in the set of all allowed transformations T_1 = {T_1^1, . . . , T_1^i, . . . , T_1^p}. Therefore the set T_1 is not closed and doesn't form a group.
Also, for two different conformations X_1 and X_2, their allowed transformation sets T_1, T_2 will not be identical (in the above example T_1^3 ∉ T_2). It is also easy to construct a case where ⋃_{i=1}^{p} T_i is not a group and all actions from this set acting on any given X_i will not necessarily result in a viable/valid conformation.
A.4 FLEXIBILITY ALLOWED BY OUR PROPOSED MODEL
Our proposed model allows every node in the directed tree to be rotated about its parent. For example, for the side chain shown in Figure 1 (Main Paper), the allowed flexibility is shown in Figure 3.
A.5 PROOFS OF PROPOSITIONS
First, we restate and prove Proposition 3.3.
Proposition A.4. Given the CSMC Φ_p from Definition 3.2, whose transitions are governed by κ which is implicitly defined by Algorithm 1 as described above: for any pair of conformers c_p, c'_p ∈ C_p, there exists τ_p < ∞, independent of c_p, such that P^{τ_p}_{Φ_p}(c_p, c'_p) > 0, where P^{τ_p}_{Φ_p} is the τ_p-step transition probability.
Proof. Proof by construction. We prove the proposition by showing that one can construct a path (c_p^{(1)} = c_p, . . . , c_p^{(t)} = c'_p) such that c_p^{(i)} ∈ C_p and κ(c_p^{(i+1)} | c_p^{(i)}) > 0, for all 0 < i < t, and t ≤ τ_p. The trivial case where c_p ≡ c'_p is proved since every group contains the identity element — sampling the identity element for every node in the directed tree yields the same conformation. Since we consider only non backbone-transforming conformations, for the non-trivial case, a maximum of m − 4n atoms can differ in positions between any two conformations - where m is the number of atoms in the protein and n is the number of amino acids in the protein. Both m, n are finite and we are dealing with continuous conformers (and continuous group actions about every node — groups are closed under their associated binary action, and SO(3) is path connected). So we can traverse between conformers (until we reach the desired conformer) sequentially in a finite number of steps, by using the constructed directed forest - selecting an amino acid (which doesn't violate the viability), fixing the positions of all other amino acids in the protein and rotating the side chain atoms in a single conformer to the final desired state. While this process may result in some side chains
being visited multiple times (due to viability constraints), considering continuous conformers and the SO(3) group (which is path connected) ensures we will never reach a state of deadlock. The second condition is satisfied because every group action has an inverse and we only use transformations from SO(3) for every node in the directed trees.
Next, we restate and prove Proposition 3.4.
Proposition A.5. The CSMC Φ_p defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution π_p such that for all c_p ∈ C_p, ‖P^n_{Φ_p}(c_p, ·) − π_p(·)‖ ≤ C R^n, where C < ∞ and R < 1 are constants that depend on Φ_p, P^n_{Φ_p} is the n-step transition probability, and ‖·‖ is the ℓ_1 norm.
Proof. By Proposition 3.3, Φ_p satisfies Doeblin's condition as defined on page 396 of Meyn & Tweedie (2012), which states that for c_p, c'_p ∈ C_p, P^{τ_p}_{Φ_p}(c_p, c'_p) > ε for some ε > 0.¹ The uniform ergodicity then holds due to Theorems 16.2.3 and 16.2.1 from Meyn & Tweedie (2012).
Next, we restate and prove Proposition 4.2. We also restate the required assumptions for the proposition. Assumption A.6. We make the following assumptions:
1. For any θ ∈ Θ and x_j^k ∈ S_j, the function f is differentiable, ∀j.
2. sup_{θ∈Θ, x_j^k∈S_j} ||∇_θ ρ ∘ f(x_j^k)|| < +∞, i.e., the gradients are bounded.
3. ∀x_j^k ∈ S_j, ∀θ_1, θ_2 ∈ Θ, ||∇_{θ_1} ρ ∘ f(x_j^k) − ∇_{θ_2} ρ ∘ f(x_j^k)|| < L ||θ_1 − θ_2|| for some L ≥ 0, i.e., the gradients are L-Lipschitz.
4. E_{x_j^k ∼ π_j}[∇_θ ρ ∘ f(x_j^k)] = ∇_θ E_{x_j^k ∼ π_j}[ρ ∘ f(x_j^k)].
Proposition A.7. Let the step sizes satisfy (5) and the function parameters θ be updated as (6) and Assumption 4.1 hold, then the MCGD optimization enjoys properties of almost sure convergence to the optimal θ.
Proof. Given that each protein has an associated time homogeneous Markov Chain with a unique steady state, independent of other proteins, the set of proteins in a mini-batch also form a Markov chain with a unique steady state. We then leverage Corollary 2 (Page 12) of Sun et al. (2018) along with Proposition 4.2 to ensure almost sure convergence to the optimal θ.
A.6 EXTENDED RELATED WORK
Here, we elaborate on the related work discussed in Section 6.
Group Equivariant and Invariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous group symmetries of elements (e.g. images, point clouds). In this work, we learn representations which are invariant not just to group symmetries, but to input dependent sets of transformations. To the best of our knowledge, our work is the first to consider conditional (input dependent) invariances. Prior works on learning invariant models have leveraged Monte Carlo procedures (Finzi et al., 2020; Murphy et al., 2019b) to learn to be invariant to transformations of the input. Alternatively, our work constructs a Markov chain with a unique steady state and leverages MCGD (Sun et al., 2018) to make it computationally tractable. Secondly, the aforementioned approaches are limited to being invariant to transformations from any specified Lie group with a surjective exponential map/ permutation group, while our work is not limited to groups. Thirdly, our theory ensures that every example/ object in the dataset can have a different set of input dependent, conditional transformations - and in fact can be seen as a generalization of the above works. In fact, the ablation study that we perform (results provided in Appendix A.9 - Table 4) uses a Monte Carlo estimator, and our MCMC procedure yields better performance than the Monte Carlo estimator. While group equivariant neural networks capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). Gauge symmetries require the manifold to be smooth – which is not the case for proteins. Moreover, different proteins have different sets of viable conformations, which would not be captured by standard gauge equivariant neural networks. (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and a wide variety of applications of group equivariant neural networks. A more comprehensive theoretical analysis of input dependent conditionally invariant neural networks is planned for future work.
¹ We note that this is a simplified version of the actual statement, which is defined on the σ-algebra over C_p denoted by σ(C_p). Our proof holds when c'_p ∈ σ(C_p).
Graph Neural Networks: Graph Neural Networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (Murphy et al., 2019a) (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs. Both molecular graphs (both small molecules and macromolecules) as well as graphs based on intra molecular distances have been used with GNNs, to achieve state of the art for many molecular datasets and tasks (Hu et al., 2020; Morris et al., 2020). Here, we leverage three graph based neural networks as baselines for our model.
Group Equivariant Graph Neural Networks: Group Equivariant GNNs combine continuous symmetries (Lie groups such as SE(3), E(3), SO(3)) with permutation equivariances and have found applications with resounding success on small and large molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021). However, these methods are only able to capture rigid body characteristics of molecules; our framework, while capturing the above Lie group symmetries, is also able to capture input dependent transformations. Employing (Farina & Slade, 2021) would make the neural network excessively invariant and allow the protein to be more flexible (allowing unviable conformations) than it truly is. More recently, they have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work – (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) which sample protein conformations by internal coordinate transformations. However, The goals of the existing MCMC methods are significantly different compared to ours. The existing methods define/inherit a distribution over the conformations (majorly based on the properties of the bonds, etc.) and then aim to sample highly probable conformations from this distribution. Our method, is much simpler and only requires that the chain being used is ergodic and is invariant to the actual form of the distribution. We note that the existing Markov chains can seamlessly be used as drop-in replacements to sample conformations as part of our framework as long as the MCMC is ergodic. We consider studying the impact of different MCMC methods (which sample from different distributions) and their influence on the performance in different tasks as important future work.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021). In this work, while we use GNNs and Group Invariant GNNs as a part of the model, we note that we can equally replace them with CNNs, LSTMs, Transformers and other models used for proteins without any change in the underlying theory.
Tree Construction Method for Molecules: Jin et al. (2018), in their work JTVAE, propose a procedure to construct trees for molecules. We note that JTVAE is a more general purpose approach for constructing trees from molecules and may not be the best suited for proteins, where the backbone atoms are largely observed (experimentally) to be rigid.
Generative Models: Protein conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also recently gained attention, where the goal of the model is to predict the 3D structure of molecules given their input 2D structure - our objective in this work is completely different, but can be used to improve predictions of the aforementioned models. While our model is explicitly not a generative model, our framework can be leveraged towards generative modeling with the help of tools such as noise outsourcing (Chapter 6) (Kallenberg, 2006), and we see this as important future work.
Non-Rigid Body Dynamics: Non-rigid body dynamics of objects has long been studied both by physicists and in computer vision to understand and capture the geometric deformations of objects (Taylor et al., 2010; Masci et al., 2015). To the best of our knowledge, there exists no prior work in deep learning which captures the non-rigidity of protein molecules (which cannot be modeled as Ck manifolds). As important future work, we would like to study the impact of leveraging input dependent conditional invariances for modeling other geometric objects (which are Ck manifolds) as well as applications in images and robotics (e.g., the symmetries of a humanoid are different from those of a tractor).
Unrelated Work with Similar Names: Non-classical and conditional symmetries of solutions to ODEs and PDEs have been discussed in the past (Joseph, 1968; Fushchich & Zhdanov, 1992; Olver & Vorob'ev, 1996). While these works share similar titles, they have very little in common with ours, as we do not deal with jet spaces or manifolds.
A.7 DETAILS ABOUT DATASETS AND TASKS
In this section, we briefly describe each of the datasets and their associated tasks. Information about the splits and licenses is provided in Table 2.
PSR: This task utilizes data from the structural models submitted to the Critical Assessment of Structure Prediction competition (CASP; Kryshtafovych et al. (2019); a blind protein structure prediction competition) to rank protein structures relative to the experimentally determined structure of the protein. The problem is formulated as a regression task, where we predict the global distance test of each structural model with respect to the experimentally determined structure. As prescribed by the dataset authors, the dataset is split by competition year.
MSP: The goal of this task is to identify mutations that stabilize a protein's interactions, which forms an important step towards the design of new proteins. This task is significant as experimental techniques for probing mutations are labor-intensive. Atom3D (Townshend et al., 2020) derives this dataset by collecting single-point mutations from the SKEMPI database (Jankauskaitė et al., 2019) and modeling each mutation into the structure to produce a mutated structure. The learning problem is then formulated as a binary classification task where the goal is to predict whether the stability of the complex increases as a result of the mutation. We employ the same splits as suggested by the dataset authors, wherein the protein complexes are split such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LBA: This task deals with the problem of predicting the strength (affinity) of a candidate drug molecule’s interaction with a target protein. The dataset is constructed using the PDBBind database (Wang et al., 2004; Liu et al., 2015), a curated database containing protein-ligand complexes from the PDB and their corresponding binding strengths (affinities). The task is formulated as a regression task with the goal to predict pK = − log10(K), where K is the binding affinity in Molar units. The splits are created such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LEP: The shape of a protein impacts whether the protein is in an on or off state, which plays an important role in predicting the shape a protein will favor during drug design. This dataset is obtained by curating proteins from several families with both “active" and “inactive" state structures, and modeling in 527 small molecules with known activating or inactivating function using the program Glide (Friesner et al., 2004). The task is formulated as a binary classification task where the goal is to predict whether a molecule bound to the structures will be an activator of the protein's function or not. We use the same split as recommended by the ATOM3D authors.
A.8 EXPERIMENTAL SETUP
The code for the baseline models (GVP-GNN (Jing et al., 2021), E(n) GNN (Satorras et al., 2021) and GNN(GCN) (Townshend et al., 2020; Kipf & Welling, 2016)) was used as provided by the authors (licenses as dictated by the code authors). Our conformer invariance implementation is in PyTorch using Python 3.8. We also leverage networkx to create the directed forests. For all three models we tune the hyperparameters – learning rate (∈ {0.1, 0.01, 0.001, 0.0001}) and mini-batch size (∈ {4, 8, 16, 32, 64}). For the E(n) GNN model – since there have been no previous models for the aforementioned protein tasks – we also tune the number of GNN layers (∈ {4, 5, 6, 7}) as a hyperparameter. The experiments were all performed on Tesla V100 GPUs. For more details, refer to the code provided.
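As a rough illustration of the tuning procedure described above, the sketch below enumerates the grids we searched over; `build_model` and `train_and_evaluate` are hypothetical placeholders standing in for the per-model constructors and training routines, not functions from the released code.

```python
# Minimal sketch of the hyperparameter sweep described above; the callables
# passed in (`build_model`, `train_and_evaluate`) are hypothetical placeholders.
from itertools import product

def grid_search(build_model, train_and_evaluate):
    learning_rates = [0.1, 0.01, 0.001, 0.0001]
    batch_sizes = [4, 8, 16, 32, 64]
    layer_grid = [4, 5, 6, 7]   # number of GNN layers, tuned only for the E(n) GNN
    best_score, best_config = float("-inf"), None
    for lr, bs, n_layers in product(learning_rates, batch_sizes, layer_grid):
        model = build_model(num_layers=n_layers)
        score = train_and_evaluate(model, lr=lr, batch_size=bs)
        if score > best_score:
            best_score, best_config = score, (lr, bs, n_layers)
    return best_config, best_score
```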
A.9 ADDITIONAL RESULTS
In Table 3, we present results on additional datasets and tasks beyond those presented in the main paper. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here, where a GNN which doesn't incorporate any rigid body transformations outperforms all other models.
In Table 4, we present an ablation study, where we provide a strategy in which all the transformations are created from the “gold standard" Xp rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training. From the table, we note that the MCMC method tends to outperform the non-MCMC method, which can be attributed to the guarantees it provides to the learning framework.
A.10 CASE AGAINST USING AVERAGE REPRESENTATIONS IN TRAINING AND FOR REQUIRING CONFORMER INVARIANT REPRESENTATIONS
The case for requiring a single conformer invariant representation can be seen from the fact that different protein conformations, say X1, X2, of the same protein X may be seen during the train and test phases. Without conformer invariant representations, this may lead to different representations and therefore different predictions for the same protein, which is undesirable.
One may argue that an approximately conformer invariant neural network can be learned by averaging the representations of multiple Monte Carlo conformations during training. However, this is computationally expensive, as it would require back-propagating and updating parameters for multiple conformations of every protein in every epoch, leading to a large overhead. On the other hand, our procedure with MCGD uses only one conformation from our Markov chain in every epoch and still ensures convergence to optimal parameters. It is important to note that, during inference, we still average representations over multiple conformations (from our Markov chain) to output conformer invariant representations; due to the MCGD training procedure and the unique steady state of our Markov chain, this ensures conformer invariant representations are achieved, which yield better performance than the current state of the art on multiple datasets and tasks. As stated in the main paper, our framework can work well with other Markov chains as well. We chose the proposed chain purely because it is simpler than the existing methods (in that it doesn't require a distribution over conformations to be assumed) and as such is easier to sample from.
A.11 TRAINING TIME
In Table 5, we present the training time (for 100 epochs) for each of the baseline models as well as for models with our proposed conformer invariance framework addition. From the table, we note that, on average, the increase in training time over 100 epochs is ≈ 3-4 minutes, which is negligible in comparison to the training time without the proposed framework addition. | 1. What is the focus of the paper regarding protein representation learning?
2. What are the strengths of the proposed method, particularly in terms of data augmentation?
3. What are the weaknesses of the paper, especially regarding assumptions and validations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for additional baselines or comparisons to other works in protein representation learning? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Summary of the work: The paper studies protein representation learning from the 3D structure. Recent works have shown that although the backbone of a protein's 3D structure is fixed, the side chains associated with each backbone atom are flexible and may correspond to different conformations.
The authors proposed a method to augment the current 3D structures with conformations randomly sampled from the gold 3D structure. The samples are used as additional data for both training and inference in downstream learning tasks, where the representation of a protein is approximated by the average of the representations of its conformations.
They show that this simple data augmentation approach improves the performance of supervised downstream tasks that rely only on 3D CNN / GCN / GNN models applied to the gold conformations in the ATOM3D benchmark dataset.
Even though methods for sampling conformations using MCMC exist in the literature, a novelty of the work lies in the new sampling approach that constructs, for each backbone atom, a directed tree by breaking the loops in the side chains associated with that backbone atom. Sampling was done over the point sets of the nodes in the trees, starting from the root nodes down to the leaves, where the latter point sets are conditionally dependent on the parents' conformations. An MCMC approach is used to sample conformations of the side chains from these point sets.
Strengths And Weaknesses
Strength This is an interesting and novel idea on conformation sampling.
Good experimental results compared to the baseline that does not leverage augmented data.
Weaknesses
I see a lot of strong assumptions without thorough validation and supporting evidence, both theoretical and empirical.
Step 4 in Algorithm 1: trees are sampled independently. Is there any evidence that the conformations of side chains of different amino acids are independent of each other?
Representing an amino acid as a tree requires breaking cycles (if they exist) at a random point; different breaking points result in different tree structures. The current conformation sampling approach assumes conditional independence between a node and all of its grandparents given the parent nodes. This seems a strong assumption that may lead to an ill-approximation of the true conformation distribution. Is there any way to validate that the distribution of the sampled data fits the right distribution of conformations?
Since evidence about the approximation of the true conformation distribution is not provided, to demonstrate that the proposed conformation sampling approach based on the atom forest is a more effective data augmentation approach than other simple ones, the following simple data augmentation should be considered as a baseline to compare to:
For each backbone atom, keep the backbone atom fixed, and randomly transform all the side-chain nodes associated with that backbone atom using a random rotation matrix.
This baseline will validate that the data augmentation process proposed in the paper is an effective data augmentation approach compared to this simple data augmentation baseline.
A lot of relevant existing work on protein representation using amino acid sequences is ignored. When protein 3D structure data is scarce because of expensive acquisition, a lot of work on protein representation learning relies on linear amino acid sequences. Although this representation of a protein does not say much about the 3D conformations of the protein explicitly, the availability of large databases of amino acid sequences is very relevant for self-supervised learning. The discussion of these works in the related work and the comparison to these works should be conducted carefully. I would suggest the authors compare to at least the following baseline representations trained on large amino acid sequence databases, https://github.com/facebookresearch/esm:
ESM-1b
ESM-MSA-1b
Clarity, Quality, Novelty And Reproducibility
The idea in the paper is original. No source code was included in the submission, so it is hard to reproduce the results. The paper is well written. However, it makes too many assumptions about the distribution of conformations without clear supporting theoretical and empirical evidence (see my comments on weaknesses). A large number of related works on protein representation learning using amino acid sequences were ignored.
ICLR | Title
Conditional Invariances for Conformer Invariant Protein Representations
Abstract
Representation learning for proteins is an emerging area in geometric deep learning. Recent works have factored in both the relational (atomic bonds) and the geometric aspects (atomic positions) of the task, notably bringing together graph neural networks (GNNs) with neural networks for point clouds. The equivariances and invariances to geometric transformations (group actions such as rotations and translations) so far treat large molecules as rigid structures. However, in many important settings, proteins can co-exist as an ensemble of multiple stable conformations. The conformations of a protein, however, cannot be described as input-independent transformations of the protein: Two proteins may require different sets of transformations in order to describe their set of viable conformations. To address this limitation, we introduce the concept of conditional transformations (CT). CT can capture protein structure, while respecting the constraints on dihedral (torsion) angles and steric repulsions between atoms. We then introduce a Markov chain Monte Carlo framework to learn representations that are invariant to these conditional transformations. Our results show that endowing existing baseline models with these conditional transformations helps improve their performance without sacrificing computational efficiency.
1 INTRODUCTION
The literature on geometric deep learning has achieved much success with neural networks that explicitly model equivariances (or invariances) to group transformations (Cohen & Welling, 2016; Maron et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020). Among applications to physical sciences, group equivariant graph neural networks and transformers have specifically found applications to small molecules, as well as large molecules (e.g. proteins) with tremendous success (Klicpera et al., 2020; Anderson et al., 2019; Fuchs et al., 2020; Hutchinson et al., 2021; Satorras et al., 2021; Batzner et al., 2021). Specifically, machine learning for proteins (and 3D macromolecular structures in general) is a rapidly growing application area in geometric deep learning, (Bronstein et al., 2021; Gerken et al., 2021). Traditionally, proteins have been modeled using standard 3D CNNs (Karimi et al., 2019; Pagès et al., 2019), graph neural networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017), and transformers (Vaswani et al., 2017). More recently, several works Jing et al. (2020; 2021); Jumper et al. (2021); Hermosilla et al. (2021) have enriched the above models with neural networks that are equivariant (invariant) to transformations from the Euclidean and rotation groups. While equivariance (invariance) to the transformations in these groups are necessary properties for the model, unfortunately, they are limited to only capture rigid transformations of the input object.
However, these models may not yet account for all invariances of the input pertinent to the downstream task. And the transformations they are invariant to do not depend on the input. For instance, invariance to the Euclidean group restricts the protein representation to act as if the protein were a rigid structure, regardless of the protein under consideration. However, treating proteins as rigid structures may not be optimal for many downstream tasks. And different proteins may have different types of conformations (protein 3D structures with flexible side chains) (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016) . The existing rigid body assumption in protein representations may hurt these methods in datasets & downstream tasks that require protein representations to be invariant to a specific set of protein conformations. For example, for most proteins, regardless of their side chain conformation (as long as viable) under consideration – their protein fold class/ other scalar properties remain the same, their mutation (in)stability remains unaltered, protein ligand binding affinity (apart from changes at the ligand binding site) remain the same, etc.
In light of this limitation of current methods, a question naturally arises: Is it possible to learn conformer-invariant protein representations?
Our Approach: We propose a representation method where the set of symmetries in the task are input-dependent, which we denote conditional invariances. For our specific application, we model every protein as a directed forest, where every amino acid forms a directed tree. We leverage the constructed directed forest of the protein to sample viable protein conformations (where viability is checked with Protein structure validation tools such as Molprobity (Davis et al., 2007; Chen et al., 2010)), which is additionally coupled with a Markov chain Monte Carlo (MCMC) framework to create data augmentations to train the neural network to learn conformer invariant representations.
Our contributions can be summarized as follows:
• We provide guiding principles for defining conditionally invariant representations as inductive biases, which serve as a generalization of group invariant neural networks.
• We then provide a principled strategy to sample conformations for any given protein from the support of its protein conformer distribution (which consists of all its viable protein conformations), capturing their true flexibility. Viable conformations respect domain specific constraints such as those on dihedral angles and steric repulsions, among others. This is particularly useful when the set of viable transformations differs across proteins.
• Further, we develop an MCMC-based learning framework which guides the sampling of protein conformations such that the (asymptotic empirical) average representation obtained over all viable conformations of a protein is identical.
• Finally, we perform an experimental evaluation of our proposal, where endowing baseline models with our proposed strategy (via an implicit data augmentation scheme) shows noticeable improvements on multiple classification and regression tasks on proteins.
2 CONDITIONAL INVARIANCES FOR PROTEINS
The symmetry of an object is the set of transformations that leaves the object invariant. The notion of symmetries, expressed through the action of a group on functions defined on some domain, has served as a fundamental concept of geometric deep learning. In this section, we start by defining the concept of input-dependent conditional symmetries and then proceed to specifically tailor these conditional symmetries so that they are both computationally efficient and useful for representing protein conformations. Some group theoretic preliminaries are presented in Appendix A.2. Definition 2.1 (Conditionally symmetric-invariant functions). A function f : Ω→ R is said to be conditionally symmetric-invariant if
f(tx · x) = f(x) ,∀tx ∈ Sx, ∀x ∈ Ω, where Sx is a set of transformations unique to element x and tx : Ω→ Ω.
It is easy to see that conditionally invariant functions are a generalization of group invariant functions where Sx ≡ G ∀x ∈ Ω where G is a group. The above definition is motivated by the fact that representations for proteins may not necessarily be limited to be invariant just to group actions, but to a more general set of protein specific transformations. We detail this motivation next.
A protein molecule is composed of amino acids (say n amino acids), where each atom in the protein belongs to one amino acid ∈ {1, . . . , n}. Excluding the hydrogen atoms, every amino acid contains four atoms known as the backbone atoms (see Figure 1), and other atoms which belong to the side chain. Traditionally, protein structures are solved by X-ray crystallography or cryo-EM and the obtained structure is normally considered a unique 3D conformation of the molecule. Molecules available via the Protein Data Bank (Berman et al., 2000) generally include only unique sets of coordinates to demonstrate the 3D structures, which lead to the unique answer as ‘gold standard’ in structure prediction, such as protein structure prediction competitions - CASP (Kryshtafovych et al., 2019). Under these assumptions protein side-chain conformations are usually assumed to be clustered into rotamers, which are rigid conformations represented by discrete side-chain dihedral angles.
However, works over the past decade (Harder et al., 2010; Gainza et al., 2012; Miao & Cao, 2016) have observed that — while the backbone atoms of all n amino acids in a protein molecule together (i.e. 4n atoms) form a rigid structure — its side chains can exhibit flexible (continuous and discrete) conformations beyond clustered rotamers. The underlying goal of our work is to capture this inherent flexibility of proteins and to ensure different conformers of the same protein get identical representations (more details in Appendix A.10). The above problem of capturing protein conformations is further compounded by the fact that protein conformations are oftentimes unique to the protein, i.e., the conformations exhibited by a protein are input (protein) specific. The desideratum, therefore, is a model which outputs conformation invariant representations even though the set of conformations (symmetries) exhibited by a protein varies across proteins and access to only a single protein conformer is available.
We denote a protein as a tuple p = (V,Xs,Xp) (data from pdb files) where V is the set of nodes (every atom is a node) in the protein, Xs ∈ Rm×d, d ≥ 1 is the scalar atom feature matrix associated with the atoms, Xp ∈ Rm×3 (where m denotes the number of atoms in the protein) are the positional coordinates associated with the atoms in the protein. Without loss of generality, we number the nodes in V = {1, . . . ,m} following the same ordering of the rows in Xs,Xp.
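For concreteness, the tuple p = (V, Xs, Xp) can be held in a minimal container such as the sketch below; the field names are illustrative assumptions on our part and are not taken from the released code.

```python
# A minimal sketch of the protein tuple p = (V, Xs, Xp); names are illustrative.
from dataclasses import dataclass
import numpy as np

@dataclass
class Protein:
    atoms: list            # V: atom identifiers, indexed 0..m-1
    features: np.ndarray   # Xs, shape (m, d): scalar atom features
    coords: np.ndarray     # Xp, shape (m, 3): atomic coordinates

    def __post_init__(self):
        m = len(self.atoms)
        assert self.features.shape[0] == m and self.coords.shape == (m, 3)
```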
Definition 2.2 (Rigid Backbone Protein Conformations). For an m-atom protein p with atomic coordinates Xp ∈ Rm×3 (recall that by definition p contains information about Xp), let Cp ⊂ Rm×3 denote the set of viable conformations of p which keep the positional coordinates associated with the backbone fixed. We use Tp ⊂ Rm×3×3 (where a 3×3 matrix is associated with every single atom) to denote the set of non-isometric transformations of p which yield the set Cp, i.e., ∀cp ∈ Cp, ∃tp ∈ Tp s.t. for an atom with index i ∈ {1, . . . ,m}, the new position of atom i is Xp[i] tp[i]^T = cp[i], with Xp[i] ∈ R1×3, tp[i] ∈ R3×3.
Tp forms a set of transformations which acts on the protein atomic coordinates via matrix multiplication. Two elements of Tp (with corresponding matrices of individual atoms and then aggregating for the protein as a whole) may or may not combine via matrix multiplication (as the binary operation) to result in another element of Tp i.e. matrix multiplication is not necessarily closed in Tp. A concrete example of the non group structure of protein conformations is provided in Appendix A.3.
Unfortunately, sampling transformations from Tp to obtain viable protein conformations is an unattainable task, since this would entail first sampling a transformation from Rm×3×3 and then verifying if the transformation is viable (and the vast majority of transformations are not viable). We will address this issue using an MCMC method - which is then combined with MCGD training of the neural network (Sun et al., 2018) to learn conformation invariant protein representations where different viable conformations are obtained via MCMC are used in every batch. Markov Chain Gradient Descent (MCGD) is a variant of Stochastic Gradient Descent (SGD), where the samples are drawn from the trajectory of a Markov Chain. We note that, while there have been prior Monte Carlo and MCMC based techniques for sampling protein conformations (Boomsma et al., 2013; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009), the overarching goals of the existing MCMC methods are significantly different compared to ours. The existing methods define a distribution over the conformations (based on the properties of the bonds, etc.) and then sample highly probable conformations from this distribution. Our method, is much simpler and only requires that the chain being used is ergodic and is invariant to the actual form of the distribution. Therefore, we devise a simple chain whose transitions are easier to sample and don’t require us to compute conditionals of an energy based model. However, we also note that the existing Markov chains can be used as drop-in replacements to sample conformations as part of our framework. Before specifying our MCMC procedure, we specify the invariance requirements for learning protein representations.
A symmetric-invariant function for proteins (per Definition 2.1), should be invariant to (i) transformations which change its atomic coordinates to any of its rigid backbone conformations in Cp (ii) transformations which change the atomic coordinates of all its atoms via rigid body transformations — group actions such as rotations/ translations (iii) a combination of the two above which transforms it into one of its viable conformations and is then further acted upon by rigid body transformations.
3 OBTAINING VIABLE CONFORMATIONS
Before we describe our MCMC procedure to sample atomic coordinates from the set of rigid backbone conformations , we need a way to sample from the local neighborhood of a conformation.
[Figure 2: an example directed tree, with groups $G_i = SO(3)$ for all $i \neq 1$.] We note that in protein molecules not all transformations about a node would be allowed due to steric repulsions between atoms as well as potential overlaps of atoms.
3.1 A SIMPLE PROTEIN CONFORMATION SAMPLING STRATEGY
Akin to Irbäck & Mohanty (2006), to efficiently sample a candidate conformation from Cp, we will follow a multi-step approach: First, we (i) construct a directed tree for every amino acid in the protein to capture the inherent flexibility in protein structures. Then, we (ii) leverage a directed forest of the protein (described next) to transform its atomic coordinates and check for viability.
Part 1. Directed forest construction. Using the input protein molecule, we construct a directed forest using the following three step procedure:
1. Each amino acid in the protein gets its own directed tree. The atoms in the amino acid's backbone form the base (root) nodes (i.e., there can be multiple base nodes in every amino acid). This is illustrated as node 1 in Figure 2's tree. The set of base nodes of all the amino acids in the protein are jointly referred to as the base nodes of the protein (or equivalently of the directed forest).
2. The atoms adjacent to the amino acid's backbone via a covalent bond become their immediate children in the tree. For example, the node SC1 forms the child of αC in Figure 1.
3. The other covalent bonds of the children establish the grandchildren of the root, and so on, until all the atoms in the amino acid are a part of the tree. For example, SC2 and SC3 become the children of the node SC1 in the directed tree constructed for Figure 1.
Figure 2 is an illustration of some directed tree constructed with the above procedure, where node 1 is the root node, with nodes 2, 4 as its children and so on. It is important to note that, the above procedure ensures that in the constructed tree, each (non-root) node has a single parent and may have arbitrarily many children. Further, cycles/rings which are present in the molecule are broken on the basis of bond length, i.e., larger bonds are chosen before smaller bonds. Ties between same-length bonds are broken arbitrarily. While multiple directed forests can be created for a given protein since bonds of equal length are broken arbitrarily, in our implementation, given a protein, the tree is fixed because the directed forest for every protein is created exactly once (in the preprocessing step) - where the ties are broken deterministically based on their ordering in the “pdb file".
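A possible realization of this construction (using networkx, as in our implementation) is sketched below; it assumes, for each amino acid, a covalent bond graph with a 'length' edge attribute and a list of backbone atom indices, and the exact tie-breaking may differ from the released code.

```python
# Sketch of the per-amino-acid directed tree construction (Part 1).  Assumes a
# networkx Graph whose nodes are atom indices, whose edges are covalent bonds
# carrying a 'length' attribute, plus the amino acid's backbone atom indices.
import networkx as nx

def build_directed_tree(bond_graph: nx.Graph, backbone_atoms: list) -> nx.DiGraph:
    tree = nx.DiGraph()
    tree.add_nodes_from(bond_graph.nodes)
    visited = set(backbone_atoms)             # backbone atoms are the root nodes
    frontier = list(backbone_atoms)
    while frontier:                           # BFS over covalent bonds
        parent = frontier.pop(0)
        # visit longer bonds first ("larger bonds are chosen before smaller bonds")
        children = sorted(bond_graph.neighbors(parent),
                          key=lambda c: -bond_graph[parent][c]["length"])
        for child in children:
            if child not in visited:          # rings/cycles are broken here
                visited.add(child)
                tree.add_edge(parent, child)
                frontier.append(child)
    return tree
```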
Next, we shall look at how the atomic coordinates are transformed using the directed tree.
Part 2. Conditional transformations of atomic coordinates. Let Xp be the available input atomic coordinates of the protein. To obtain a new candidate protein conformation (say X′p ∈ Rm×3), we uniformly sample one of the directed trees from the forest. Next, we transform the atomic coordinates of the subset of atoms in this directed tree. This set is denoted by A ⊂ V . That is, in the candidate conformation, X′p[i] = Xp[i], ∀i ∈ V \A — or equivalently tp[i] = I, ∀i ∈ V \A. Starting with the root node, we use breadth first search to traverse the directed tree and update all the descendants of the node currently being considered. For the backbone atoms, we leave X′p[i] = Xp[i] (or equivalently tp[i] = I), i.e., atomic coordinates are unchanged for backbone atoms. Next, we use pointed sets to describe approximate transformations on a given protein through a directed tree.
Definition 3.1 (Pointed sets). A pointed set is an ordered pair (X,x0), where X is a set and x0 ∈ X is called the basepoint.
For instance, let Mi = {Xp[j] : j ∈ Ni}, where Ni is the set containing i and all its descendants in its directed tree. Then, (Mi,Xp[i]) is a pointed set.
Let the parent of node i be denoted by ◦ and let $G_i^{(g_\circ \cdot X_p[\circ])}$ (in practice, SO(3)) be the group from which actions $g_i \in G_i^{(g_\circ \cdot X_p[\circ])}$ are sampled uniformly — actions that can transform the positional coordinates of node i and its descendants about its parent in the directed tree, where $g_\circ \cdot X_p[\circ]$ is the transformed atomic coordinate of the parent of node i (we use BFS, i.e., a top down approach). In the case of the root node (for example node 1 in Figure 2), the actions are just drawn from $G_{\mathrm{root}}$ (or $G_1$ in Figure 2).
The associated transformations of the atomic coordinates of the atoms in $N_i$ can be given by the mapping $h_i : (M_i, X_p[i]) \to \left(\mathbb{R}^{|N_i|\times 3},\; g_\circ \cdot X_p[\circ] + g_i \cdot (X_p[i] - X_p[\circ])\right)$, where the coordinates of node $j \in N_i$ are updated as:
$$X'_p[j] \leftarrow g_\circ \cdot X_p[\circ] + g_i \cdot (X_p[j] - X_p[\circ]) \qquad (1)$$
If the node $j \neq i$, its coordinates $X'_p[j]$ can be further updated as we traverse the tree. When $G_i^{(g_\circ \cdot X_p[\circ])} = SO(3)$ for all $i \in V$, every node in the directed tree can be rotated about its parent, as shown in Figure 3 (Appendix A.4), which depicts the side chain of the amino acid shown in Figure 1.
Algorithm 1 Sampling a viable conformation of cp
1: Input: Conformation cp of protein p
2: Obtain the directed forest corresponding to the protein, with n directed trees, where n is the number of amino acids in the protein.
3: repeat
4: Sample y ∼ UNIF({1, 2, · · · , n}) and obtain the corresponding directed tree
5: Traverse the directed tree via BFS and update atomic coordinates via Equation (1) to obtain c′p, a candidate conformation of cp — where group actions are always sampled uniformly from SO(3)
6: Check if c′p is a valid protein conformation via protein structure validation tools
7: until c′p is a valid conformation
8: Return: c′p, a valid conformation of cp
However, each candidate X′p obtained via the above transformation process may not belong to Cp. To that end, protein structure validation tools may be used to check the validity of these candidates. Molprobity (Davis et al., 2007; Chen et al., 2010; Williams et al., 2018) is such a tool, which outputs scores corresponding to different metrics (such as the number of dihedral (torsion) angles outside the allowed threshold, the number of atom-atom clashes arising due to steric repulsions, etc.) which can be compared with the originally provided conformation Xp. Note that while Molprobity is not perfect and we may end up developing models more invariant than just to viable conformations, alternative (more precise) tools, including those which allow for the evaluation of molecular force fields, can be used to check validity. Additionally, the goal of this work is not to use Molprobity or other protein structure validation tools as part of generative models, and a more invariant model does not necessarily affect performance on unseen but valid protein test data.
Our algorithm to sample viable conformations is summarized in Algorithm 1. The repeat-loop proposes a conformation in the neighborhood of the current conformation cp which is only accepted when the proposed conformation c′p is a valid conformation. As such it is an acceptance-rejection sampling algorithm. We will see later that the actual sampling probability distribution is not required to be known by our procedure. Our MCMC procedure defined next only requires Algorithm 1 to follow certain conditions which we argue it follows.
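Putting the pieces together, the acceptance-rejection loop of Algorithm 1 can be sketched as below; it reuses the `sample_candidate` sketch above, and `is_valid_conformation` is a hypothetical stand-in for an external structure-validation call such as Molprobity.

```python
# Acceptance-rejection sketch of Algorithm 1.  `forest` is the list of
# per-amino-acid directed trees; `is_valid_conformation` is a hypothetical
# wrapper around a structure validation tool (e.g., Molprobity).
import random

def sample_viable_conformation(forest, coords, is_valid_conformation):
    while True:
        tree = random.choice(forest)                 # step 4: pick one amino acid
        candidate = sample_candidate(tree, coords)   # step 5: BFS + Equation (1)
        if is_valid_conformation(candidate):         # step 6: validity check
            return candidate                         # step 8: accepted conformer
```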
3.2 SAMPLING CONFORMERS VIA MCMC
We now describe the MCMC procedure that, starting at any conformer configuration $c_p^{(0)} \in C_p$, will in steady state sample conformations according to a unique stationary distribution which depends on the protein p, where Cp is the set of all viable conformations of p as defined in Definition 2.2.
To that end, we first define the transition kernel of the Markov chain. Algorithm 1 samples a conformation c′p in the neighborhood of a given conformation cp using a directed forest. The neighborhood N(cp) of the conformation cp is loosely defined as the set of all possible conformations c′p which may result from running Algorithm 1 with cp as input. We denote the probability distribution induced by Algorithm 1 as the transition kernel κ(c′p|cp), which has support N(cp). Definition 3.2 (Conformer Sampling Markov Chain (CSMC) Φp). We define the conformer sampling Markov chain Φp as a time-homogeneous Markov chain over the state space Cp with transition kernel κ as defined above, where Cp is the set of valid conformations associated with a given protein p as defined in Definition 2.2.
The consistency of our learning procedure does not depend on the precise definition of κ. However, our procedure relies on the fact that any conformer c′p ∈ Cp can be sampled by κ in a finite number of steps, starting from a conformer cp. We use this fact to show that the chain Φp converges to a unique stationary distribution regardless of the initial conformer (all proofs in Appendix A.5). Proposition 3.3. Given the CSMC $\Phi_p$ from Definition 3.2, whose transitions are governed by $\kappa$, which is implicitly defined by Algorithm 1, for any pair of conformers $c_p, c'_p \in C_p$, there exists $\tau_p < \infty$, independent of $c_p$, such that $P^{\tau_p}_{\Phi_p}(c_p, c'_p) > 0$, where $P^{\tau_p}_{\Phi_p}$ is the $\tau_p$-step transition probability.
Next, we show the existence of a unique steady state distribution of our Markov chain. Proposition 3.4. The CSMC $\Phi_p$ defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution $\pi_p$ such that for all $c_p \in C_p$, $\|P^n_{\Phi_p}(c_p, \cdot) - \pi_p(\cdot)\| \leq C R^n$, where $C < \infty$ and $R < 1$ are constants that depend on $\Phi_p$, $P^n_{\Phi_p}$ is the $n$-step transition probability, and $\|\cdot\|$ is the $\ell_1$ norm.
Given that we now have the ability to draw samples from a Markov chain which achieves a unique stationary distribution πp on Cp, we shall leverage this next, in our learning framework to learn conformer invariant representations of proteins.
4 LEARNING FRAMEWORK
We shall employ a learning strategy, where we use the viable conformations obtained via the Markov chain to learn a function which outputs conformer invariant representations.
Let $\mathcal{D} = \{(x_j, y_j)\}_{j=1}^{N}$ be the input data (where we have a single conformer for every protein in the dataset). We shall consider a single data point $(x_j, y_j)$ henceforth, where $x_j = (V, X_s, X_p)$ is the input protein. We consider a supervised learning setting where $y_j$ is the associated target. We consider both classification and regression tasks.
Let $C_j = \{c_j^i\}$ be the set of viable conformations of protein $x_j$. We only consider viable conformations which differ only in the atomic coordinates matrix $X_p$. For a given protein $x_j$, we shall use $x_j^i$ to denote protein $x_j$ but with the $X_p$ of the original protein modified by $c_j^i$, and use $S_j = \{x_j^i\}$ to denote the set of all viable $x_j^i$, i.e., the state space.
Let $f : \Omega \to \mathbb{R}^d$, $d > 0$, be any function with learnable parameters $\theta^f$ (e.g., any neural network model such as a 3D CNN, GNN, Transformer, LSTM, etc.) which takes a protein (in the form $(V, X_s, X_p)$) as input and outputs protein representations. The function $f$ need not necessarily output conformer invariant representations of the protein $x_j$. Then, a simple way to obtain conformer invariant representations of protein $x_j$ (apart from using trivial functions such as a constant function or a function independent of $X_p$) is to compute its expected value over all its viable conformations. We shall use $\bar{f}$ to denote a function which outputs conformer invariant representations of a protein.
$$\bar{f}(x_j; \theta^f) = \mathbb{E}_{X \sim \pi_j}\!\left[f(X; \theta^f)\right] \qquad (2)$$
where $\pi_j(\cdot)$ is the steady state distribution of the Markov chain associated with the protein $x_j$. As such, $\bar{f}(x_j; \theta^f) = \bar{f}(x'_j; \theta^f)$ for any viable protein conformation $x'_j \in S_j$. Subsequently, to learn an optimal $\bar{f}$ (which we denote by $\bar{f}^\star$), we wish to minimize the loss, defined as follows:
$$L(\mathcal{D}; \theta^f, \theta^\rho) = \frac{1}{N} \sum_{j=1}^{N} \hat{L}\!\left(y_j, \rho\!\left(\bar{f}(x_j; \theta^f); \theta^\rho\right)\right) = \lim_{k \to \infty} \frac{1}{N} \sum_{j=1}^{N} \hat{L}\!\left(y_j, \rho\!\left(\frac{1}{k} \sum_{i=1}^{k} f(x_j^{i}; \theta^f); \theta^\rho\right)\right) \qquad (3)$$
where $\hat{L}$ is a convex loss function (e.g., cross entropy loss), $\rho$ is some differentiable function with parameters $\theta^\rho$ (in practice, an MLP), and $\lim_{k \to \infty} \frac{1}{k} \sum_{i=1}^{k} f(x_j^{i}; \theta^f)$ can be employed as an asymptotically unbiased and consistent estimate of $\bar{f}$, where $x_j^{i}$ is the $i$-th sample from the Markov chain corresponding to the protein $x_j$.
Since Equation (3) is computationally intractable, we employ a surrogate for the loss given by:
$$J(\mathcal{D}; \theta^f, \theta^\rho) = \lim_{k \to \infty} \frac{1}{N} \sum_{j=1}^{N} \frac{1}{k} \sum_{i=1}^{k} \hat{L}\!\left(y_j, \rho\!\left(f(x_j^{i}; \theta^f); \theta^\rho\right)\right) \qquad (4)$$
Observe that in Equation (4), the expectation over the conformations is now outside the $\hat{L}$ and $\rho$ functions while the objective still remains conformation invariant (however, the optimal parameters corresponding to minimizing J are different from those minimizing L). Following Murphy et al. (2019b;a), we note that, when $\rho$ is the identity function, Equation (4) serves as an upper bound for Equation (3) via Jensen's inequality. However, learning representations by averaging over multiple conformations (in training) is still computationally expensive (back-propagation and high variance issues). Next, we show how MCGD (Sun et al., 2018) can be leveraged to make this tractable (more details in Appendix A.10).
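For completeness, the Jensen step alluded to above (with $\rho$ the identity and assuming $\hat{L}$ is convex in its second argument) reads:

```latex
% Jensen's inequality with \rho = \mathrm{id} and \hat{L} convex in its second argument:
\hat{L}\Big(y_j,\ \tfrac{1}{k}\textstyle\sum_{i=1}^{k} f(x_j^{i};\theta^f)\Big)
\;\le\; \tfrac{1}{k}\textstyle\sum_{i=1}^{k} \hat{L}\big(y_j,\ f(x_j^{i};\theta^f)\big),
\qquad\text{so that in the limit}\quad L(\mathcal{D};\theta^f) \le J(\mathcal{D};\theta^f).
```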
For simplicity, we next show convergence properties using the full batch setting. Note that this procedure and the proofs are equally applicable in the mini-batch setting with minor modifications.
Full-batch training: We consider the data as a single batch B = {(x1, y1), ..., (xj , yj), ..., (xN , yN )} and for a data point (xj , yj), denote the corresponding steady state distributions of the corresponding Markov chains using πj and the state space as Sj .
We denote all learnable parameters of $\rho \circ f$ as $\theta \equiv (\theta^f, \theta^\rho)$ and consider $\theta \in \Theta \subseteq \mathbb{R}^m$, $m > 0$, such that the function $\rho \circ f$ may be non-convex for some values of $\theta$. We consider a sequence of step sizes $(\gamma_k)_{k=0}^{\infty}$ which satisfy:
$$\sum_k \gamma_k = +\infty, \qquad \sum_k \ln k \cdot \gamma_k^2 < +\infty \qquad (5)$$
and follow a gradient scheme where parameters are updated as:
$$\theta_k = \theta_{k-1} - \gamma_k Z_k, \quad k > 0 \qquad (6)$$
where $Z_k = \frac{1}{N} \sum_{j=1}^{N} \nabla_\theta \hat{L}\!\left(y_j, \rho\!\left(f(x_j^{k-1}; \theta^f_{k-1}); \theta^\rho_{k-1}\right)\right)$ and $x_j^{k-1}$ is the $(k-1)$-th sample from the Markov chain corresponding to the data point $(x_j, y_j)$, and we employ the Markov chain gradient descent procedure (Sun et al., 2018) where the loss in epoch $k$, $k > 0$, of training is obtained by
$$\hat{J}_k = \frac{1}{N} \sum_{j=1}^{N} \hat{L}\!\left(y_j, \rho\!\left(f(x_j^{k-1}; \theta^f_{k-1}); \theta^\rho_{k-1}\right)\right)$$
and minimizes the surrogate loss given in Equation (4) when k is sufficiently large so that the Markov chain has converged to its unique steady state distribution.
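A minimal sketch of this training loop (full-batch form, PyTorch-style) is given below; `model`, `rho`, `advance_chain`, and `loss_fn` are placeholders standing in for the base encoder, the MLP head, one step of Algorithm 1, and the task loss, respectively, and are assumptions rather than names from the released code.

```python
# Sketch of Markov chain gradient descent (Equations (5)-(6)): at epoch k each
# protein's chain is advanced by one step and a single gradient step is taken.
import torch

def mcgd_train(model, rho, proteins, targets, advance_chain, loss_fn,
               num_epochs=100, lr0=1e-3):
    params = list(model.parameters()) + list(rho.parameters())
    optimizer = torch.optim.SGD(params, lr=lr0)
    chain_states = list(proteins)                  # x_j^{(0)}: the given conformers
    for k in range(1, num_epochs + 1):
        for group in optimizer.param_groups:
            group["lr"] = lr0 / k                  # step sizes satisfying Eq. (5)
        chain_states = [advance_chain(x) for x in chain_states]  # one MCMC step each
        optimizer.zero_grad()
        preds = torch.stack([rho(model(x)) for x in chain_states])
        loss = loss_fn(preds, targets)             # \hat{J}_k over the full batch
        loss.backward()
        optimizer.step()
    return model, rho
```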
Next, we list the set of assumptions we make to ensure convergence of our conformer invariant MCGD procedure. Assumption 4.1. We make the following assumptions:
1. For any $\theta \in \Theta$ and $x_j^k \in S_j$, the function $f$ is differentiable, $\forall j$.
2. $\sup_{\theta \in \Theta,\, x_j^k \in S_j} \{\|\nabla_\theta\, \rho \circ f(x_j^k)\|\} < +\infty$, i.e., the gradients are bounded.
3. $\forall x_j^k \in S_j$, $\forall \theta_1, \theta_2 \in \Theta$, $\|\nabla_{\theta_1}\, \rho \circ f(x_j^k) - \nabla_{\theta_2}\, \rho \circ f(x_j^k)\| < L \|\theta_1 - \theta_2\|$ for some $L \geq 0$, i.e., the gradients are $L$-Lipschitz.
4. $\mathbb{E}_{x_j^k \sim \pi_j}[\nabla_\theta\, \rho \circ f(x_j^k)] = \nabla_\theta\, \mathbb{E}_{x_j^k \sim \pi_j}[\rho \circ f(x_j^k)]$.
Next, we formally state the convergence of our optimization procedure to optimal parameters $\theta^\star$ which yield conformer invariant protein representations. Proposition 4.2. Let the step sizes satisfy (5), the function parameters $\theta$ be updated as in (6), and Assumption 4.1 hold; then the MCGD optimization enjoys almost sure convergence to the optimal $\theta$.
The use of Markov chain gradient descent procedure to optimize the conformer invariant learning procedure optimizes the objective J in Equation (4), and thus has the following implication on how outputs should be calculated at inference time:
Inference time: We estimate $\bar{f}$ using an empirical average of $f$ evaluated over conformations (in practice, we average final layer [before softmax] representations) visited by the Markov chain:
$$\hat{f}_k(x_j; \theta^f) = \frac{1}{k} \sum_{i=1}^{k} f(x_j^{(i)}; \theta^f), \qquad (7)$$
where $x_j^{(i)}$ is the $i$-th state visited by the Markov chain started at $x_j$. Since the Markov chain has a unique steady state distribution, $\hat{f}_k(x_j; \theta^f)$ is both asymptotically unbiased and consistent. That is,
$$\lim_{k \to \infty} \hat{f}_k(x_j; \theta^f) \stackrel{a.s.}{=} \bar{f}(x_j; \theta^f).$$
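The inference-time estimator of Equation (7) then amounts to a simple running average over chain samples; a sketch (again with `advance_chain` as a placeholder for one step of Algorithm 1) follows.

```python
# Sketch of Equation (7): average final-layer representations over k
# conformations visited by the Markov chain started at x_j.
import torch

@torch.no_grad()
def conformer_invariant_representation(model, x_j, advance_chain, k=32):
    reps, state = [], x_j
    for _ in range(k):
        state = advance_chain(state)        # next conformation from the chain
        reps.append(model(state))           # f(x_j^{(i)}; theta^f)
    return torch.stack(reps).mean(dim=0)    # estimate of \bar{f}(x_j; theta^f)
```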
5 RESULTS
We evaluate our proposed augmentation procedure on multiple different tasks from the ATOM3D dataset (Townshend et al., 2020). Results are provided in Table 1.
TASKS ON ATOM3D DATASETS
ATOM3D (Townshend et al., 2020) is a unified collection of datasets concerning the 3D structure of proteins. These datasets are specifically designed to provide a benchmark for ML methods which operate on 3D molecular structure, and represent a variety of important structural, functional, and engineering tasks. As part of our evaluation, we perform the (a) Protein Structure Ranking (PSR) (b) Ligand Efficacy Prediction (LEP) (c) Protein Mutation Stability Prediction (MSP) (d) Ligand Binding Affinity (LBA). We describe each of the tasks in detail in Appendix A.7. For all the datasets and tasks we report the same metrics as proposed by the authors in Townshend et al. (2020).
MODEL, BASELINES AND DISCUSSION
We endow the vector-gated GVP-GNN model (Jing et al., 2021), a GNN using GCN layers (Kipf & Welling, 2016), and the E(n) GNN (Satorras et al., 2021) with conditional transformations from our proposed MCMC method. It is important to note that when the positions of the atoms are altered, the protein graphs which are inputs to the base encoders are changed appropriately in every epoch. As an ablation study, we also provide a strategy where all the transformations are created from the “gold standard" Xp rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training, in the Appendix.
Looking at Table 1, we note that models augmented with our proposed conformer invariance framework in general outperform the baseline models on multiple tasks for which conformer invariance is a requirement. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here, where a GNN which doesn't explicitly incorporate any atomic positional information outperforms all other models with information about atomic coordinates.
Additional results and ablation studies are presented in Appendix A.9.
COMPUTATIONAL COMPLEXITY & SCALABILITY:
The directed forest for every protein is computed only once as a preprocessing step (a fixed ≈ 5-10 minutes per dataset). At every epoch, for a given protein, we select only one among all its constituent amino acids & perform a sequence of 3×3 matrix multiplications to obtain a new conformer. If k denotes the number of atoms in the side chain of an amino acid, and the height of a directed tree is on the order of log(k), the computational complexity to obtain a conformation is O(k log k) (where k ≈ 15 on average in our datasets), & this can be run in parallel for all proteins in a mini-batch. Molprobity (run in parallel again; it currently runs externally on a web server via a command line call) adds a 2s delay per minibatch (we could reduce this to milliseconds if we integrated Molprobity into our code). The rejection rate from Molprobity is also very small (less than 1 reject on average across 100 Molprobity calls). As an example, training (100 epochs) GVP-GNN on MSP requires ≈ 45 min, and on LBA requires ≈ 34 min. In both cases our method requires an additional 4 min overhead. A complete training time table (on a Nvidia Tesla V100 GPU) for all models & datasets can be found in Appendix A.11.

6 RELATED WORK

Here, we present a high level summary of works related to ours. A more detailed comparison to the works mentioned below (and others) is presented in Appendix A.6.
Group Equivariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous symmetries of elements (e.g. images) by introducing group theoretic constraints as inductive biases in the neural network. While group equivariant neural network capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and applications of group equivariant neural networks.
Graph Neural Networks: Graph Neural Networks (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real world problems which can be formulated as graphs and currently serve as the state of the art in majority of node and graph classification/ regression tasks. Graph neural networks work on the principles of permutation equivariance/ invariances (also groups) and have exploited a message passing framework to learn powerful and expressive representations of nodes/ graphs.
Group Equivariant Graph Neural Networks: These networks combine continuous symmetries (lie groups) with permutation equivariances and has found applications with resounding success on small molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021) which exhibit rigid body characteristics. Employing (Farina & Slade, 2021), would make the neural network excessively invariant and allow the protein to be more flexible (allows unviable conformations) than it truly is. More recently, they have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) which samples protein conformations by defining/inheriting a distribution over the conformations (largely based on the properties of the bonds, etc.) and sampling from it. These methods can seamlessly be plugged into our framework as long as the MCMC is ergodic.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021).
Generative Models: Conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also gained attention recently with the goal to predict 3d structure of molecules given their 2d structure - our objective in this work is very different, but can be used to improve predictions of the aforementioned models.
7 CONCLUSIONS
This work addresses the limitations of current protein representation learning methods which are unable to learn conformer invariant representations — and hence unable to capture the inherent flexibility present in protein side chains pertinent to many downstream tasks. To address these, we introduced conditional transformations to capture protein structure, while respecting the restrictions posed by constraints on dihedral (torsion) angles and steric repulsions between atoms. Subsequently, we introduced a Markov chain Monte Carlo based framework to learn representations that are invariant to these conditional transformations.
A APPENDIX
A.1 COMMONLY USED ACRONYMS IN THE PAPER
1. G - group
2. Sx - Set of transformations unique to element x
3. tx - Transformation action which transforms the element x
4. m - Number of atoms in the protein under consideration
5. n - Number of amino acids in the protein under consideration
6. Xs - Matrix of Scalar features associated with atoms in the protein p
7. Xp - Matrix of atomic coordinates associated with atoms in the protein p
8. Cp - Set of conformations of protein p
9. SO(3) - Group of all rotations in 3D space
A.2 GROUP THEORY PRELIMINARIES
Definition A.1 (Group). A group is a setG equipped with a binary operation · : G×G→ G obeying the following axioms:
• for all g1, g2 ∈ G, g1 · g2 ∈ G (closure).
• for all g1, g2, g3 ∈ G, g1 · (g2 · g3) = (g1 · g2) · g3 (associativity).
• there is a unique e ∈ G such that e · g = g · e = g for all g ∈ G (identity).
• for all g ∈ G there exists g−1 ∈ G such that g · g−1 = g−1 · g = e (inverse). Definition A.2 (Group invariant functions). Let G be a group acting on vector space V . We say that a function f : V → R is G-invariant if f(g · x) = f(x) ∀x ∈ V, g ∈ G. Definition A.3 ((Left) Group Action). For a group G with identity element e, and X is a set, a (left) group action α of G on X is a function α : G×X → X that satisfies the following two conditions:
1. Identity: α(e, x) = x, ∀x ∈ X
2. Compatibility: α(g, α(h, x)) = α(gh, x)
We will use a short hand of g · x for α(g, x) when the action being considered is clear from context.
A.3 EXAMPLE DEMONSTRATING NON GROUP STRUCTURE OF A SET OF PROTEIN CONFORMATIONS
Consider X to be a protein with n atoms and m amino acids and let the set of viable conformations of X as {X1, X2, . . . Xi, . . . Xp}. Let X1 be the conformation available in our dataset.
In each step of the transition, only a few atoms (≪ n) — the atoms in a single amino acid of the entire protein, where the protein is in general made up of many hundreds of amino acids (m) — are subjected to an action from the SO(3) group. The positions of all other atoms (outside that amino acid) in the side chain remain unaltered. So, in the $\mathbb{R}^{n\times 3\times 3}$ matrix, most of the 3×3 entries are the 3-dimensional identity matrix.

Let $T_1^i \in \mathbb{R}^{n\times 3\times 3}$, where $i \in \{1, 2, \ldots, p\}$, be the transformation which yields conformation $X_i$ from $X_1$.

Now consider $T_1^2$ (subscript 1 since we start from conformation $X_1$) which takes $X_1$ to $X_2$, and $T_1^3$ which takes $X_1$ to $X_3$, where $T_1^2$ and $T_1^3$ do not act on the same amino acid in the protein. Now consider the case where performing $T_1^2$ and $T_1^3$ (in either order) sequentially would result in atoms in two different amino acids overlapping (or coming too close to each other, causing steric repulsion), therefore resulting in a non-viable conformation; i.e., a composition of $T_1^2$ and $T_1^3$ acting on $X_1$ would not be present in the set of all allowed transformations $T_1 = \{T_1^1, \ldots, T_1^i, \ldots, T_1^p\}$. Therefore the set $T_1$ is not closed and doesn't form a group.

Also, for two different conformations $X_1$ and $X_2$, their allowed transformation sets $T_1, T_2$ will not be identical (in the above example $T_1^3 \notin T_2$). It is also easy to construct a case where $\bigcup_{i=1}^{p} T_i$ is not a group, and actions from this union acting on any given $X_i$ will not necessarily result in a viable/valid conformation.
A.4 FLEXIBILITY ALLOWED BY OUR PROPOSED MODEL
Our proposed model allows every node in the directed tree to be rotated about its parent. For example, for the side chain shown in Figure 1 (main paper), the allowed flexibility is shown in Figure 3.
A.5 PROOFS OF PROPOSITIONS
First, we restate and prove Proposition 3.3.
Proposition A.4. Given the CSMC $\Phi_p$ from Definition 3.2, whose transitions are governed by $\kappa$, which is implicitly defined by Algorithm 1 as described above: for any pair of conformers $c_p, c'_p \in C_p$, there exists $\tau_p < \infty$, independent of $c_p$, such that $P^{\tau_p}_{\Phi_p}(c_p, c'_p) > 0$, where $P^{\tau_p}_{\Phi_p}$ is the $\tau_p$-step transition probability.
Proof. Proof by construction. We prove the proposition by showing that one can construct a path $(c_p^{(1)} = c_p, \ldots, c_p^{(t)} = c'_p)$ such that $c_p^{(i)} \in C_p$ and $\kappa(c_p^{(i+1)} | c_p^{(i)}) > 0$, for all $0 < i < t$, and $t \leq \tau_p$. The trivial case where $c_p \equiv c'_p$ is proved since every group contains the identity element — sampling the identity element for every node in the directed tree yields the same conformation. Since we consider only non backbone transforming conformations, for the non-trivial case, a maximum of $m - 4n$ atoms can differ in positions between any two conformations, where $m$ is the number of atoms in the protein and $n$ is the number of amino acids in the protein. Both $m, n$ are finite and we are dealing with continuous conformers (and continuous group actions about every node — groups are closed under their associated binary action, and SO(3) is path connected). So we can traverse between conformers (until we reach the desired conformer) sequentially in a finite number of steps, by using the constructed directed forest: selecting an amino acid (which doesn't violate the viability), fixing the positions of all other amino acids in the protein, and rotating the side chain atoms in a single conformer to the final desired state. While this process may result in some side chains being visited multiple times (due to viability constraints), considering continuous conformers and the SO(3) group (which is path connected) ensures we will never reach a state of deadlock. The second condition is satisfied because every group action has an inverse and we only use transformations from SO(3) for every node in the directed trees.
Next, we restate and prove Proposition 3.4. Proposition A.5. The CSMC $\Phi_p$ defined in Definition 3.2 is uniformly ergodic if Proposition 3.3 is satisfied. Specifically, there exists a unique steady state distribution $\pi_p$ such that for all $c_p \in C_p$, $\|P^n_{\Phi_p}(c_p, \cdot) - \pi_p(\cdot)\| \leq C R^n$, where $C < \infty$ and $R < 1$ are constants that depend on $\Phi_p$, $P^n_{\Phi_p}$ is the $n$-step transition probability, and $\|\cdot\|$ is the $\ell_1$ norm.
Proof. By Proposition 3.3, $\Phi_p$ satisfies Doeblin's condition as defined in page 396 of Meyn & Tweedie (2012), which states that for $c_p, c'_p \in C_p$, $P^{\tau_p}_{\Phi_p}(c_p, c'_p) > \epsilon$ for some $\epsilon > 0$¹. The uniform ergodicity then holds due to Theorems 16.2.3 and 16.2.1 from Meyn & Tweedie (2012).
Next, we restate and prove Proposition 4.2. We also restate the required assumptions for the proposition. Assumption A.6. We make the following assumptions:
1. For any $\theta \in \Theta$ and $x_j^k \in S_j$, the function $f$ is differentiable, $\forall j$.
2. $\sup_{\theta \in \Theta,\, x_j^k \in S_j} \{\|\nabla_\theta\, \rho \circ f(x_j^k)\|\} < +\infty$, i.e., the gradients are bounded.
3. $\forall x_j^k \in S_j$, $\forall \theta_1, \theta_2 \in \Theta$, $\|\nabla_{\theta_1}\, \rho \circ f(x_j^k) - \nabla_{\theta_2}\, \rho \circ f(x_j^k)\| < L \|\theta_1 - \theta_2\|$ for some $L \geq 0$, i.e., the gradients are $L$-Lipschitz.
4. $\mathbb{E}_{x_j^k \sim \pi_j}[\nabla_\theta\, \rho \circ f(x_j^k)] = \nabla_\theta\, \mathbb{E}_{x_j^k \sim \pi_j}[\rho \circ f(x_j^k)]$.
Proposition A.7. Let the step sizes satisfy (5) and the function parameters θ be updated as (6) and Assumption 4.1 hold, then the MCGD optimization enjoys properties of almost sure convergence to the optimal θ.
Proof. Given that each protein has an associated time homogeneous Markov Chain with a unique steady state, independent of other proteins, the set of proteins in a mini-batch also form a Markov chain with a unique steady state. We then leverage Corollary 2 (Page 12) of Sun et al. (2018) along with Proposition 4.2 to ensure almost sure convergence to the optimal θ.
A.6 EXTENDED RELATED WORK
Here, we elaborate on the related work discussed in Section 6.
Group Equivariant and Invariant Neural Networks: Group equivariant and invariant neural networks (Cohen & Welling, 2016; Lenssen et al., 2018; Kondor & Trivedi, 2018; Finzi et al., 2020; Hutchinson et al., 2021; Fuchs et al., 2020; 2021; Dehmamy et al., 2020) help capture discrete and continuous group symmetries of elements (e.g., images, point clouds). In this work, we learn representations which are invariant not only to group symmetries but to input-dependent sets of transformations. To the best of our knowledge, our work is the first to consider conditional (input-dependent) invariances. Prior works on learning invariant models have leveraged Monte Carlo procedures (Finzi et al., 2020; Murphy et al., 2019b) to learn to be invariant to transformations of the input. Alternatively, our work constructs a Markov chain with a unique steady state and leverages MCGD (Sun et al., 2018) to make it computationally tractable. Secondly, the aforementioned approaches are limited to being invariant to transformations from a specified Lie group with a surjective exponential map or a permutation group, while our work is not limited to groups. Thirdly, our theory ensures that every example/object in the dataset can have a different set of input-dependent, conditional transformations
1We note that this is a simplified version of the actual statement which is defined on the σ-algebra over Cp denoted by σ(Cp). Our proof holds when c′p ∈ σ(Cp)
- and in fact can be seen as a generalization of the above works. In fact, the ablation study that we perform (results provided in Appendix A.9, Table 4) uses a Monte Carlo estimator, and our MCMC procedure yields better performance than the Monte Carlo estimator. While group equivariant neural networks capture global symmetries, local symmetries of manifold spaces can be captured via gauge equivariant networks (Cohen et al., 2019; De Haan et al., 2020). Gauge symmetries require the manifold to be smooth, which is not the case for proteins. Moreover, different proteins have different sets of viable conformations, which would not be captured by standard gauge equivariant neural networks. (Lenssen et al., 2018; Gerken et al., 2021; Bronstein et al., 2021) provide a complete review of the theoretical aspects and a wide variety of applications of group equivariant neural networks. A more comprehensive theoretical analysis of input-dependent conditionally invariant neural networks is planned for future work.
Graph Neural Networks: Graph Neural Networks (GNNs) (Kipf & Welling, 2016; Hamilton et al., 2017; Battaglia et al., 2018; Xu et al., 2018) have gained renewed focus over the past few years and have found applications in recommender systems, biology, chemistry, and many other real-world problems which can be formulated as graphs; they currently serve as the state of the art in the majority of node and graph classification/regression tasks. Graph neural networks work on the principles of permutation equivariance/invariance (Murphy et al., 2019a) (also groups) and exploit a message passing framework to learn powerful and expressive representations of nodes/graphs. Both molecular graphs (small molecules as well as macromolecules) and graphs based on intra-molecular distances have been used with GNNs to achieve state-of-the-art results on many molecular datasets and tasks (Hu et al., 2020; Morris et al., 2020). Here, we leverage three graph-based neural networks as baselines for our model.
Group Equivariant Graph Neural Networks: Group equivariant GNNs combine continuous symmetries (Lie groups such as SE(3), E(3), SO(3)) with permutation equivariances and have found applications with resounding success on small and large molecules (Anderson et al., 2019; Klicpera et al., 2020; Satorras et al., 2021; Batzner et al., 2021). However, these methods are only able to capture rigid-body characteristics of molecules: while they capture the above Lie group symmetries, they are not able to capture input-dependent transformations. Employing (Farina & Slade, 2021) would make the neural network excessively invariant and allow the protein to be more flexible (allowing unviable conformations) than it truly is. More recently, they have also been applied to learning representations of proteins, which is discussed below.
Monte Carlo and MCMC Methods for Sampling Protein Conformations: There has been a lot of prior work (Boomsma et al., 2013; Olsson et al., 2013; Antonov et al., 2016; Irbäck & Mohanty, 2006; Vitalis & Pappu, 2009) which samples protein conformations by internal coordinate transformations. However, the goals of the existing MCMC methods are significantly different from ours. The existing methods define/inherit a distribution over the conformations (largely based on the properties of the bonds, etc.) and then aim to sample highly probable conformations from this distribution. Our method is much simpler: it only requires that the chain being used is ergodic and is invariant to the actual form of the distribution. We note that the existing Markov chains can seamlessly be used as drop-in replacements to sample conformations as part of our framework as long as the MCMC is ergodic. We consider studying the impact of different MCMC methods (which sample from different distributions) and their influence on performance in different tasks as important future work.
Neural Networks for Representation Learning of Proteins: Protein representation has gained a lot of attention especially with the tremendous successes of Alphafold and Alphafold2. A variety of neural network architectures including 3D CNNs, LSTM’s and Transformers (treating the protein as a sequence) as well as graph neural networks have been employed to exploit the rigid body symmetries of proteins (Karimi et al., 2019; Pagès et al., 2019; Ingraham et al., 2019; Strokach et al., 2020; Baldassarre et al., 2021; Hermosilla et al., 2021; Jing et al., 2020; 2021). In this work, while we use GNNs and Group Invariant GNNs as a part of the model, we note that we can equally replace them with CNNs, LSTMs, Transformers and other models used for proteins without any change in the underlying theory.
Tree Construction Method for Molecules: Jin et al. (2018), in their work JTVAE, propose a procedure to construct trees from molecules. We note that JTVAE is a more general-purpose approach for constructing trees from molecules and may not be the best suited for proteins, where the backbone atoms are largely observed (experimentally) to be rigid.
Generative Models: Protein conformation generation models (Mansimov et al., 2019; Simm et al., 2020; Ganea et al., 2021; Xu et al., 2021b;a; Shi et al., 2021; Luo et al., 2021) have also recently gained attention; the goal of these models is to predict the 3D structure of molecules given their 2D structure. Our objective in this work is completely different, but our framework can be used to improve the predictions of the aforementioned models. While our model is explicitly not a generative model, our framework can be leveraged towards generative modeling with the help of tools such as noise outsourcing (Chapter 6) (Kallenberg, 2006), and we see this as important future work.
Non-Rigid Body Dynamics: Non-Rigid Body Dynamics of objects has long been studied both by physicists and in the fields of computer vision to understand and capture the geometric deformations of objects (Taylor et al., 2010; Masci et al., 2015). To the best of our knowledge, there exists no prior work in deep learning which captures the non rigidity of protein molecules (which cannot be modeled as Ck manifolds). As important future work, we would like to study the impact of leveraging input dependent conditional invariances for modeling other geometric objects (which are Ck manifolds) as well as images and robotics (e.g. the symmetries for a humanoid is different from that of a tractor).
Unrelated Work with similar names: Non-classical and conditional symmetries of solutions to ODEs and PDEs have been discussed in the past; while these works share similar titles, they have very little in common with ours, as we are not dealing with jet spaces or manifolds (Joseph, 1968; Fushchich & Zhdanov, 1992; Olver & Vorob’ev, 1996).
A.7 DETAILS ABOUT DATASETS AND TASKS
In this section, we describe briefly each of the datasets (and their associated tasks). Information about the splits and license information is provided in Table 2.
PSR: This task utilizes data from the structural models submitted to the Critical Assessment of Structure Prediction competition (CASP - Kryshtafovych et al. (2019) - a blind protein structure prediction competition) to rank protein structures from the experimentally determined structure of the protein. The problem is formulated as a regression task, where we predict the global distance test of each structural model from the experimentally determined structure. As prescribed by the dataset authors, the dataset is split by competition years.
MSP: The goal of this task is to identify mutations that stabilize a protein’s interactions, which forms an important step towards the design of new proteins. This task is significant as experimental techniques for probing mutations are labor-intensive. Atom3D (Townshend et al., 2020) derives this dataset by collecting single-point mutations from the SKEMPI database (Jankauskaitė et al., 2019) and modeling each mutation into the structure to produce a mutated structure. The learning problem is then formulated as a binary classification task where the goal is to predict whether the stability of the complex increases as a result of the mutation. We employ the same splits as suggested by the dataset authors, wherein the protein complexes are split such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LBA: This task deals with the problem of predicting the strength (affinity) of a candidate drug molecule’s interaction with a target protein. The dataset is constructed using the PDBBind database (Wang et al., 2004; Liu et al., 2015), a curated database containing protein-ligand complexes from the PDB and their corresponding binding strengths (affinities). The task is formulated as a regression task with the goal to predict pK = − log10(K), where K is the binding affinity in Molar units. The splits are created such that no protein in the test dataset has more than 30% sequence identity with any protein in the training dataset.
LEP: The shape of a protein impacts whether it is in an “on” or “off” state, which plays an important role in predicting the shape a protein will favor during drug design. This dataset is obtained by curating proteins from several families with both “active” and “inactive” state structures, and modeling in 527 small molecules with known activating or inactivating function using the program Glide (Friesner
et al., 2004). The task is formulated as a binary classification task where the goal is to predict whether a molecule bound to the structures will be an activator of the protein’s function or not. We use the same split as recommended by the ATOM3D authors.
A.8 EXPERIMENTAL SETUP
The code for the baseline models (GVP-GNN (Jing et al., 2021), E(n) GNN (Satorras et al., 2021), and GNN(GCN) (Townshend et al., 2020; Kipf & Welling, 2016)) was used as provided by the authors (licenses as dictated by the code authors). Our conformer invariance implementation is in PyTorch using Python 3.8. We also leverage networkx to create the directed forests. For all three models we tune the hyperparameters: learning rate (∈ {0.1, 0.01, 0.001, 0.0001}) and mini-batch size (∈ {4, 8, 16, 32, 64}). For the E(n) GNN model, since there have been no previous models for the aforementioned protein tasks, we also tune the number of GNN layers (∈ {4, 5, 6, 7}) as a hyperparameter. The experiments were all performed on Tesla V100 GPUs. For more details refer to the code provided.
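A minimal sketch of the tuning loop implied by the grid above. The routine train_and_evaluate is a hypothetical placeholder (stubbed out here so the snippet runs as-is) standing in for the actual training and validation procedure; the grids themselves are taken from the text.

```python
from itertools import product

def train_and_evaluate(lr, batch_size, num_layers):
    """Hypothetical placeholder: train a model with these settings and
    return its validation metric."""
    return 0.0

learning_rates = [0.1, 0.01, 0.001, 0.0001]
batch_sizes = [4, 8, 16, 32, 64]
num_layers_grid = [4, 5, 6, 7]  # only tuned for the E(n) GNN model

best_score, best_config = float("-inf"), None
for lr, bs, nl in product(learning_rates, batch_sizes, num_layers_grid):
    score = train_and_evaluate(lr=lr, batch_size=bs, num_layers=nl)
    if score > best_score:
        best_score, best_config = score, (lr, bs, nl)
print(best_config)
```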
A.9 ADDITIONAL RESULTS
In Table 3, we present results on additional datasets and tasks beyond those presented in the main paper. In the LEP task, the proposed addition outperforms the baseline models for the E(n) GNN and the GVP-GNN, but not for the GNN(GCN). The LEP result for the GNN(GCN) is an oddity here, where a GNN which doesn’t incorporate any rigid body transformations outperforms all other models.
In Table 4, we present an ablation study, where we provide a strategy where all the transformations are created from “gold standard" Xp rather than via the MCMC method, i.e., the MCMC is restarted at every epoch during training. From the table, we note that the MCMC method tends to outperform the non MCMC method, which can be attributed to the guarantees it provides to the learning framework.
A.10 CASE AGAINST USING AVERAGE REPRESENTATIONS IN TRAINING AND FOR REQUIRING CONFORMER INVARIANT REPRESENTATIONS
The case for requiring a single conformer invariant representation can be seen from the fact that different protein conformations, say X1 and X2 of the same protein X, may be seen during the train and test phases. Without conformer invariant representations, this may lead to different representations and therefore different predictions for the same protein, which is undesirable.
One may argue that an approximate conformer invariant neural network can be learned by averaging representations of multiple Monte Carlo conformations during training. However, this is computationally expensive, as it would require back-propagating and updating parameters for multiple conformations for every protein in every epoch, leading to a multiplicative overhead in training cost. On the other hand, our procedure with MCGD uses only one conformation from our Markov chain per epoch and still ensures convergence to the optimal parameters. It is important to note that, during inference, we still average representations over multiple conformations (from our Markov chain) to output conformer invariant representations; due to the MCGD training procedure and the unique steady state of our Markov chain, this ensures conformer invariant representations are achieved, which yield better performance than the current state of the art on multiple datasets and tasks. As stated in the main paper, our framework can work well with other Markov chains as well. We chose the proposed chain purely because it is simpler than existing methods (in that it does not require a distribution over conformations to be assumed) and as such is easier to sample from.
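A minimal PyTorch-style sketch of the training/inference asymmetry described above. The objects model, loader, and sample_conformer are assumptions (placeholders for the user's GNN, data loader, and a routine that advances a protein's Markov chain and returns new coordinates); only the structure, one chain sample per protein per epoch during training and averaging over several chain samples at inference, reflects the procedure described here.

```python
import torch
import torch.nn.functional as F

def mcgd_train_epoch(model, optimizer, loader, sample_conformer):
    # One MCGD epoch: every protein contributes a single conformation,
    # obtained by advancing its (persistent) Markov chain by one step.
    model.train()
    for proteins, labels in loader:
        coords = [sample_conformer(p) for p in proteins]
        loss = F.cross_entropy(model(proteins, coords), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

@torch.no_grad()
def conformer_invariant_prediction(model, protein, sample_conformer, n_samples=16):
    # At inference, average over several conformations drawn from the chain
    # to approximate the conformer-invariant output.
    outputs = [model([protein], [sample_conformer(protein)]) for _ in range(n_samples)]
    return torch.stack(outputs).mean(dim=0)
```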
A.11 TRAINING TIME
In Table 5, we present the training time (for 100 epochs) for each of the baseline models as well as for models with our proposed conformer invariance framework addition. From the table, we note that, on average, the increase in training time over 100 epochs is ≈ 3-4 minutes, which is negligible in comparison to the training time without the proposed framework addition. | 1. What is the focus and contribution of the paper regarding protein structure representation?
2. What are the strengths of the proposed method, particularly in its principled approach to sampling conformations?
3. What are the weaknesses of the paper, especially regarding its assumptions and reliance on arbitrary rules?
4. Do you have any concerns about the effectiveness of the proposed approach compared to other methods, such as a prior MCMC sampling approach or a simpler data augmentation baseline?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper aims to learn a conditional invariant representation for protein structure by introducing a data augmentation approach. The approach augments the protein 3D conformations by applying conditional invariant transformations to sub-structures in the protein with a novel Markov chain Monte Carlo (MCMC) sampling. The paper demonstrates improved performance over a GNN baseline that inputs the initial 3D structure of the protein without data augmentation.
Strengths And Weaknesses
Strength:
The directed forest construction and conditional transformation of sub-structures provide a principled approach to sample conformations given a protein structure.
The authors formally show that the MCGD optimization ensures the convergence of the conformer invariant learning procedure.
Weakness:
The approach assumes that the protein molecule can be decomposed into a directed forest structure. This assumption is often not true for realistic proteins. The authors use some arbitrary rules to deal with cycles/rings: “cycles/rings which are present in the molecule are broken on the basis of bond length, i.e., larger bonds are chosen before smaller bonds. Ties between same-length bonds are broken arbitrarily.” The resulting conformations are likely unrealistic due to these arbitrary rules.
Consequently, the authors use a validity checker (Molprobity) to filter invalid conformations. It means that the sampled structures largely depend on the validity checker. It is unclear if the conditional invariant sampling approach provides benefits compared with a prior MCMC sampling approach. It will also be beneficial if the authors can report the acceptance rate of the MCMC sampling.
The performance improvement over a GNN baseline is marginal (Table 1). It is unclear if the proposed approach can outperform a much simpler data augmentation baseline, e.g., random Gaussian noise + validity filtering.
Have the authors considered leveraging substructures similar to the JTVAE paper by Jin et al. (ICML 2018) for the construction of the directed forest?
Clarity, Quality, Novelty And Reproducibility
The paper is well-written and easy to follow.
The approach to sample protein conformations is novel to the extent of the reviewer’s knowledge.
The authors did not provide code to reproduce their results.
ICLR | Title
Lower Bounds on the Robustness of Fixed Feature Extractors to Test-time Adversaries
Abstract
Understanding the robustness of machine learning models to adversarial examples generated by test-time adversaries is a problem of great interest. Recent theoretical work has derived lower bounds on how robust any model can be, when a data distribution and attacker constraints are specified. However, these bounds only apply to arbitrary classification functions and do not account for specific architectures and models used in practice, such as neural networks. In this paper, we develop a methodology to analyze the robustness of fixed feature extractors, which in turn provides bounds on the robustness of any classifier trained on top of it. The tightness of these bounds relies on the effectiveness of the method used to find collisions between pairs of perturbed examples at deeper layers. For linear feature extractors, we provide closed-form expressions for collision finding while for arbitrary feature extractors, we propose a bespoke algorithm based on the iterative solution of a convex program that provably finds collisions. We utilize our bounds to identify the layers of robustly trained models that contribute the most to a lack of robustness, as well as compare the same layer across different training methods to provide a quantitative comparison of their relative robustness.
1 INTRODUCTION
The robustness of machine learning models to test-time (evasion) attacks, particularly via adversarial examples, has been studied extensively from an empirical perspective (Croce et al., 2020; Papernot et al., 2016; Li et al., 2020; Biggio & Roli, 2017). A plethora of attacks (Szegedy et al., 2013; Carlini & Wagner, 2017; Bhagoji et al., 2018; Croce & Hein, 2020) and defenses (Madry et al., 2018; Zhang et al., 2019) has been proposed. The resulting arms race between proposed attacks and defenses, has led recent theoretical work (Dohmatob, 2019; Cullina et al., 2018; Mahloujifar et al., 2019; Bhagoji et al., 2019; 2021; Montasser et al., 2019) to focus on understanding the fundamental bounds of learning in the presence of adversarially perturbed inputs, in terms of both the existence of robust classifiers and the sample complexity required to learn them.
A line of research on this front has investigated lower bounds on the robustness attainable by any classifier when classifying data from distributions modified by adversarial perturbations (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021). In particular, information-theoretic lower bounds on the minimum possible robust 0 − 1 and cross-entropy losses have been derived for arbitrary discrete distributions and very general attack models. However, the bounds from these papers are only applicable when considering the set of possible classifiers to be all measurable functions, which does not translate to practice. The classifier architecture and even some portion of the parameters are fixed (such as in transfer learning) in practical approaches to training machine learning models.
In this paper, we leverage this line of work to find lower bounds on the robustness of commonly used, fixed feature extractors. This provides a principled method to compare the robustness of feature extractors obtained by different training methods to arbitrary test-time adversaries. The key question we answer is:
What is the minimum robust loss incurred by any classifier trained on top of a fixed feature extractor?
Two challenges for training robust machine learning models motivate this question. First, since transfer learning is now common practice, it is incumbent upon the model trainer to choose a robust pre-trained feature extractor. Our work offers a principled and quantitative method to choose among
feature extractors obtained from different layers and training methods. Second, we are able to shed light on the impact of common architectural choices, such as activation functions and dimensionality reduction on robustness. This arises from our study of how the lower bound evolves from layer to layer within deep neural networks.
Importance of lower bounds on robustness: To determine classifier-agnostic lower bounds on robustness, earlier work focused on the interaction between points from different classes in the input space when perturbed through the construction of a conflict graph. Minimizing an appropriately defined loss function over this graph determines an information-theoretic lower bound on the loss. Intuitively, the denser the graph is, the higher the lower bound. These classifier-agnostic bounds are able to determine the plausibility of robust classification for a given adversary and dataset, and highlight practically relevant perturbation budget.
1.1 CONTRIBUTIONS
Lower bounds on robustness for fixed feature extractors: We significantly extend prior work by deriving lower bounds that depend on the architecture and weights of a fixed feature extractors. We construct a distance function over the input space that depends on the feature extractor in use. Our results have implications for both analyzing the robustness of already trained classifiers as well as determining feature extractors to use for applications such as transfer learning.
Bespoke algorithms for collision finding: For linear feature extractors such as the first layer of a neural network, we determine exact, closed form expressions to find collisions. Collision finding for non-linear feature extractors is a non-convex optimization problem, for which we propose a custom algorithm. We approach the collision-finding problem with a descent algorithm that solves a sequence of convex optimization problems over polytopes. This algorithm has the structural advantage of maintaining feasibility (for some adversarial budget constraint) at each step. Despite the fact that it does not find a global minimum, the solutions found are useful for obtaining not just approximations but actual lower bounds on optimal adversarial classification loss.
Empirical findings: We utilize our method to find numerical lower bounds on the robustness of fixed feature extractors trained on two different datasets with both fully-connected and convolutional architectures. Our results indicate that the use of dimensionality-reducing linear layers as well as the effective compression induced by ReLU activations lead to significantly less robust networks. Further, we confirm the oft-cited observation that mismatches between the adversary’s budget at train and test time impacts robustness negatively. Taken together, these findings point towards future design considerations for robust models.
2 DERIVING LOWER BOUNDS FOR FIXED FEATURE EXTRACTORS
In this section, we develop a method for evaluating the robustness of a feature extractor for classification in the presence of a test-time adversary. We characterize the optimal adversarial loss achievable by any classifier that uses the fixed feature extractor as its initial layer. This characterization is based on a conflict graph derived from a discrete data distribution and the constraints of the adversary.
Examples and labels: We consider a supervised classification problem with a test-time adversary. We have an example space X and a label space Y = {−1, 1}. Labeled examples are sampled from a joint probability distribution P over X × Y. Test-time adversary: The test-time adversary modifies a natural example subject to some constraints to generate an adversarial example that will then be classified (Goodfellow et al., 2015; Szegedy et al., 2013; Carlini & Wagner, 2017). Formally, the adversary samples a labeled example (x, y) from P and selects x̃ ∈ N(x), where N : X → 2^{X̃} is the neighborhood function encoding the constraints on the adversary and X̃ is the space of adversarial examples.1 For all x, N(x) must be nonempty. This definition encompasses the ℓ_p family of constraints widely used in previous work.
1In most (but not all) settings that have previously been studied, X̃ = X . We believe that making the distinction helps to clarify some of our definitions: their applicability in this general context affects what properties they can be expected to have.
Measuring adversarial loss: We consider ‘soft’ classification functions (or classifiers) that map examples to probability distributions over the classes. These are h : X → ∆_Y, where ∆_Y = {p ∈ R^Y : p ≥ 0, Σ_{y∈Y} p_y = 1}. We measure classification performance with a loss function ℓ : ∆_Y × Y → R, so the expected performance of a classifier h is E[sup_{x̃∈N(x)} ℓ(h(x̃), y)], where (x, y) ∼ P. In the two-class setting, h(x̃)_{−1} = 1 − h(x̃)_1, so any loss function that treats the classes symmetrically can be expressed as ℓ(p, y) = ℓ′(p_y). Additionally, ℓ′ should be decreasing. Together, these allow the optimization over adversarial examples to be moved inside the loss function, giving E[ℓ(inf_{x̃∈N(x)} h(x̃)_y, y)]. Thus, the adversarial optimization can be analyzed by itself.
Definition 1. For a soft classifier h, the correct-classification probability q_v that it achieves on an example v = (x, y) in the presence of an adversary is q_v = inf_{x̃∈N(x)} h(x̃)_y. The space of achievable correct-classification probabilities is
P_{V,N,H} = ∪_{h∈H} { q ∈ R^V : ∀(x, y) ∈ V, 0 ≤ q_{(x,y)} ≤ inf_{x̃∈N(x)} h(x̃)_y }.
Bhagoji et al. (2021) characterized P_{V,N,H} in the case that H is the class of measurable functions X → ∆_Y. For a data distribution P that is discrete with finite support V ⊆ X × Y, this allows the minimum adversarial loss achievable to be expressed as an optimization over P_{V,N,H}:
inf_{h∈H} E_{(x,y)∼P}[ sup_{x̃∈N(x)} ℓ(h(x̃), y) ] = inf_{q∈P_{V,N,H}} Σ_{v∈V} P({v}) ℓ′(q_v).
We extend their approach to analyze feature extractors as follows. Given a feature space Z and a fixed, measurable feature extractor f : X̃ → Z, define H_f = {h ∈ H : h = g ◦ f}: an element of H_f is some measurable g : Z → ∆_Y composed with f. Our aim is to characterize P_{V,N,H_f} so that we can optimize loss functions over it to evaluate the suitability of f.
Conflict graph: In Bhagoji et al. (2021), the notion of a conflict graph was used to record neighborhood intersections for pairs of points from different classes. When such an intersection exists, it is impossible for any classifier to correctly classify both of those points in the adversarial setting. We extend this notion to apply to the family of classifiers using a fixed feature extractor f . In our setting, a conflict exists between a pair of points when each of them has a neighbor that is mapped to the same point in the feature space.
We call the conflict graph G_{V,N,f}, where V ⊆ X × Y. When we apply G_{V,N,f} in order to understand classification of labeled examples with distribution P, we take V to be the support of P. The graph is bipartite: the vertex set is partitioned into parts V_c = V ∩ (X × {c}). The edge set is
EV,N,f = {((u, 1), (v,−1)) ∈ V1 × V−1 : ∃ũ ∈ N(u), ṽ ∈ N(v) such that f(ũ) = f(ṽ)}.
Lemma 1 (Feasible output probabilities). The set of correct classification probability vectors for support points V , adversarial constraint N , and hypothesis classHf is
PV,N,Hf = {q ∈ RV : q ≥ 0, q ≤ 1, Bq ≤ 1} (1)
where B ∈ RE×V is the edge incidence matrix of the conflict graph GV,N,f .
The proof is given in Appendix A and follows the structure of the proof in Bhagoji et al. (2021).
Approximating GV,N,f and PV,N,Hf : If instead of knowing the true conflict graph GV,N,f , we have some subgraph, then we can find a polytope that is a superset of the true PV,N,Hf . If we minimize some expected loss over this proxy polytope, we obtain a lower bound on the optimal loss over Hf . Because subgraphs of the conflict graph lead to valid lower bounds on optimal classification performance, we can use this method to evaluate the quality of a feature extractor f even if exact computation of the conflict graph is computationally intractable.
2.1 DISTANCE INTERPRETATION
The construction of a conflict graph and characterization of the feasible correct classification probabilities from the previous section apply to any feature extractor and neighborhood constraint. In
the most commonly studied settings, the neighborhood constraint arises from a distance function: N_ε(x) = {x̃ ∈ X̃ : d(x, x̃) ≤ ε}. This parameter ε is the adversarial budget constraint. For any two examples u and v, a natural quantity to consider is the smallest adversarial budget that would cause the edge (u, v) to appear in the conflict graph:
ε∗(u, v) = inf{ε ≥ 0 : N_ε(u) ∩ N_ε(v) ≠ ∅} = inf_{x̃∈X̃} max(d(u, x̃), d(v, x̃)).
We will call this the distance on X induced by d. When X = X̃ = R^d and d(x, x′) = ‖x − x′‖_p, the minimal adversarial budget is simply (1/2)‖u − v‖_p. Because the relationship to the distance used in the adversarial budget constraint is so simple, this quantity is not usually discussed independently. This definition generalizes easily to the setting of classification with a particular feature extractor, but the resulting quantity is much more interesting.
Definition 2. The distance induced on X by d and f is the minimum adversarial budget required to create a conflict between u and v after a feature extractor f :
ε∗_f(u, v) = inf{ε ≥ 0 : f(N_ε(u)) ∩ f(N_ε(v)) ≠ ∅} = inf_{ũ,ṽ∈X̃ : f(ũ)=f(ṽ)} max(d(u, ũ), d(v, ṽ)). (2)
This reduces to ε∗(u, v) when f is the identity function, or more generally any injective function. Observe that any choice of ũ and ṽ in (2) provide an upper bound on ε∗f (u, v), which is useful for finding a subgraph of GV,N,f and lower bounds on optimal classification performance inHf . Figure 1 illustrates the computation of ε∗f for a simple f that is related to the ReLU function.
Induced distance is not a distance on the feature space: The fact ε∗f is defined onX is essential: it is not possible to interpret it as a distance on Z . For a full explanation of this point, see Appendix B.
3 DISTANCE COMPUTATIONS FOR PRACTICAL FEATURE EXTRACTORS
Throughout the remainder of the paper, we will work in a more concrete setting. We assume that our example space is a real vector space and that adversarial examples are from the same space: X = X̃ = R^{n_1}. Let B ⊆ X be a nonempty, closed, convex, origin-symmetric set. These conditions imply the zero vector is contained in B. We take the neighborhood of any point to be a scaled, shifted version of this ball: N_ε(x) = x + εB. Neighborhood constraints derived from ℓ_p norms fit into this class: we encode them by defining B = {δ ∈ R^{n_1} : ‖δ‖_p ≤ 1}, which results in N_ε(x) = {x̃ ∈ R^{n_1} : ‖x̃ − x‖_p ≤ ε}. We will focus on ℓ_2-based neighborhoods, but our algorithms could be adapted to any choice of B over which efficient optimization is possible.
3.1 LINEAR FEATURE EXTRACTORS
Suppose that our feature extractor is an affine linear function of the input example: f(x) = Lx+ k for some matrix L ∈ Rn2×n1 and vector k ∈ Rn2 . Then the distance εf (u, v) becomes
inf{ε ≥ 0 : {k + L(u + δ) : δ ∈ εB} ∩ {k + L(v + δ′) : δ′ ∈ εB} ≠ ∅}.
Because {(ε, δ) ∈ R^{1+n_1} : δ ∈ εB} is a closed convex cone, ε_f(u, v) can be expressed as the following convex optimization problem:
inf ε subject to δ ∈ εB, δ′ ∈ εB, L(δ − δ′) = L(v − u).
Because B is closed and the linear subspace is always nonempty, the infimum is achieved. Also, it is sufficient to consider (δ, δ′) satisfying δ′ = −δ: for any feasible (ε, δ, δ′), the point (ε, (δ − δ′)/2, (δ′ − δ)/2) is also feasible, has the same value, and satisfies the additional constraint. The feasibility of the symmetrized point uses the origin-symmetry of B. The simplified program is
min ε subject to δ ∈ εB, Lδ = L(v − u)/2.
Thus the optimal adversarial strategy for creating a conflict between u and v is intuitive: produce examples with the same features as the midpoint (u + v)/2.
If B is the unit ℓ_2 ball, there are further simplifications. Consider the singular value decomposition L = UΣV^T, where we do not include zero singular values in Σ. Then the linear map given by UΣ is injective and can be canceled from the linear constraint on δ. The resulting program is
min ε subject to ‖δ‖_2 ≤ ε, V^T δ = (1/2) V^T(v − u),
the optimal choice of δ is δ = (1/2) V V^T(v − u), and ε_f(u, v) = (1/2)‖V^T(v − u)‖_2. Observe that this is the norm of a vector in R^{n_2}, i.e., the feature space, contrasting with our discussion in Section B. However, this is essentially the only case in which ε_f(u, v) simplifies into a feature space distance.
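A small NumPy sketch of the closed form above: given a first-layer weight matrix L (the bias is irrelevant), the induced ℓ_2 distance between two examples and the optimal perturbation follow from the right singular vectors of L. Function and variable names are illustrative only; this is a sketch of the formula derived here, not code from the paper.

```python
import numpy as np

def induced_distance_linear(L, u, v):
    """eps_f(u, v) = 0.5 * ||V^T (v - u)||_2 for f(x) = Lx + k with an l2 adversary."""
    _, S, Vt = np.linalg.svd(L, full_matrices=False)
    Vt = Vt[S > 1e-12]                 # drop rows for (numerically) zero singular values
    diff = v - u
    delta = 0.5 * Vt.T @ (Vt @ diff)   # optimal perturbation of u (and -delta for v)
    eps = 0.5 * np.linalg.norm(Vt @ diff)
    return eps, delta

L = np.random.randn(20, 50)
u, v = np.random.randn(50), np.random.randn(50)
eps, delta = induced_distance_linear(L, u, v)
# check the collision: both perturbed points map to the same features
assert np.allclose(L @ (u + delta), L @ (v - delta))
```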
3.2 FULLY CONNECTED NEURAL NETWORKS WITH RELU ACTIVATIONS
For this feature extractor architecture, we have f = f^{(ℓ)} ◦ . . . ◦ f^{(1)}, where the layer i function is f^{(i)} : R^{n_i} → R^{n_{i+1}}, f^{(i)}(z) = (k^{(i)} + L^{(i)}z)_+. Here z_+ represents the component-wise positive part of the vector z. Then ε_f(u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f^{(ℓ)} ◦ . . . ◦ f^{(1)})(u + δ) = (f^{(ℓ)} ◦ . . . ◦ f^{(1)})(v + δ′).
As in the linear case, the minimum exists because the cones are closed and the equality constraint is feasible. In contrast with the linear case, the equality constraint is nonconvex.
The local linear approximation to f^{(i)}(z) around a point z′ is diag(s^{(i,z′)})(k^{(i)} + L^{(i)}z), where s^{(i,z′)} is a zero-one vector that depends on the sign pattern of k^{(i)} + L^{(i)}z′: s^{(i,z′)}_j = 1((k^{(i)} + L^{(i)}z′)_j > 0). In other words, s^{(i,z′)} is the ReLU activation pattern at layer i when z′ is the input to that layer.
Using these linear approximations, the feasible set of (δ, δ′) ∈ R^{n_1+n_1} satisfying the constraint f(u + δ) = f(v + δ′) can be decomposed as a union of polytopes: each activation pattern defines a linear subspace, and some linear inequalities specify the region where that activation pattern actually occurs. For a one-layer network, the linear piece for pattern s is
f(z) = diag(s)(k + Lz) for diag(2s − 1)(k + Lz) ≥ 0.
Thus one of the polytopes composing the feasible set is the set of (δ, δ′) ∈ R^{n_1+n_1} satisfying
diag(s)(k + L(u+ δ)) = diag(s′)(k + L(v + δ′)),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s′)− I)(k + L(v + δ′)) ≥ 0.
In the one-layer case, the whole feasible region is covered by polytopes where s = s′. Observe that the dimension of the subspace satisfying the linear equality varies with s. When f contains multiple layers, each polytope in the feasible set is defined by a linear equality constraint involving feature vectors together with a linear inequality for the sign of each ReLU input.
We optimize over this feasible set with a descent algorithm: Algorithm 1 details the version for single-layer networks. The method generalizes to multiple layers and is used to obtain our results in Section 4.2. A full description of the general version is in Appendix C. We initialize our search in a polytope that we know to be nonempty because it contains the feature space collision induced by the midpoint of the two examples. Within each polytope, we can minimize the objective exactly by solving a linear cone program. We examine the dual variables associated with the linear inequality constraints to determine whether an adjacent polytope exists in which we can continue the search.
In the one-layer case, the cone program that we solve, which depends on (L, k, u, v, s), is
min ε subject to (ε, δ, δ′) ∈ R^{1+n_1+n_1}, δ ∈ εB, δ′ ∈ εB,
diag(s)L(u + δ) = diag(s)L(v + δ′),
(2 diag(s) − I)(k + L(u + δ)) ≥ 0, (2 diag(s) − I)(k + L(v + δ′)) ≥ 0.
Algorithm 1 Descent with midpoint initialization
Input: u, v ∈ R^{n_1}, L ∈ R^{n_2×n_1}, k ∈ R^{n_2}
Output: ε ∈ R, δ, δ′ ∈ R^{n_1}
1: z ← k + (1/2)L(u + v), ε ← (1/2)‖u − v‖
2: for 0 ≤ j < n_2 do
3:   s_j ← 1(z_j > 0)
4: end for
5: repeat
6:   ε_old ← ε
7:   (ε, δ, δ′, z, z′) ← ConeLP(L, k, u, v, s)
8:   s_old ← s
9:   for 0 ≤ j < n_2 do
10:    if z_j > 0 and z′_j > 0 then
11:      s_j ← 1 − s_j
12:    end if
13:  end for
14: until s_old = s or ε_old = ε
We need not just the values of the primal solution, but also the values of the dual variables associated with the linear inequalities in the solution to the dual problem. These are called z and z′ in Algorithm 1. When both z_j > 0 and z′_j > 0, the input to ReLU j is zero when both f(u+δ) and f(v+δ′) are computed, and the objective function could be decreased further by allowing the input to switch signs. In this case, we move to another polytope based on the new ReLU states and search there.
When we fail to find such pairs of dual variables, either the minimum is in the interior of the current polytope (and thus is supported only by cone constraints), or the minimum is on the boundary of the current polytope but there is no adjacent polytope in which we could continue the search. Thus we end the search.
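A sketch of the inner step of Algorithm 1 written with CVXPY rather than a hand-assembled CVXOPT cone LP (the paper uses the latter; this equivalent formulation is only illustrative). For a fixed activation pattern s it minimizes the budget over the corresponding polytope and also returns the dual variables of the sign constraints, which drive the polytope-switching rule. Function and argument names are placeholders.

```python
import numpy as np
import cvxpy as cp

def polytope_step(L, k, u, v, s):
    """Minimize eps over the polytope for ReLU pattern s (0/1 vector), l2 adversary."""
    s = np.asarray(s, dtype=float)
    n1 = L.shape[1]
    eps = cp.Variable()
    d1, d2 = cp.Variable(n1), cp.Variable(n1)
    S = np.diag(s)
    sign = np.diag(2.0 * s - 1.0)
    ineq1 = sign @ (k + L @ (u + d1)) >= 0
    ineq2 = sign @ (k + L @ (v + d2)) >= 0
    prob = cp.Problem(
        cp.Minimize(eps),
        [cp.norm(d1, 2) <= eps,
         cp.norm(d2, 2) <= eps,
         S @ (L @ (u + d1)) == S @ (L @ (v + d2)),
         ineq1, ineq2],
    )
    prob.solve()
    # the duals of the sign constraints play the role of z, z' in Algorithm 1
    return eps.value, d1.value, d2.value, ineq1.dual_value, ineq2.dual_value
```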
Algorithm termination, convergence, and complexity: The descent algorithm will terminate in a finite number of iterations and will find a local minimum of the distance function. Since the feasible space is non-convex, any local descent procedure is not guaranteed to find the global minimum. The number of variables in the cone program is proportional to the number of ReLUs in the feature extractor plus the dimension of the example space. The time-complexity of a single iteration of the search is polynomial in the input dimension and number of ReLUs. The feasible region is the union of a finite but exponentially large number of convex polytopes. Due to the requirement that each iteration make progress, it is impossible to ever revisit a polytope. Thus, termination is guaranteed, but no polynomial bound on the number of iterations is available. We suspect that as in the case of the simplex algorithm for linear programming, input data resulting in an extremely large number of iterations may exist, but would have a delicate structure that is unlikely to arise in practice.
3.3 FURTHER COMMON LAYERS
Convolution: It is straightforward to represent a convolutional layer as a linear layer by constructing the appropriate Toeplitz matrix, and thus our method applies directly.
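As a sketch of this reduction, one can materialize the (bias-free) linear map of a convolutional layer as an explicit matrix by pushing the standard basis through the layer, then feed the resulting matrix to the closed-form analysis of Section 3.1. The snippet below uses PyTorch purely for illustration and does not scale to very large inputs.

```python
import torch

def conv_as_matrix(conv, in_shape):
    """Return the matrix M such that conv(x) (minus its bias) equals M @ x.flatten().
    in_shape = (C, H, W)."""
    n_in = int(torch.tensor(in_shape).prod())
    basis = torch.eye(n_in).reshape(n_in, *in_shape)
    with torch.no_grad():
        out = conv(basis) - conv(torch.zeros(1, *in_shape))  # remove the bias term
    return out.reshape(n_in, -1).T  # shape: (n_out, n_in)

conv = torch.nn.Conv2d(1, 4, kernel_size=3, padding=1)
M = conv_as_matrix(conv, (1, 8, 8))
x = torch.randn(1, 1, 8, 8)
ref = conv(x) - conv.bias.view(1, -1, 1, 1)
assert torch.allclose(M @ x.flatten(), ref.flatten(), atol=1e-5)
```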
Batch-normalization: Batch norm layers are simply affine functions of their inputs at test-time, and the transformations they induce can be easily included in a linear layer. Our method thus applies.
Max pooling: Max-pool layers, like ReLUs, are piecewise linear functions of their inputs, so constraints coming from the equality of max-pool outputs also lead to feasible regions that are the union of polytopes. This is a simple extension left for future work due to a lack of space.
Other activation functions and architectures: Injective activation functions such as Leaky ReLU, ELU and sigmoid will not lead to additional collisions. Further, since they are not piecewise linear, a
different descent algorithm would be needed. We note that our framework cannot find collisions in networks with both forward and backward flows, such as those with attention.
4 EVALUATION
In this section, we demonstrate how the methodology outlined in the previous sections for determining the robustness of a fixed feature extractor can be used in practice. Our detailed results provide insights on how the overall robustness of a neural network evolves through its different layers and is impacted by the training procedure used.
4.1 FROM COLLISION-FINDING TO LOWER BOUNDS
We find lower bounds on robust loss over the training set by minimizing a loss function over G_{V,N,f}.
Vertices V from training data: The vertex set V is a representation of the training data. We use 2-class problems derived from MNIST (LeCun & Cortes, 1998) or Fashion MNIST (Xiao et al., 2017) as in previous work (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021).
Neighborhood function N: We use the common ℓ_2-norm ball constraint (Madry et al., 2018), in which the adversary’s strength is parametrized by the radius of the ball.2
Fixed feature extractors f: We use the composition of the layers of convolutional and fully-connected DNNs as our fixed feature extractors (details in Appendix E). The first layer of any of these networks (before a non-linear activation is applied) behaves as a linear feature extractor. Subsequent layers behave as non-linear feature extractors. These networks are trained using standard cross-entropy loss minimization or robust training with either PGD-based adversarial training (Madry et al., 2018) or the TRADES loss (Zhang et al., 2019).
Edge set E from collision finding: Algorithm 2 provides a greedy, iterative method to find collisions between feature representations of pairs of points from different classes. Each successful collision is an edge in the bipartite conflict graph. Each iteration of this algorithm can be cast as a convex program in the form of a linear cone (as detailed in Appendix D). We solve this linear cone program using CVXOPT (Andersen et al., 2013). Since we are working with an ℓ_2-norm constrained adversary, we can speed up computation by projecting the inputs onto the space spanned by the right singular vectors of the first linear layer. For linear convolutional layers, we cast the convolution operation as a matrix multiplication to enable the use of the closed form derived in Section 3.1. We also experimented with a modification of the Auto-PGD (Croce & Hein, 2020) algorithm to find collisions at deeper layers with fixed budgets, as done in Engstrom et al. (2019). However, this was less effective and more expensive at finding collisions, so all further results use Algorithm 2.
Computing the lower bound from the conflict graph: The conflict graph determines the set of possible output probabilities for each vertex (training data point) for the optimal classifier. Minimizing the 0−1 and cross-entropy losses over this graph thus results in a lower bound on these losses, since any classifier must incur at least as large a loss as the optimal classifier, by definition. We use the method from Bhagoji et al. (2021) over the conflict graphs we derive from deeper layer representations. Results in the main body are for the cross-entropy loss (others in Appendix F).
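A sketch of this final step: given the edge incidence matrix B of (a subgraph of) the conflict graph and the probabilities p of the support points, the lower bound is the value of a small convex program over the polytope of Lemma 1. The 0−1 version is shown; the cross-entropy bound replaces 1 − q with −log q. This mirrors the procedure of Bhagoji et al. (2021) but is only an illustrative reimplementation, not their code.

```python
import numpy as np
import cvxpy as cp

def lower_bound_01_loss(B, p):
    """B: |E| x |V| edge incidence matrix of the conflict graph (1s on the two
    endpoints of each edge); p: probabilities of the support points."""
    q = cp.Variable(B.shape[1])
    constraints = [q >= 0, q <= 1, B @ q <= 1]
    problem = cp.Problem(cp.Minimize(p @ (1 - q)), constraints)
    problem.solve()
    return problem.value

# toy example: two training points (one per class) whose neighborhoods collide
B = np.array([[1.0, 1.0]])
p = np.array([0.5, 0.5])
print(lower_bound_01_loss(B, p))  # -> 0.5: at most one of the two can be correct
```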
4.2 RESULTS
Interpreting lower bound values: The lower bound on the input space representations is computed by considering all possible classifiers and is thus the best possible, while bounds for other representations are computed by restricting the space of possible classifiers. Intuitively, the latter will always be larger than or equal to the former. The key metric to understanding the robustness of feature extractors is to check how much larger the corresponding lower bounds are. The magnitude of this difference measures the contribution of that layer to the overall network’s lack of robustness.
Robustness of linear layer representations: Using the procedure outlined in Section 3.1, we find collisions after the first linear layer and construct a conflict graph for several models (Fig. 2) for
2We are aware of the critiques of this constraint (Gilmer et al., 2018) and only use it to compare with previous work.
How does robustness evolve across layers? Having looked at how robust linear-layer representations are, it is pertinent to consider representations derived from layers deeper in the network. Of particular interest are post-ReLU representations for models with benign (Fig. 3) and robust training (Fig. 4). We find that the first ReLU activation layer contributes significantly to an increase in the lower bound for both benign and robustly trained networks. We observe that post-ReLU representations tend to be sparse, leading to an easier search problem and a larger number of collisions. The second linear layer, on the other hand, does not lead to much additional increase in the loss. This is a property of the particular architecture we use, and a smaller linear layer is likely to lead to larger increases in loss. Finally, the second set of ReLU activations does have a measurable impact on robustness, particularly for the benign network. The impact of layer width on robustness is discussed in Appendix F.1.
How does the parametrization of robust training impact layer-wise robustness? As expected, layers extracted from robustly trained networks are more robust than their benign counterparts for corresponding values of ε. We find a significant difference between PGD-based adversarial training and TRADES in terms of the robustness of their first linear layers (Fig. 2), but this largely disappears by the second ReLU activation layer (Fig. 5). Interestingly, we observe a phenomenon where layers of a network robustly trained using higher values of ε_train can be less robust than those trained using a lower value, when the value of ε at which evaluation is performed is less than the higher ε_train.
Impact of linear convolutional layers on robustness: As discussed in Section 3.3, the first convolutional layer can be thought of as simply a matrix multiplication, although the size of the resulting matrix can be large. We find that since the effective dimension of the resulting features is much larger than the input dimension for the datasets we consider, the linear convolutional layer does not lead to an increase in the lower bound (Fig. 11 in Appendix).
5 RELATED WORK AND DISCUSSION
5.1 RELATED WORK
Lower bounds on robustness: For cases when the data distribution is specified, Dohmatob (2019) and Mahloujifar et al. (2019) use the ‘blowup’ property to determine bounds on the robust loss, given
Certification and Verification: Work on certified robustness has considered techniques for training neural networks such that the resulting models are provably robust to perturbations upper bounded by a given budget (Kolter & Wong, 2018; Raghunathan et al., 2018; Cohen et al., 2019; Li et al., 2020). Typically, these models can only be certified to be robust to small budgets. In contrast, our work provides lower bounds on the robustness of partial models which are applicable across a large range of budgets. Approaches to verifying the robustness of neural networks (Bunel et al., 2017; Tjeng et al., 2019; Gowal et al., 2018) are closely related to our work, but differ in that they consider fixed end-to-end networks while we focus on a layer-by-layer analysis, allowing us to argue about the robustness of classifiers trained on top of given feature extractors.
5.2 DISCUSSION
Implications for training robust models: Our results indicate that layers in the network that reduce the effective dimension of their incoming inputs have the largest negative impact on robustness (see further results in Appendix F.1). Two prominent examples of this are ReLU layers that reduce effective dimension by only considering non-negative outputs and fully connected layers with fewer outputs than inputs. On the other hand, linear convolutional layers do not have a negative impact on robustness. This indicates that not reducing the effective dimension of any deeper feature to be lower than that of the input data is likely to benefit robustness. Further, our results confirm that the use of larger values of train does not necessarily translate to higher robustness at lower budgets. This points towards the need for algorithms that can effectively train a network to be robust to more than a single adversary at a time. Finally, we find a qualitative difference in the layers learned using PGD-training and TRADES, implying interesting learning dynamics with different robust losses.
Extending to state-of-the-art models and datasets: All of our experiments in this paper are on simple models and datasets which demonstrate the feasibility and use of our method. However, state-of-the-art feature extractors for datasets such as ImageNet are far deeper than those considered in this paper. Thus, our algorithm would need to be made considerably faster to handle these cases. While our framework can handle skip connections, networks with attention are beyond its scope. Nevertheless, the feature extractors we consider are robust for the tasks we evaluate them on, making our results and conclusions representative.
A PROOFS
Proof of Lemma 1. Suppose that e = ((u, 1), (v, −1)) ∈ E. Then, there are some z̃ ∈ Z, ũ ∈ N(u), and ṽ ∈ N(v) such that f(ũ) = f(ṽ) = z̃. We have q_u ≤ h(f(ũ))_1 = h(z̃)_1, q_v ≤ h(f(ṽ))_{−1} = h(z̃)_{−1}, and h(z̃)_1 + h(z̃)_{−1} = 1. Combining these gives the constraint (Bq)_e ≤ 1, which appears in (1).
Now, we will show that each vector q in the polytope is achievable by some h. The construction is simple: at each point in the feature space, assign the largest possible probability to class 1: let h(z̃)_1 = sup_{w : z̃∈f(N(w))} q_w and h(z̃)_{−1} = 1 − h(z̃)_1. This achieves the desired performance for examples from class 1:
inf_{ũ∈N(u)} h(f(ũ))_1 = inf_{ũ∈N(u)} sup_{w : f(ũ)∈f(N(w))} q_w ≥ inf_{ũ∈N(u)} q_u = q_u.
For an example v in class −1 we have
inf_{ṽ∈N(v)} h(f(ṽ))_{−1} = inf_{ṽ∈N(v)} ( 1 − sup_{w : f(ṽ)∈f(N(w))} q_w )
= inf_{ṽ∈N(v)} inf_{w : f(ṽ)∈f(N(w))} (1 − q_w)
= inf_{w : ((w,1),(v,−1))∈E} (1 − q_w)
≥ q_v.
The final inequality uses the fact that q satisfies Bq ≤ 1.
B INDUCED DISTANCE IS NOT A DISTANCE ON Z
The fact that ε∗_f is defined on X is essential. It is not possible to interpret the induced distance as a distance in the feature space Z. This is because f(N_ε(x)), the set of features of points near x, cannot be derived from f(N_0(x)), the set of features of the uncorrupted version of x: distinct choices of x may lead to the same features f(N_0(x)), but f may vary more in the neighborhood of one choice of x than the other.
An example is helpful to illustrate this point. Let X = X̃ = R^2, let Z = R, let N_ε(x) be the closed ℓ_2 ball of radius ε around x, and let f(x_0, x_1) = arctan(x_1/x_0) for x ≠ 0 (the value that we pick for f(0, x_1) is irrelevant). In other words, f finds the angle between the horizontal axis and the line containing x. The range of values that the adversary can cause f(x̃) to take depends on ‖x‖. If ‖x‖_2 < ε, f(x̃) can be any angle from 0 to π, and if ‖x‖_2 ≥ ε, |f(x̃) − f(x)| ≤ arcsin(ε/‖x‖_2). This example also illustrates that ε_f is not necessarily a metric, even in cases where d is a metric. Given u, v ∈ R^2, we have ε_f(u, αu) = 0 and ε_f(v, αv) = 0, and ε_f(αu, αv) can be made arbitrarily small by selecting α to be small. Despite this, ε_f(u, v) will be on the order of min(‖u‖, ‖v‖). An alternative approach to studying the induced distance is to define an adversarial classification problem on Z by taking N^Z_ε(z) = f(N_ε(f^{−1}({z}))). Note that this construction only makes sense when X = X̃ and thus the domain of f is X. This is more conservative: for any feature point z it considers the worst-case x with feature z, i.e., the x in whose neighborhood f varies the most.
C DESCENT ALGORITHM FOR MULTIPLE LAYER NETWORKS
We start by repeating the notation used in Section 3.2: f = f^{(ℓ)} ◦ . . . ◦ f^{(1)}, where the layer i function is f^{(i)} : R^{n_i} → R^{n_{i+1}}, f^{(i)}(z) = (k^{(i)} + L^{(i)}z)_+. Then ε_f(u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f^{(ℓ)} ◦ . . . ◦ f^{(1)})(u + δ) = (f^{(ℓ)} ◦ . . . ◦ f^{(1)})(v + δ′).
Using the local linear approximation, the equality constraint becomes
diag(s^{(ℓ)})(k^{(ℓ)} + L^{(ℓ)}(· · · diag(s^{(1)})(k^{(1)} + L^{(1)}(u + δ)) · · ·))
= diag(s′^{(ℓ)})(k^{(ℓ)} + L^{(ℓ)}(· · · diag(s′^{(1)})(k^{(1)} + L^{(1)}(v + δ′)) · · ·)),
and, for each 1 ≤ i ≤ ℓ, the inequalities
(2 diag(s^{(i)}) − I)(k^{(i)} + L^{(i)}(· · · diag(s^{(1)})(k^{(1)} + L^{(1)}(u + δ)) · · ·)) ≥ 0,
(2 diag(s′^{(i)}) − I)(k^{(i)} + L^{(i)}(· · · diag(s′^{(1)})(k^{(1)} + L^{(1)}(v + δ′)) · · ·)) ≥ 0
must hold in order for the linear approximation to be valid. Any collision must be in a polytope with s^{(ℓ)} = s′^{(ℓ)}, which is why we only needed one set of s variables in the single-layer case. However, ReLU activation variables for earlier layers cannot be merged.
The main modification to algorithm is to the process of changing the s variables after each iteration. The variables for all but the final layer are allowed to change independently, while the variables in the final layer are changed together following the same rule as in the single layer algorithm.
Algorithm 2 Descent with midpoint initialization
Input: u, v ∈ R^{n_1}, L ∈ R^{n_2×n_1}, k ∈ R^{n_2}
Output: ε ∈ R, δ, δ′ ∈ R^{n_1}
1: z ← k + (1/2)L(u + v), ε ← (1/2)‖u − v‖
2: for 0 ≤ j < n_2 do
3:   s_j ← 1(z_j > 0)
4: end for
5: repeat
6:   ε_old ← ε
7:   (ε, δ, δ′, z, z′) ← ConeLP(L^{(1)} . . . L^{(ℓ)}, k^{(1)} . . . k^{(ℓ)}, u, v, s^{(1)} . . . s^{(ℓ)}, s′^{(1)} . . . s′^{(ℓ−1)})
8:   s_old ← s
9:   for 1 ≤ i ≤ ℓ − 1 do
10:    for 0 ≤ j < n_{i+1} do
11:      if z^{(i)}_j > 0 then
12:        s^{(i)}_j ← 1 − s^{(i)}_j
13:      end if
14:      if z′^{(i)}_j > 0 then
15:        s′^{(i)}_j ← 1 − s′^{(i)}_j
16:      end if
17:    end for
18:  end for
19:  for 0 ≤ j < n_{ℓ+1} do
20:    if z^{(ℓ)}_j > 0 and z′^{(ℓ)}_j > 0 then
21:      s^{(ℓ)}_j ← 1 − s^{(ℓ)}_j
22:    end if
23:  end for
24: until s_old = s or ε_old = ε
D CONVERTING COLLISION FINDING TO A LINEAR CONE PROGRAM
In this section, we demonstrate how the collision finding problem after one linear and one ReLU layer can be cast as a linear cone program.
We have a network with first layer v ↦ Lv + k where L ∈ R^{n_1×n_0} and k ∈ R^{n_1}. Given a pair of points (v′, v′′) ∈ R^{2n_0}, we would like to search over the space of (δ′, δ′′) ∈ R^{2n_0} such that (L(v′ + δ′) + k)_+ = (L(v′′ + δ′′) + k)_+. This space is always nonempty because we can take v′ + δ′ = v′′ + δ′′ = (v′ + v′′)/2.
Let S ⊆ [n_1] be the subset of active ReLUs. Let F ∈ R^{|S|×n_1} be the inclusion indicator matrix for the subset: F_{i,j} = 1 if and only if j ∈ S and |S ∩ [j]| = i (there are exactly i elements of S strictly smaller than j, so j is the (i+1)-st element of S). Let D be the diagonal matrix with D_{j,j} = 1 for j ∈ S and D_{j,j} = −1 for j ∉ S.
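A small NumPy sketch of these two matrices, purely as an illustration of the definitions (not code from the paper); the function name is illustrative.

```python
import numpy as np

def relu_pattern_matrices(active, n1):
    """Build F (rows selecting the active ReLUs, in increasing index order)
    and D (diagonal, +1 on active ReLUs, -1 on inactive ones)."""
    active = sorted(active)
    F = np.eye(n1)[active, :]
    D = np.diag([1.0 if j in set(active) else -1.0 for j in range(n1)])
    return F, D

F, D = relu_pattern_matrices({0, 2}, n1=4)
# F is 2x4 picking coordinates 0 and 2; D = diag(1, -1, 1, -1)
```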
For j ∈ S (the active ReLUs), we need the constraint (L(v′ + δ′) + k)_j = (L(v′′ + δ′′) + k)_j. In matrix form, this is FL(v′ + δ′) = FL(v′′ + δ′′).
For j ∈ S, we need (L(v′ + δ′) + k)_j ≥ 0 and (L(v′′ + δ′′) + k)_j ≥ 0, and for j ∉ S, we need (L(v′ + δ′) + k)_j ≤ 0 and (L(v′′ + δ′′) + k)_j ≤ 0. In matrix form, these are D(L(v′ + δ′) + k) ≥ 0, D(L(v′′ + δ′′) + k) ≥ 0. Our objective is max(‖δ′‖_2, ‖δ′′‖_2). We replace it with a linear objective by adding an additional variable t satisfying t ≥ ‖δ′‖_2 and t ≥ ‖δ′′‖_2. The CVXOPT system is
minimize c^T x subject to Gx + s = h, Ax = b, s ⪰ 0.
The cones involved are a nonnegative orthant of dimension 2n_1 and two second-order cones, each of dimension n_0 + 1. Thus s = (D(L(v′ + δ′) + k), D(L(v′′ + δ′′) + k), t, δ′, t, δ′′). We then let x = (t, δ′, δ′′). The matrix relating s and x is G ∈ R^{(n_1+n_1+1+n_0+1+n_0)×(1+n_0+n_0)} and s = h − Gx. The block structure is
G = [  0  −DL    0
       0    0  −DL
      −1    0    0
       0   −I    0
      −1    0    0
       0    0   −I ],   h = [ D(Lv′ + k), D(Lv′′ + k), 0, 0, 0, 0 ].
We use Ax = b to encode FL(δ′ − δ′′) = −FL(v′ − v′′), so A ∈ R^{|S|×(1+n_0+n_0)}, b ∈ R^{|S|}, and
A = ( 0  FL  −FL ),   b = −FL(v′ − v′′).
Finally,
c = ( 1 0 0 ) .
D.1 FACTORIZED L
If n1 < n0, then we can take advantage of the fact that the rank of L is at most n1. Let L = UΣV T .
Replace L(v + δ) with UΣ(V^T v + δ), where δ now denotes the reduced-dimension variable V^T δ, and replace n_0 with the number of columns of Σ.
New G and A:
G = [  0  −DUΣ    0
       0     0  −DUΣ
      −1     0    0
       0    −I    0
      −1     0    0
       0     0   −I ],   A = ( 0  FUΣ  −FUΣ ).
Also, the length of c is reduced. The constant vectors h and b are unchanged.
E FURTHER EXPERIMENTAL DETAILS
We consider the following two architectures for our experiments:
1. 3-layer fully connected neural network (FCNN): 300 FC-ReLU-200 FC-ReLU-2 FC
2. 4-layer convolutional neural network (CNN): 20×5×5 conv.-BN-ReLU-2×2 MaxPool-50×5×5 conv.-BN-ReLU-2×2 MaxPool-500 FC-2 FC
We also construct a ‘Small’ and ‘Smaller’ version of the FCNN with layers that are 2× and 4× narrower respectively.
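As an illustration, a minimal PyTorch sketch of the FCNN family is given below. The width_factor argument and the input size of 784 (flattened 28×28 MNIST/Fashion-MNIST images) are our own naming and assumptions; width_factor = 1, 2, 4 corresponds to the 'Regular', 'Small', and 'Smaller' variants.

```python
import torch.nn as nn

def make_fcnn(width_factor: int = 1, in_dim: int = 784) -> nn.Sequential:
    # 'Regular' uses width_factor=1 (300/200 hidden units); 'Small' and 'Smaller'
    # use width_factor=2 and 4, i.e. layers that are 2x and 4x narrower.
    h1, h2 = 300 // width_factor, 200 // width_factor
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(in_dim, h1), nn.ReLU(),
        nn.Linear(h1, h2), nn.ReLU(),
        nn.Linear(h2, 2),  # two-class output
    )
```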
F ADDITIONAL RESULTS
F.1 IMPACT OF REDUCTION OF NETWORK SIZE
In Figure 6, we compare the robustness of representations obtained from fully connected networks with decreasing layer sizes. The ‘Regular’ network is the one used throughout, while the ‘Small’ and ‘Smaller’ networks have corresponding layers that are 2× and 4× narrower respectively. We can clearly see that as the width of the feature extractor decreases, so does its robustness.
F.2 0− 1 LOSS RESULTS
In Figures 7, 8, 9 and 10 we provide lower bounds on the 0 − 1 loss in the same settings as those considered in the main body of the paper. We note that the results and conclusions remain the same qualitatively.
F.3 LINEAR CONVOLUTIONAL LAYERS
In Figure 11, we can see that the representations extracted from the first linear layer of a convolutional network do not have any negative impact on the robustness of the overall model. | 1. What is the focus of the paper regarding feature extractors?
2. What are the strengths and weaknesses of the proposed algorithm in terms of scalability and applicability to various datasets and models?
3. Do you have any concerns regarding the conclusions drawn from the analysis of layer properties and their impact on robustness?
4. Are there any limitations or potential biases in the experimental results that need to be addressed?
5. How does the paper contribute to the understanding and improvement of neural network robustness? | Summary Of The Paper
Review | Summary Of The Paper
This paper analyzes the robustness of fixed feature extractors (e.g. all but the last layer of a pre-trained network). They do so by attempting to find pairs of inputs that have collisions (at some later layer of the network). By analyzing these collisions in both simpler cases and in more general cases, the authors argue that they can also pinpoint what properties of certain layers contribute to lack-of-robustness, including dimensionality reducing layers, different adversaries at train/test, and ReLUs.
Review
The paper begins by analyzing linear feature extractors, which is a simpler case that can be analyzed theoretically in full, but is not directly applicable to the non-linear case. For the non-linear case, the paper uses gradient descent to try to find collisions, which is reasonable; the authors may want to include a citation to [1], which has also previously tried to find collisions in representations via a different gradient descent algorithm on inputs.
The main algorithm (Algorithm 1 in the paper) appears reasonable, but I have some questions about it. In particular, in line 10, it appears to cause branching for every ReLU. How does this algorithm scale with the number of ReLUs? Will it scale exponentially? If so, it is worth mentioning, as this is often a limitation of related methods that involve branching for every ReLU. For example, exact certification of neural network robustness is shown to be NP-hard in [2].
The experimental results in this work are primarily focused on very basic datasets (MNIST and Fashion-MNIST), so it is unclear how results will generalize to larger datasets (CIFAR, ImageNet). The networks used are also usually simple 3 layer networks; I think the paper could be made more convincing with either (1) experiments on more complex datasets/models and/or (2) discussion/experiments showing how this approach scales as model size/dataset complexity increases. For example, how does the proposed algorithm scale for deeper networks? Could it feasibly be scaled up to find collisions for ResNets or deeper ConvNets?
Finally, the authors draw conclusions regarding how various layers contribute to (lack-of) robustness. I think this section is very interesting, but I think deeper investigation is necessary to draw such conclusions. In particular, the authors focus on how their robustness loss lower bound changes depending on the structure of each layer, but this robustness lower bound is merely a proxy for true robustness. Thus, concluding that certain properties of each layer affect true robustness is too strong, and should be qualified further. In particular, the implications for how to train robust models section is interesting, and it may be worthwhile to try to find supportive experimental evidence to see whether insights derived from analyzing the robustness lower bound help.
Small Comments and Questions:
You repeat “are fixed” before and after the parentheses in the 4th-to-last line of page 1
In Figure 3a/b, the blue line is missing
You mention that “convolutional networks are more robust than fully connected ones” is “commonly held wisdom” - is this actually true? Usually, adversarial attacks are capable of completely breaking both types of networks.
[1] https://arxiv.org/abs/1906.00945 Adversarial Robustness as a Prior for Learned Representations (Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry)
[2] https://arxiv.org/abs/1702.01135 Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks Guy Katz, Clark Barrett, David Dill, Kyle Julian, Mykel Kochenderfer
=========================================================================
I thank the authors for the rebuttal. After reading the rebuttal and other reviewer comments, I have decided to maintain my score of "weak reject". As Reviewer r9vz noted, a lot of the most interesting work is left as "future work" at the moment; it is still unclear to me whether this approach will scale effectively to more complex datasets (CIFAR/ImageNet vs. MNIST/Fashion-MNIST) or more standard and modern architectures (CNNs, resnets vs. 3-layer fully-connected networks). |
ICLR | Title
Lower Bounds on the Robustness of Fixed Feature Extractors to Test-time Adversaries
Abstract
Understanding the robustness of machine learning models to adversarial examples generated by test-time adversaries is a problem of great interest. Recent theoretical work has derived lower bounds on how robust any model can be, when a data distribution and attacker constraints are specified. However, these bounds only apply to arbitrary classification functions and do not account for specific architectures and models used in practice, such as neural networks. In this paper, we develop a methodology to analyze the robustness of fixed feature extractors, which in turn provides bounds on the robustness of any classifier trained on top of it. The tightness of these bounds relies on the effectiveness of the method used to find collisions between pairs of perturbed examples at deeper layers. For linear feature extractors, we provide closed-form expressions for collision finding while for arbitrary feature extractors, we propose a bespoke algorithm based on the iterative solution of a convex program that provably finds collisions. We utilize our bounds to identify the layers of robustly trained models that contribute the most to a lack of robustness, as well as compare the same layer across different training methods to provide a quantitative comparison of their relative robustness.
1 INTRODUCTION
The robustness of machine learning models to test-time (evasion) attacks, particularly via adversarial examples, has been studied extensively from an empirical perspective (Croce et al., 2020; Papernot et al., 2016; Li et al., 2020; Biggio & Roli, 2017). A plethora of attacks (Szegedy et al., 2013; Carlini & Wagner, 2017; Bhagoji et al., 2018; Croce & Hein, 2020) and defenses (Madry et al., 2018; Zhang et al., 2019) has been proposed. The resulting arms race between proposed attacks and defenses, has led recent theoretical work (Dohmatob, 2019; Cullina et al., 2018; Mahloujifar et al., 2019; Bhagoji et al., 2019; 2021; Montasser et al., 2019) to focus on understanding the fundamental bounds of learning in the presence of adversarially perturbed inputs, in terms of both the existence of robust classifiers and the sample complexity required to learn them.
A line of research on this front has investigated lower bounds on the robustness attainable by any classifier when classifying data from distributions modified by adversarial perturbations (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021). In particular, information-theoretic lower bounds on the minimum possible robust 0 − 1 and cross-entropy losses have been derived for arbitrary discrete distributions and very general attack models. However, the bounds from these papers are only applicable when considering the set of possible classifiers to be all measurable functions, which does not translate to practice. The classifier architecture and even some portion of the parameters are fixed (such as in transfer learning) in practical approaches to training machine learning models.
In this paper, we leverage this line of work to find lower bounds on the robustness of commonly used, fixed feature extractors. This provides a principled method to compare the robustness of feature extractors obtained by different training methods to arbitrary test-time adversaries. The key question we answer is:
What is the minimum robust loss incurred by any classifier trained on top of a fixed feature extractor?
Two challenges for training robust machine learning models motivate this question. First, since transfer learning is now common practice, it is incumbent upon the model trainer to choose a robust pre-trained feature extractor. Our work offers a principled and quantitative method to choose among
feature extractors obtained from different layers and training methods. Second, we are able to shed light on the impact of common architectural choices, such as activation functions and dimensionality reduction on robustness. This arises from our study of how the lower bound evolves from layer to layer within deep neural networks.
Importance of lower bounds on robustness: To determine classifier-agnostic lower bounds on robustness, earlier work focused on the interaction between points from different classes in the input space when perturbed through the construction of a conflict graph. Minimizing an appropriately defined loss function over this graph determines an information-theoretic lower bound on the loss. Intuitively, the denser the graph is, the higher the lower bound. These classifier-agnostic bounds are able to determine the plausibility of robust classification for a given adversary and dataset, and highlight practically relevant perturbation budget.
1.1 CONTRIBUTIONS
Lower bounds on robustness for fixed feature extractors: We significantly extend prior work by deriving lower bounds that depend on the architecture and weights of a fixed feature extractor. We construct a distance function over the input space that depends on the feature extractor in use. Our results have implications both for analyzing the robustness of already trained classifiers and for determining which feature extractors to use for applications such as transfer learning.
Bespoke algorithms for collision finding: For linear feature extractors such as the first layer of a neural network, we determine exact, closed form expressions to find collisions. Collision finding for non-linear feature extractors is a non-convex optimization problem, for which we propose a custom algorithm. We approach the collision-finding problem with a descent algorithm that solves a sequence of convex optimization problems over polytopes. This algorithm has the structural advantage of maintaining feasibility (for some adversarial budget constraint) at each step. Despite the fact that it does not find a global minimum, the solutions found are useful for obtaining not just approximations but actual lower bounds on optimal adversarial classification loss.
Empirical findings: We utilize our method to find numerical lower bounds on the robustness of fixed feature extractors trained on two different datasets with both fully-connected and convolutional architectures. Our results indicate that the use of dimensionality-reducing linear layers as well as the effective compression induced by ReLU activations lead to significantly less robust networks. Further, we confirm the oft-cited observation that mismatches between the adversary’s budget at train and test time impacts robustness negatively. Taken together, these findings point towards future design considerations for robust models.
2 DERIVING LOWER BOUNDS FOR FIXED FEATURE EXTRACTORS
In this section, we develop a method for evaluating the robustness of a feature extractor for classification in the presence of a test-time adversary. We characterize the optimal adversarial loss achievable by any classifier that uses the fixed feature extractor as its initial layer. This characterization is based on a conflict graph derived from a discrete data distribution and the constraints of the adversary.
Examples and labels: We consider a supervised classification problem with a test-time adversary. We have an example space X and a label space Y = {−1, 1}. Labeled examples are sampled from a joint probability distribution P over X × Y.

Test-time adversary: The test-time adversary modifies a natural example, subject to some constraints, to generate an adversarial example that will then be classified (Goodfellow et al., 2015; Szegedy et al., 2013; Carlini & Wagner, 2017). Formally, the adversary samples a labeled example (x, y) from P and selects x̃ ∈ N(x), where N : X → 2^X̃ is the neighborhood function encoding the constraints on the adversary and X̃ is the space of adversarial examples.¹ For all x, N(x) must be nonempty. This definition encompasses the ℓp family of constraints widely used in previous work.
1In most (but not all) settings that have previously been studied, X̃ = X . We believe that making the distinction helps to clarify some of our definitions: their applicability in this general context affects what properties they can be expected to have.
Measuring adversarial loss: We consider ‘soft’ classification functions (or classifiers) that map examples to probability distributions over the classes. These are h : X → ∆_Y, where ∆_Y = {p ∈ R^Y : p ≥ 0, Σ_{y∈Y} p_y = 1}. We measure classification performance with a loss function ℓ : ∆_Y × Y → R, so the expected performance of a classifier h is E[sup_{x̃∈N(x)} ℓ(h(x̃), y)], where (x, y) ∼ P. In the two-class setting, h(x̃)_{−1} = 1 − h(x̃)_1, so any loss function that treats the classes symmetrically can be expressed as ℓ(p, y) = ℓ′(p_y). Additionally, ℓ′ should be decreasing. Together, these allow the optimization over adversarial examples to be moved inside the loss function, giving E[ℓ(inf_{x̃∈N(x)} h(x̃)_y, y)]. Thus, the adversarial optimization can be analyzed by itself.
Definition 1. For a soft classifier h, the correct-classification probability q_v that it achieves on an example v = (x, y) in the presence of an adversary is q_v = inf_{x̃∈N(x)} h(x̃)_y. The space of achievable correct classification probabilities is

P_{V,N,H} = ⋃_{h∈H} { q ∈ R^V : ∀(x, y) ∈ V, 0 ≤ q_{(x,y)} ≤ inf_{x̃∈N(x)} h(x̃)_y }.
Bhagoji et al. (2021) characterized P_{V,N,H} in the case that H is the class of measurable functions X → ∆_Y. For a data distribution P that is discrete with finite support V ⊆ X × Y, this allows the minimum adversarial loss achievable to be expressed as an optimization over P_{V,N,H}:

inf_{h∈H} E_{(x,y)∼P}[ sup_{x̃∈N(x)} ℓ(h(x̃), y) ] = inf_{q∈P_{V,N,H}} Σ_{v∈V} P({v}) ℓ′(q_v).
We extend their approach to analyze feature extractors as follows. Given a feature space Z and a fixed, measurable feature extractor f : X̃ → Z, define H_f = {h ∈ H : h = g ◦ f}: an element of H_f is some measurable g : Z → ∆_Y composed with f. Our aim is to characterize P_{V,N,H_f} so that we can optimize loss functions over it to evaluate the suitability of f.
Conflict graph: In Bhagoji et al. (2021), the notion of a conflict graph was used to record neighborhood intersections for pairs of points from different classes. When such an intersection exists, it is impossible for any classifier to correctly classify both of those points in the adversarial setting. We extend this notion to apply to the family of classifiers using a fixed feature extractor f . In our setting, a conflict exists between a pair of points when each of them has a neighbor that is mapped to the same point in the feature space.
We call the conflict graph GV,N,Hf , where V ⊆ X × Y . When we apply GV,N,f in order to understand classification of labeled examples with distribution P , we take V to be the support of P . The graph is bipartite: the vertex set is partitioned into parts Vc = V ∩ (X × {c}). The edge set is
EV,N,f = {((u, 1), (v,−1)) ∈ V1 × V−1 : ∃ũ ∈ N(u), ṽ ∈ N(v) such that f(ũ) = f(ṽ)}.
Lemma 1 (Feasible output probabilities). The set of correct classification probability vectors for support points V , adversarial constraint N , and hypothesis classHf is
PV,N,Hf = {q ∈ RV : q ≥ 0, q ≤ 1, Bq ≤ 1} (1)
where B ∈ RE×V is the edge incidence matrix of the conflict graph GV,N,f .
The proof is given in Appendix A and follows the structure of the proof in Bhagoji et al. (2021).
Approximating GV,N,f and PV,N,Hf : If instead of knowing the true conflict graph GV,N,f , we have some subgraph, then we can find a polytope that is a superset of the true PV,N,Hf . If we minimize some expected loss over this proxy polytope, we obtain a lower bound on the optimal loss over Hf . Because subgraphs of the conflict graph lead to valid lower bounds on optimal classification performance, we can use this method to evaluate the quality of a feature extractor f even if exact computation of the conflict graph is computationally intractable.
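As an illustration of this point (not part of the original construction), a valid subgraph can be assembled from any procedure that returns an upper bound on the collision budget for a pair of points; the function and argument names below are placeholders of our own.

```python
def conflict_subgraph_edges(class_pos, class_neg, eps, collision_budget_upper_bound):
    # class_pos / class_neg: lists of examples with labels +1 / -1.
    # collision_budget_upper_bound(u, v) returns any upper bound on eps*_f(u, v),
    # e.g. from the descent algorithm of Section 3.2, so every edge returned here
    # is guaranteed to be in the true conflict graph, yielding a valid lower bound.
    return [(i, j)
            for i, u in enumerate(class_pos)
            for j, v in enumerate(class_neg)
            if collision_budget_upper_bound(u, v) <= eps]
```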
2.1 DISTANCE INTERPRETATION
The construction of a conflict graph and characterization of the feasible correct classification probabilities from the previous section apply to any feature extractor and neighborhood constraint. In
the most commonly studied settings, the neighborhood constraint arises from a distance function: Nε(x) = {x̃ ∈ X̃ : d(x, x̃) ≤ ε}. The parameter ε is the adversarial budget constraint. For any two examples u and v, a natural quantity to consider is the smallest adversarial budget that would cause the edge (u, v) to appear in the conflict graph:

ε∗(u, v) = inf{ε ≥ 0 : Nε(u) ∩ Nε(v) ≠ ∅} = inf_{x̃∈X̃} max(d(u, x̃), d(v, x̃)).

We will call this the distance on X induced by d. When X = X̃ = R^d and d(x, x′) = ‖x − x′‖_p, the minimal adversarial budget is simply ½‖u − v‖_p. Because the relationship to the distance used in the adversarial budget constraint is so simple, this quantity is not usually discussed independently. This definition generalizes easily to the setting of classification with a particular feature extractor, but the resulting quantity is much more interesting.
Definition 2. The distance induced on X by d and f is the minimum adversarial budget required to create a conflict between u and v after a feature extractor f :
ε∗_f(u, v) = inf{ε ≥ 0 : f(Nε(u)) ∩ f(Nε(v)) ≠ ∅} = inf_{ũ,ṽ∈X̃ : f(ũ)=f(ṽ)} max(d(u, ũ), d(v, ṽ)). (2)
This reduces to ε∗(u, v) when f is the identity function, or more generally any injective function. Observe that any choice of ũ and ṽ in (2) provides an upper bound on ε∗_f(u, v), which is useful for finding a subgraph of G_{V,N,f} and lower bounds on optimal classification performance in H_f. Figure 1 illustrates the computation of ε∗_f for a simple f that is related to the ReLU function.
Induced distance is not a distance on the feature space: The fact that ε∗_f is defined on X is essential: it is not possible to interpret it as a distance on Z. For a full explanation of this point, see Appendix B.
3 DISTANCE COMPUTATIONS FOR PRACTICAL FEATURE EXTRACTORS
Throughout the remainder of the paper, we will work in a more concrete setting. We assume that our example space is a real vector space and that adversarial examples are from the same space: X = X̃ = R^{n1}. Let B ⊆ X be a nonempty, closed, convex, origin-symmetric set. These conditions imply that the zero vector is contained in B. We take the neighborhood of any point to be a scaled, shifted version of this ball: Nε(x) = x + εB. Neighborhood constraints derived from ℓp norms fit into this class: we encode them by defining B = {δ ∈ R^{n1} : ‖δ‖p ≤ 1}, which results in Nε(x) = {x̃ ∈ R^{n1} : ‖x̃ − x‖p ≤ ε}. We will focus on ℓ2-based neighborhoods, but our algorithms could be adapted to any choice of B over which efficient optimization is possible.
3.1 LINEAR FEATURE EXTRACTORS
Suppose that our feature extractor is an affine linear function of the input example: f(x) = Lx+ k for some matrix L ∈ Rn2×n1 and vector k ∈ Rn2 . Then the distance εf (u, v) becomes
inf{ε ≥ 0 : {k + L(u + δ) : δ ∈ εB} ∩ {k + L(v + δ′) : δ′ ∈ εB} ≠ ∅}.
Because {(ε, δ) ∈ R^{1+n1} : δ ∈ εB} is a closed convex cone, ε_f(u, v) can be expressed as the following convex optimization problem:
inf ε subject to δ ∈ εB, δ′ ∈ εB, L(δ − δ′) = L(v − u). Because B is closed and the linear subspace is always nonempty, the infimum is achieved. Also, it is sufficient to consider (δ, δ′) satisfying δ′ = −δ: for any feasible (ε, δ, δ′), the point (ε, (δ − δ′)/2, (δ′ − δ)/2) is also feasible, has the same value, and satisfies the additional constraint. The feasibility of the symmetrized point uses the origin-symmetry of B. The simplified program is
min ε subject to δ ∈ εB, Lδ = L(v − u)/2. Thus the optimal adversarial strategy for creating conflict between u and v is intuitive: produce examples with the same features as the midpoint (u+ v)/2.
If B is the unit ℓ2 ball, there are further simplifications. Consider the singular value decomposition L = UΣVᵀ, where we do not include zero singular values in Σ. Then the linear map given by UΣ is injective and can be canceled from the linear constraint on δ. The resulting program is

min ε subject to ‖δ‖₂ ≤ ε, Vᵀδ = ½ Vᵀ(v − u),

the optimal choice of δ is δ = ½ V Vᵀ(v − u), and ε_f(u, v) = ½‖Vᵀ(v − u)‖₂. Observe that this is the norm of a vector in R^{n2}, i.e. the feature space, contrasting with our discussion in Appendix B. However, this is essentially the only case in which ε_f(u, v) simplifies into a feature-space distance.
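For concreteness, a small NumPy sketch of this closed form is given below; the function name and the tolerance used to drop zero singular values are arbitrary choices of ours.

```python
import numpy as np

def linear_layer_collision_distance(L, u, v):
    # Closed-form eps*_f(u, v) = 0.5 * ||V^T (v - u)||_2 for an affine layer f(x) = Lx + k.
    # The bias k cancels and plays no role; V holds the right singular vectors of L
    # associated with nonzero singular values.
    _, s, Vt = np.linalg.svd(L, full_matrices=False)
    Vt = Vt[s > 1e-12]            # drop directions with zero singular value
    return 0.5 * np.linalg.norm(Vt @ (v - u))
```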
3.2 FULLY CONNECTED NEURAL NETWORKS WITH RELU ACTIVATIONS
For this feature extractor architecture, we have f = f (`) ◦ . . . ◦ f (1) where the layer i function is f (i) : Rni → Rni+1 , f (i)(z) = (k(i) + L(i)z)+. Here z+ represents the component-wise positive part of the vector z. Then εf (u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f^(ℓ) ◦ · · · ◦ f^(1))(u + δ) = (f^(ℓ) ◦ · · · ◦ f^(1))(v + δ′). As in the linear case, the minimum exists because the cones are closed and the equality constraint is feasible. In contrast with the linear case, the equality constraint is nonconvex.
The local linear approximation to f^(i)(z) around a point z′ is diag(s^(i,z′))(k^(i) + L^(i)z), where s^(i,z′) is a zero-one vector that depends on the sign pattern of k^(i) + L^(i)z′:

s^(i,z′)_j = 1((k^(i) + L^(i)z′)_j > 0).

In other words, s^(i,z′) is the ReLU activation pattern at layer i when z′ is the input to that layer.
Using these linear approximations, the feasible set of (δ, δ′) ∈ Rn1+n1 satisfying the constraint f(u + δ) = f(v + δ′) can be decomposed as a union of polytopes: each activation pattern defines a linear subspace and there are some linear inequalities specifying the region where that activation pattern actually occurs. For a one-layer network, the linear piece for pattern s is
f(z) = diag(s)(k + Lz) for diag(2s − 1)(k + Lz) ≥ 0. Thus one of the polytopes composing the feasible set is the set of (δ, δ′) ∈ R^{n1+n1} satisfying
diag(s)(k + L(u+ δ)) = diag(s′)(k + L(v + δ′)),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s′)− I)(k + L(v + δ′)) ≥ 0.
In the one-layer case, the whole feasible region is covered by polytopes where s = s′. Observe that the dimension of the subspace satisfying the linear equality varies with s. When f contains multiple layers, each polytope in the feasible set is defined by a linear equality constraint involving feature vectors together with a linear inequality for the sign of each ReLU input.
We optimize over this feasible set with a descent algorithm: Algorithm 1 details the version for single-layer networks. The method generalizes to multiple layers and is used to obtain our results in Section 4.2; a full description of the general version is in Appendix C. We initialize our search in a polytope that we know to be nonempty because it contains the feature space collision induced by the midpoint of the two examples. Within each polytope, we can minimize the objective exactly by solving a linear cone program. We examine the dual variables associated with the linear inequality constraints to determine whether an adjacent polytope exists in which we can continue the search.
In the one layer case, the cone program that we solve, which depends on (L, k, u, v, s), is
min ε subject to (ε, δ, δ′) ∈ R1+n1+n1 , δ ∈ εB, δ′ ∈ εB, diag(s)L(u+ δ) = diag(s)L(v + δ′),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s)− I)(k + L(v + δ′)) ≥ 0.
Algorithm 1 Descent with midpoint initialization
Input: u, v ∈ R^{n1}, L ∈ R^{n2×n1}, k ∈ R^{n2}
Output: ε ∈ R, δ, δ′ ∈ R^{n1}
1:  z ← k + ½L(u + v), ε ← ½‖u − v‖
2:  for 0 ≤ j < n2 do
3:      s_j ← 1(z_j > 0)
4:  end for
5:  repeat
6:      ε_old ← ε
7:      (ε, δ, δ′, z, z′) ← ConeLP(L, k, u, v, s)
8:      s_old ← s
9:      for 0 ≤ j < n2 do
10:         if z_j > 0 and z′_j > 0 then
11:             s_j ← 1 − s_j
12:         end if
13:     end for
14: until s_old = s or ε_old = ε
We need not just the values of the primal solution, but the values of dual variables associated with the linear inequalities in the solution to the dual problem. These are called z and z′ in Algorithm 2. When both zj > 0 and z′j > 0, the input to ReLU j is zero when both f(u+δ) and f(v+δ′) are computed, and the objective function could be decreased further by allowing the input to switch signs. In this case, we move to another polytope based on the new ReLU states and search there.
When we fail to find such pairs of dual variables, either the minimum is in the interior of the current polytope (and thus is supported only by cone constraints), or the minimum is on the boundary of the current polytope but there is no adjacent polytope in which we could continue the search. Thus we end the search.
Algorithm termination, convergence, and complexity: The descent algorithm will terminate in a finite number of iterations and will find a local minimum of the distance function. Since the feasible space is non-convex, any local descent procedure is not guaranteed to find the global minimum. The number of variables in the cone program is proportional to the number of ReLUs in the feature extractor plus the dimension of the example space. The time-complexity of a single iteration of the search is polynomial in the input dimension and number of ReLUs. The feasible region is the union of a finite but exponentially large number of convex polytopes. Due to the requirement that each iteration make progress, it is impossible to ever revisit a polytope. Thus, termination is guaranteed, but no polynomial bound on the number of iterations is available. We suspect that as in the case of the simplex algorithm for linear programming, input data resulting in an extremely large number of iterations may exist, but would have a delicate structure that is unlikely to arise in practice.
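A minimal Python sketch of this descent loop is given below. It assumes a helper cone_lp(L, k, u, v, s) that solves the per-polytope cone program (e.g. via CVXOPT, as in Appendix D) and returns the objective, the perturbations, and the dual variables z, z′ of the two sets of linear inequalities; that helper, its return signature, and the iteration cap are our own assumptions.

```python
import numpy as np

def descent_collision_search(L, k, u, v, cone_lp, max_iters=100):
    # Algorithm 1: start from the ReLU pattern of the midpoint and repeatedly
    # re-solve the cone program, flipping ReLU states whose dual variables
    # indicate the objective could decrease in an adjacent polytope.
    z = k + 0.5 * L @ (u + v)
    s = (z > 0).astype(int)
    eps = 0.5 * np.linalg.norm(u - v)
    delta = delta_p = None
    for _ in range(max_iters):
        eps_old, s_old = eps, s.copy()
        eps, delta, delta_p, z, z_p = cone_lp(L, k, u, v, s)
        flip = (z > 0) & (z_p > 0)          # both dual variables positive
        s = np.where(flip, 1 - s, s)
        if np.array_equal(s, s_old) or np.isclose(eps, eps_old):
            break
    return eps, delta, delta_p
```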
3.3 FURTHER COMMON LAYERS
Convolution: It is straightforward to represent a convolutional layer as a linear layer by constructing the appropriate Toeplitz matrix, and thus our method applies directly.
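A small PyTorch sketch of this reduction (our own) recovers the dense matrix by pushing standard basis vectors through a bias-free copy of the convolution; the bias simply becomes the constant k in f(x) = Lx + k.

```python
import torch

def conv_as_matrix(conv: torch.nn.Conv2d, in_shape):
    # Build the dense matrix M with conv(x).flatten() == M @ x.flatten() + bias_term,
    # by applying the (bias-free) convolution to each standard basis input.
    n_in = int(torch.tensor(in_shape).prod())
    basis = torch.eye(n_in).reshape(n_in, *in_shape)
    with torch.no_grad():
        cols = torch.nn.functional.conv2d(basis, conv.weight, bias=None,
                                          stride=conv.stride, padding=conv.padding)
    return cols.reshape(n_in, -1).T   # shape: (n_out, n_in)
```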
Batch-normalization: Batch norm layers are simply affine functions of their inputs at test-time, and the transformations they induce can be easily included in a linear layer. Our method thus applies.
Max pooling: Max-pool layers, like ReLUs, are piecewise linear functions of their inputs, so constraints coming from the equality of max-pool outputs also lead to feasible regions that are the union of polytopes. This is a simple extension left for future work due to a lack of space.
Other activation functions and architectures: Injective activation functions such as Leaky ReLU, ELU, and sigmoid will not lead to additional collisions. Further, since they are not piecewise linear, a different descent algorithm would be needed. We note that our framework cannot find collisions in networks with both forward and backward flows, such as those with attention.
4 EVALUATION
In this section, we demonstrate how the methodology outlined in the previous sections for determining the robustness of a fixed feature extractor can be used in practice. Our detailed results provide insights on how the overall robustness of a neural network evolves through its different layers and is impacted by the training procedure used.
4.1 FROM COLLISION-FINDING TO LOWER BOUNDS
We find lower bounds on robust loss over the training set by minimizing a loss function overGV,N,f .
Vertices V from training data: The vertex set V is a representation of the training data. We use 2-class problems derived from MNIST (LeCun & Cortes, 1998) or Fashion MNIST (Xiao et al., 2017) as in previous work (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021).
Neighborhood function N : We use the common `2-norm ball constraint (Madry et al., 2018), in which the adversary’s strength is parametrized by the radius of the ball 2.
Fixed feature extractors f : We use the composition of the layers of convolutional and fully-connected DNNs as our fixed feature extractors (details in Appendix E). The first layer of any of these networks (before a non-linear activation is applied) behaves as a linear feature extractor. Subsequent layers behave as non-linear feature extractors. These networks are trained using either standard cross-entropy loss minimization or robust training, with either PGD-based adversarial training (Madry et al., 2018) or the TRADES loss (Zhang et al., 2019).
Edge set E from collision finding: Algorithm 2 provides a greedy, iterative method to find collisions between feature representations of pairs of points from different classes. Each successful collision is an edge in the bipartite conflict graph. Each iteration of this algorithm can be cast as a convex program in the form of a linear cone (as detailed in Appendix D). We solve this linear cone program using CVXOPT (Andersen et al., 2013). Since we are working with an `2-norm constrained adversary, we can speed up computation by projecting the inputs onto the space spanned by the right singular vectors of the first linear layer. For linear convolutional layers, we cast the convolution operation as a matrix multiplication to enable the use of the closed form derived in Section 3.1. We also experimented with a modification of the Auto-PGD (Croce & Hein, 2020) algorithm to find collisions at deeper layers with fixed budgets as done in Engstrom et al. (2019). However, this was less effective and more expensive at finding collisions, so all further results use Algorithm 2.
Computing the lower bound from the conflict graph: The conflict graph determines the set of possible output probabilities for each vertex (training data point) for the optimal classifier. Minimizing the 0 − 1 and cross-entropy losses over this graph thus results in a lower bound over these losses since any classifier must incur a larger loss than the optimal classifier, by definition. We use the method from Bhagoji et al. (2021) over the conflict graphs we derive from deeper layer representations. Results in the main body are for the cross-entropy loss (others in Appendix F).
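A minimal sketch of this last step (our own, using CVXPY rather than the solver actually used in the paper) minimizes the expected cross-entropy of the correct-classification probabilities over the polytope of Lemma 1:

```python
import cvxpy as cp

def cross_entropy_lower_bound(B, p):
    # B: |E| x |V| edge incidence matrix of the (sub)conflict graph.
    # p: probability mass of each training point (sums to 1).
    q = cp.Variable(B.shape[1])
    constraints = [q <= 1, B @ q <= 1]          # q > 0 is implied by the log domain
    objective = cp.Minimize(cp.sum(cp.multiply(p, -cp.log(q))))
    cp.Problem(objective, constraints).solve()
    return objective.value
```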
4.2 RESULTS
Interpreting lower bound values: The lower bound on the input space representations is computed by considering all possible classifiers and is thus the best possible, while bounds for other representations are computed by restricting the space of possible classifiers. Intuitively, the latter will always be larger than or equal to the former. The key metric to understanding the robustness of feature extractors is to check how much larger the corresponding lower bounds are. The magnitude of this difference measures the contribution of that layer to the overall network’s lack of robustness.
Robustness of linear layer representations: Using the procedure outlined in Section 3.1, we find collisions after the first linear layer and construct a conflict graph for several models (Fig. 2).
2We are aware of the critiques of this constraint (Gilmer et al., 2018) and only use it to compare with previous work.
How does robustness evolve across layers? Having looked at how robust linear layer representations are, it is pertinent to consider representations derived from other layers deeper in the network. Of particular interest are post-ReLU representations for models with benign (Fig. 3) and robust training (Fig. 4). We find that the first ReLU activation layer contributes significantly to an increase in the lower bound for both benign and robustly trained networks. We observe that post-ReLU representations tend to be sparse, leading to an easier search problem and a larger number of collisions. The second linear layer, on the other hand, does not lead to much additional increase in the loss. This is a property of the particular architecture we use, and a smaller linear layer is likely to lead to larger increases in loss. Finally, the second set of ReLU activations does have a measurable impact on robustness, particularly for the benign network. The impact of layer width on robustness is discussed in Appendix F.1.
How does the parametrization of robust training impact layer-wise robustness? As expected, layers extracted from robustly trained networks are more robust than their benign counterparts for corresponding values of ε. We find a significant difference between PGD-based adversarial training and TRADES in terms of the robustness of their first linear layers (Fig. 2), but this largely disappears by the second ReLU activation layer (Fig. 5). Interestingly, we observe a phenomenon where layers of a network robustly trained using higher values of ε_train can be less robust than those trained using a lower value, when the value of ε at which evaluation is performed is less than the higher ε_train.
Impact of linear convolutional layers on robustness: As discussed in Section 3.3, the first convolutional layer can be thought of as simply a matrix multiplication, although the size of the resulting matrix can be large. We find that since the effective dimension of the resulting features is much larger than the input dimension for the datasets we consider, the linear convolutional layer does not lead to an increase in the lower bound (Fig. 11 in Appendix).
5 RELATED WORK AND DISCUSSION
5.1 RELATED WORK
Lower bounds on robustness: For cases when the data distribution is specified, Dohmatob (2019) and Mahloujifar et al. (2019) use the ‘blowup’ property to determine bounds on the robust loss, given
Certification and Verification: Work on certified robustness has considered techniques for training neural networks such that the resulting models are provably robust to perturbations upper bounded by a given budget (Kolter & Wong, 2018; Raghunathan et al., 2018; Cohen et al., 2019; Li et al., 2020). Typically, these models can only be certified to be robust to small budgets. In contrast, our work provides lower bounds on the robustness of partial models which are applicable across a large range of budgets. Approaches to verifying the robustness of neural networks (Bunel et al., 2017; Tjeng et al., 2019; Gowal et al., 2018) are closely related to our work, but differ in that they consider fixed end-to-end networks while we focus on a layer-by-layer analysis, allowing us to argue about the robustness of classifiers trained on top of given feature extractors.
5.2 DISCUSSION
Implications for training robust models: Our results indicate that layers in the network that reduce the effective dimension of their incoming inputs have the largest negative impact on robustness (see further results in Appendix F.1). Two prominent examples of this are ReLU layers, which reduce effective dimension by only considering non-negative outputs, and fully connected layers with fewer outputs than inputs. On the other hand, linear convolutional layers do not have a negative impact on robustness. This indicates that not reducing the effective dimension of any deeper feature to be lower than that of the input data is likely to benefit robustness. Further, our results confirm that the use of larger values of ε_train does not necessarily translate to higher robustness at lower budgets. This points towards the need for algorithms that can effectively train a network to be robust to more than a single adversary at a time. Finally, we find a qualitative difference in the layers learned using PGD-training and TRADES, implying interesting learning dynamics with different robust losses.
Extending to state-of-the-art models and datasets: All of our experiments in this paper are on simple models and datasets, which demonstrate the feasibility and use of our method. However, state-of-the-art feature extractors for datasets such as ImageNet are far deeper than those considered in this paper. Thus, our algorithm would need to be made considerably faster to handle these cases. While our framework can handle skip connections, networks with attention are beyond its scope. Nevertheless, the feature extractors we consider are robust for the tasks we evaluate them on, making our results and conclusions representative.
A PROOFS
Proof of Lemma 1. Suppose that e = ((u, 1), (v,−1)) ∈ E . Then, there are some z̃ ∈ Z , ũ ∈ N(u), and ṽ ∈ N(v) such that f(ũ) = f(ṽ) = z̃. We have qu ≤ h(f(ũ))1 = h(z̃)1, qv ≤ h(f(ṽ))−1 = h(z̃)−1, and h(z̃)1 + h(z̃)−1 = 1. Combining these gives the constraint (Bq)e ≤ 1, which appears in (1).
Now, we will show that each vector q in the polytope is achievable by some h. The construction is simple: at each point in the feature space, assign the largest possible probability to class 1: let h(z̃)1 = supw:z̃∈f(N(w)) qw and h(z̃)−1 = 1 − h(z̃)1. This achieves the desired performance for examples from class 1:
inf_{ũ∈N(u)} h(f(ũ))_1 = inf_{ũ∈N(u)} sup_{w : f(ũ)∈f(N(w))} q_w ≥ inf_{ũ∈N(u)} q_u = q_u.
For an example v in class −1, we have
inf_{ṽ∈N(v)} h(f(ṽ))_{−1} = inf_{ṽ∈N(v)} ( 1 − sup_{w : f(ṽ)∈f(N(w))} q_w )
                         = inf_{ṽ∈N(v)} inf_{w : f(ṽ)∈f(N(w))} (1 − q_w)
                         = inf_{w : ((w,1),(v,−1))∈E} (1 − q_w)
                         ≥ q_v.

The final inequality uses the fact that q satisfies Bq ≤ 1.
B INDUCED DISTANCE IS NOT A DISTANCE ON Z
The fact that ε∗_f is defined on X is essential. It is not possible to interpret the induced distance as a distance in the feature space Z. This is because f(Nε(x)), the set of features of points near x, cannot be derived from f(N0(x)), the set of features of the uncorrupted version of x: distinct choices of x may lead to the same features f(N0(x)), but f may vary more in the neighborhood of one choice of x than the other.
An example is helpful to illustrate this point. Let X = X̃ = R², let Z = R, let Nε(x) be the closed ℓ2 ball of radius ε around x, and let f(x0, x1) = arctan(x1/x0) for x ≠ 0 (the value that we pick for f(0, x1) is irrelevant). In other words, f finds the angle between the horizontal axis and the line containing x. The range of values that the adversary can cause f(x̃) to take depends on ‖x‖: if ‖x‖2 < ε, f(x̃) can be any angle from 0 to π, and if ‖x‖2 ≥ ε, |f(x̃) − f(x)| ≤ arcsin(ε/‖x‖2). This example also illustrates that ε_f is not necessarily a metric, even in cases where d is a metric. Given u, v ∈ R², we have ε_f(u, αu) = 0 and ε_f(v, αv) = 0, and ε_f(αu, αv) can be made arbitrarily small by selecting α to be small. Despite this, ε_f(u, v) will be on the order of min(‖u‖, ‖v‖). An alternative approach to studying the induced distance is to define an adversarial classification problem on Z by taking N^Z_ε(z) = f(Nε(f⁻¹({z}))). Note that this construction only makes sense when X = X̃ and thus the domain of f is X. This is more conservative: for any feature point z it considers the worst-case x with feature z, i.e. the x in whose neighborhood f varies the most.
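A small numerical illustration of this example (our own, with arbitrary sample sizes and test points) is sketched below: it estimates the spread of angles f(x̃) over the ε-ball around x and shows that the spread depends on ‖x‖, so it cannot be a function of f(x) alone.

```python
import numpy as np

def angle_spread(x, eps, n_samples=100000, rng=np.random.default_rng(0)):
    # Empirically estimate how much f(x0, x1) = arctan(x1 / x0) can vary
    # over the closed l2 ball of radius eps around x (valid here since x0 - eps > 0).
    d = rng.normal(size=(n_samples, 2))
    d *= (eps * rng.uniform(size=(n_samples, 1)) ** 0.5) / np.linalg.norm(d, axis=1, keepdims=True)
    pts = x + d
    angles = np.arctan2(pts[:, 1], pts[:, 0])
    return angles.max() - angles.min()

# Two points with the same angle f(x) but different norms have very different spreads:
print(angle_spread(np.array([2.0, 0.0]), eps=1.0))   # roughly 2*arcsin(1/2)  ~ 1.05
print(angle_spread(np.array([10.0, 0.0]), eps=1.0))  # roughly 2*arcsin(1/10) ~ 0.20
```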
C DESCENT ALGORITHM FOR MULTIPLE LAYER NETWORKS
We start by repeating the notation used in Section 3.2: f = f^(ℓ) ◦ · · · ◦ f^(1), where the layer-i function is f^(i) : R^{ni} → R^{ni+1}, f^(i)(z) = (k^(i) + L^(i)z)+. Then ε_f(u, v) is the value of the following optimization problem:

min ε subject to δ ∈ εB, δ′ ∈ εB, (f^(ℓ) ◦ · · · ◦ f^(1))(u + δ) = (f^(ℓ) ◦ · · · ◦ f^(1))(v + δ′).

Using the local linear approximation, the equality constraint becomes

diag(s^(ℓ))(k^(ℓ) + L^(ℓ)(· · · diag(s^(1))(k^(1) + L^(1)(u + δ)) · · ·))
= diag(s′^(ℓ))(k^(ℓ) + L^(ℓ)(· · · diag(s′^(1))(k^(1) + L^(1)(v + δ′)) · · ·))
and for each 1 ≤ i ≤ ℓ, the inequalities

(2 diag(s^(i)) − I)(k^(i) + L^(i)(· · · diag(s^(1))(k^(1) + L^(1)(u + δ)) · · ·)) ≥ 0
(2 diag(s′^(i)) − I)(k^(i) + L^(i)(· · · diag(s′^(1))(k^(1) + L^(1)(v + δ′)) · · ·)) ≥ 0

must hold in order for the linear approximation to be valid. Any collision must be in a polytope with s^(ℓ) = s′^(ℓ), which is why we only needed one set of s variables in the single-layer case. However, ReLU activation variables for earlier layers cannot be merged.
The main modification to the algorithm is in how the s variables are updated after each iteration. The variables for all but the final layer are allowed to change independently, while the variables in the final layer are changed together, following the same rule as in the single-layer algorithm.
Algorithm 2 Descent with midpoint initialization (multi-layer)
Input: u, v ∈ R^{n1}; layer weights L^(1), ..., L^(ℓ) and biases k^(1), ..., k^(ℓ)
Output: ε ∈ R, δ, δ′ ∈ R^{n1}
1: ε ← ½‖u − v‖; set each s^(i) and s′^(i) to the ReLU activation pattern of layer i on the forward pass of the midpoint (u + v)/2
2: repeat
3:    ε_old ← ε
4:    (ε, δ, δ′, z, z′) ← ConeLP(L^(1), ..., L^(ℓ), k^(1), ..., k^(ℓ), u, v, s^(1), ..., s^(ℓ), s′^(1), ..., s′^(ℓ−1))
5:    s_old ← s
6:    for 1 ≤ i ≤ ℓ − 1 do
7:        for 0 ≤ j < n_{i+1} do
8:            if z^(i)_j > 0 then s^(i)_j ← 1 − s^(i)_j end if
9:            if z′^(i)_j > 0 then s′^(i)_j ← 1 − s′^(i)_j end if
10:       end for
11:   end for
12:   for 0 ≤ j < n_{ℓ+1} do
13:       if z^(ℓ)_j > 0 and z′^(ℓ)_j > 0 then s^(ℓ)_j ← 1 − s^(ℓ)_j end if
14:   end for
15: until s_old = s or ε_old = ε
D CONVERTING COLLISION FINDING TO A LINEAR CONE PROGRAM
In this section, we demonstrate how the collision finding problem after one linear and one ReLU layer can be cast as a linear cone program.
We have a network whose first layer is v ↦ Lv + k, where L ∈ R^{n1×n0} and k ∈ R^{n1}. Given a pair of points (v′, v′′) ∈ R^{2n0}, we would like to search over the space of (δ′, δ′′) ∈ R^{2n0} such that (L(v′ + δ′) + k)+ = (L(v′′ + δ′′) + k)+. This space is always nonempty because we can take v′ + δ′ = v′′ + δ′′ = (v′ + v′′)/2.
Let S ⊆ [n1] be the subset of active ReLUs. Let F ∈ R^{|S|×n1} be the inclusion indicator matrix for the subset: F_{i,j} = 1 if and only if j ∈ S and |S ∩ [j]| = i (there are exactly i elements of S strictly smaller than j, so j is the (i + 1)-st element of S). Let D be the diagonal matrix with D_{j,j} = 1 for j ∈ S and D_{j,j} = −1 for j ∉ S.
For j ∈ S (the active ReLUs), we need the constraint (L(v′ + δ′) + k)_j = (L(v′′ + δ′′) + k)_j. In matrix form, this is FL(v′ + δ′) = FL(v′′ + δ′′).
For j ∈ S, we need (L(v′ + δ′) + k)_j ≥ 0 and (L(v′′ + δ′′) + k)_j ≥ 0, and for j ∉ S, we need (L(v′ + δ′) + k)_j ≤ 0 and (L(v′′ + δ′′) + k)_j ≤ 0. In matrix form, these are D(L(v′ + δ′) + k) ≥ 0 and D(L(v′′ + δ′′) + k) ≥ 0. Our objective is max(‖δ′‖₂, ‖δ′′‖₂). We replace it with a linear objective by adding two additional variables (t′, t′′) satisfying t′ ≥ ‖δ′‖₂ and t′′ ≥ ‖δ′′‖₂. The CVXOPT system is
minimize    cᵀx
subject to  Gx + s = h
            Ax = b
            s ⪰ 0
The cones involved are a nonnegative orthant of dimension 2n1 and two second-order cones, each of dimension n0 + 1. Thus s = (D(L(v′ + δ′) + k), D(L(v′′ + δ′′) + k), t′, δ′, t′′, δ′′), and we let x = (t, δ′, δ′′). The matrix relating s and x is G ∈ R^{(n1+n1+1+n0+1+n0)×(1+n0+n0)}, with s = h − Gx. The block structure is
G = [  0    −DL     0  ]          h = [ D(Lv′ + k)  ]
    [  0      0    −DL  ]              [ D(Lv′′ + k) ]
    [ −1      0      0  ]              [ 0           ]
    [  0     −I      0  ]              [ 0           ]
    [ −1      0      0  ]              [ 0           ]
    [  0      0     −I  ]              [ 0           ]

We use Ax = b to encode FL(δ′ − δ′′) = −FL(v′ − v′′), so A ∈ R^{|S|×(1+n0+n0)}, b ∈ R^{|S|}, and

A = ( 0   FL   −FL ),    b = −FL(v′ − v′′).

Finally,

c = ( 1   0   0 )ᵀ.
D.1 FACTORIZED L
If n1 < n0, then we can take advantage of the fact that the rank of L is at most n1. Let L = UΣVᵀ, with zero singular values dropped. Replace L(v + δ) with UΣ(Vᵀv + δ̃), where δ̃ stands in for Vᵀδ, and replace n0 with the size of Σ.
New G and A:
G = [  0    −DUΣ     0   ]          A = ( 0   FUΣ   −FUΣ )
    [  0       0    −DUΣ  ]
    [ −1       0      0   ]
    [  0      −I      0   ]
    [ −1       0      0   ]
    [  0       0     −I   ]

Also, the length of c is reduced. The constant vectors h and b are unchanged.
E FURTHER EXPERIMENTAL DETAILS
We consider the following two architectures for our experiments:
1. 3-layer fully connected neural network (FCNN): 300 FC - ReLU - 200 FC - ReLU - 2 FC
2. 4-layer convolutional neural network (CNN): 20×5×5 conv. - BN - ReLU - 2×2 MaxPool - 50×5×5 conv. - BN - ReLU - 2×2 MaxPool - 500 FC - 2 FC
We also construct a ‘Small’ and ‘Smaller’ version of the FCNN with layers that are 2× and 4× narrower respectively.
F ADDITIONAL RESULTS
F.1 IMPACT OF REDUCTION OF NETWORK SIZE
In Figure 6, we compare the robustness of representations obtained from fully connected networks with decreasing layer sizes. The ‘Regular’ network is the one used throughout, while the ‘Small’ and ‘Smaller’ networks have corresponding layers that are 2× and 4× narrower respectively. We can clearly see that as the width of the feature extractor decreases, so does its robustness.
F.2 0− 1 LOSS RESULTS
In Figures 7, 8, 9 and 10 we provide lower bounds on the 0 − 1 loss in the same settings as those considered in the main body of the paper. We note that the results and conclusions remain the same qualitatively.
F.3 LINEAR CONVOLUTIONAL LAYERS
In Figure 11, we can see that the representations extracted from the first linear layer of a convolutional network do not have any negative impact on the robustness of the overall model. | 1. What is the focus of the paper regarding deriving lower bounds on adversarial attacks?
2. What are the strengths and weaknesses of the proposed approach in extending previous work for evaluating model robustness?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What are the limitations of the empirical evidence supporting the claims made in the paper?
5. What are some potential future works suggested by the reviewer? | Summary Of The Paper
Review | Summary Of The Paper
The authors present a method for deriving a lower bound on the loss of a specific model architecture under adversarial attack. The method heavily relies on previous work [Bhagoji, 2021]. The main contribution of the paper is therefore extending the concepts presented in [Bhagoji, 2021] for evaluating the input of a model to the output of later layers. Using their proposed method the authors perform an evaluation of the robustness of a simple feature extractor.
Review
Please find my full review of the paper "Lower Bounds on the Robustness of Fixed Feature Extractors to Test-Time Adversaries" below. Unfortunately I cannot recommend the current version of the presented manuscript for acceptance at the ICLR conference. The main contributors to this decision are the lack of clarity in the contributions of the work, the empirical evidence supporting the claims being too sparse, and the large amount of crucial work left for future work.
Before further elaborating on my reservations about the paper, I would like to clearly state that I found the paper to be particularly hard to follow. Personally I believe that this is primarily due to the introduction not properly introducing the setting and aims of the paper, but I should also acknowledge that I was not entirely familiar with the work [Bhagoji, 2021] (on which this paper heavily builds) before getting assigned this paper for review. For this reason, I would be very interested in hearing from the other reviewers to what extent they found the paper hard to follow as well.
Contributions of work
If I understood correctly, the main contribution of the paper is to provide a lower bound on the adversarial robustness of specific models. This is in contrast with earlier work on adversarial robustness certification that focusses on arbitrary classification functions. While this does sound interesting, I struggle in finding how this differs from performing a simple adversarial robustness evaluation on a model using a standard adversarial attack. This should also provide a lower-bound on the true loss of the model under attack. I would be grateful if the authors could clarify this for me.
The method by which the authors achieve this lower bound heavily builds upon earlier work [Bhagoji, 2021]. Again, if I understood correctly, the main contribution of the work presented here is extending this method to small (mostly linear) neural networks using a greedy search algorithm. Given that there is no real evaluation of how efficient this search algorithm is, I find it difficult to evaluate the significance of this contribution.
Empirical Evidence
The authors use their proposed method to derive a number of general statements about layers that hypothetically lead to a measurable drop in robustness. Based on the rigour of the presented empirical evaluation, I do not believe that these general claims can be made. In general, the method proposed by the authors is only applied to a very limited set of network architectures. The models evaluated in the paper are in general very shallow and in my opinion not representative of architectures used in practice. For example, the architecture used is only applicable to binary classification.
The fact that this limitation of the architecture is not clearly communicated in the main body of the paper is something that I find to be especially troubling.
Future work
Lastly, I believe that the current version of the manuscript, unfortunately, leaves most of the interesting work for the future. For example, the extension to CNNs, BatchNormalization and Residual Blocks is left for future work. With these commonly used layers pushed to future work, I find it hard to judge the applicability of the presented method.
Updates after rebuttal
I've increased my score. Please see the comment chain for further clarification. |
ICLR | Title
Lower Bounds on the Robustness of Fixed Feature Extractors to Test-time Adversaries
Abstract
Understanding the robustness of machine learning models to adversarial examples generated by test-time adversaries is a problem of great interest. Recent theoretical work has derived lower bounds on how robust any model can be, when a data distribution and attacker constraints are specified. However, these bounds only apply to arbitrary classification functions and do not account for specific architectures and models used in practice, such as neural networks. In this paper, we develop a methodology to analyze the robustness of fixed feature extractors, which in turn provides bounds on the robustness of any classifier trained on top of it. The tightness of these bounds relies on the effectiveness of the method used to find collisions between pairs of perturbed examples at deeper layers. For linear feature extractors, we provide closed-form expressions for collision finding while for arbitrary feature extractors, we propose a bespoke algorithm based on the iterative solution of a convex program that provably finds collisions. We utilize our bounds to identify the layers of robustly trained models that contribute the most to a lack of robustness, as well as compare the same layer across different training methods to provide a quantitative comparison of their relative robustness.
1 INTRODUCTION
The robustness of machine learning models to test-time (evasion) attacks, particularly via adversarial examples, has been studied extensively from an empirical perspective (Croce et al., 2020; Papernot et al., 2016; Li et al., 2020; Biggio & Roli, 2017). A plethora of attacks (Szegedy et al., 2013; Carlini & Wagner, 2017; Bhagoji et al., 2018; Croce & Hein, 2020) and defenses (Madry et al., 2018; Zhang et al., 2019) has been proposed. The resulting arms race between proposed attacks and defenses, has led recent theoretical work (Dohmatob, 2019; Cullina et al., 2018; Mahloujifar et al., 2019; Bhagoji et al., 2019; 2021; Montasser et al., 2019) to focus on understanding the fundamental bounds of learning in the presence of adversarially perturbed inputs, in terms of both the existence of robust classifiers and the sample complexity required to learn them.
A line of research on this front has investigated lower bounds on the robustness attainable by any classifier when classifying data from distributions modified by adversarial perturbations (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021). In particular, information-theoretic lower bounds on the minimum possible robust 0 − 1 and cross-entropy losses have been derived for arbitrary discrete distributions and very general attack models. However, the bounds from these papers are only applicable when considering the set of possible classifiers to be all measurable functions, which does not translate to practice. The classifier architecture and even some portion of the parameters are fixed (such as in transfer learning) in practical approaches to training machine learning models.
In this paper, we leverage this line of work to find lower bounds on the robustness of commonly used, fixed feature extractors. This provides a principled method to compare the robustness of feature extractors obtained by different training methods to arbitrary test-time adversaries. The key question we answer is:
What is the minimum robust loss incurred by any classifier trained on top of a fixed feature extractor?
Two challenges for training robust machine learning models motivate this question. First, since transfer learning is now common practice, it is incumbent upon the model trainer to choose a robust pre-trained feature extractor. Our work offers a principled and quantitative method to choose among
feature extractors obtained from different layers and training methods. Second, we are able to shed light on the impact of common architectural choices, such as activation functions and dimensionality reduction on robustness. This arises from our study of how the lower bound evolves from layer to layer within deep neural networks.
Importance of lower bounds on robustness: To determine classifier-agnostic lower bounds on robustness, earlier work focused on the interaction between points from different classes in the input space when perturbed through the construction of a conflict graph. Minimizing an appropriately defined loss function over this graph determines an information-theoretic lower bound on the loss. Intuitively, the denser the graph is, the higher the lower bound. These classifier-agnostic bounds are able to determine the plausibility of robust classification for a given adversary and dataset, and highlight practically relevant perturbation budget.
1.1 CONTRIBUTIONS
Lower bounds on robustness for fixed feature extractors: We significantly extend prior work by deriving lower bounds that depend on the architecture and weights of a fixed feature extractor. We construct a distance function over the input space that depends on the feature extractor in use. Our results have implications both for analyzing the robustness of already trained classifiers and for determining which feature extractors to use for applications such as transfer learning.
Bespoke algorithms for collision finding: For linear feature extractors such as the first layer of a neural network, we determine exact, closed form expressions to find collisions. Collision finding for non-linear feature extractors is a non-convex optimization problem, for which we propose a custom algorithm. We approach the collision-finding problem with a descent algorithm that solves a sequence of convex optimization problems over polytopes. This algorithm has the structural advantage of maintaining feasibility (for some adversarial budget constraint) at each step. Despite the fact that it does not find a global minimum, the solutions found are useful for obtaining not just approximations but actual lower bounds on optimal adversarial classification loss.
Empirical findings: We utilize our method to find numerical lower bounds on the robustness of fixed feature extractors trained on two different datasets with both fully-connected and convolutional architectures. Our results indicate that the use of dimensionality-reducing linear layers as well as the effective compression induced by ReLU activations lead to significantly less robust networks. Further, we confirm the oft-cited observation that mismatches between the adversary’s budget at train and test time impacts robustness negatively. Taken together, these findings point towards future design considerations for robust models.
2 DERIVING LOWER BOUNDS FOR FIXED FEATURE EXTRACTORS
In this section, we develop a method for evaluating the robustness of a feature extractor for classification in the presence of a test-time adversary. We characterize the optimal adversarial loss achievable by any classifier that uses the fixed feature extractor as its initial layer. This characterization is based on a conflict graph derived from a discrete data distribution and the constraints of the adversary.
Examples and labels: We consider a supervised classification problem with a test-time adversary. We have an example space X and a label space Y = {−1, 1}. Labeled examples are sampled from a joint probability distribution P over X × Y.
Test-time adversary: The test-time adversary modifies a natural example, subject to some constraints, to generate an adversarial example that will be misclassified (Goodfellow et al., 2015; Szegedy et al., 2013; Carlini & Wagner, 2017). Formally, the adversary samples a labeled example (x, y) from P and selects x̃ ∈ N(x), where N : X → 2^X̃ is the neighborhood function encoding the constraints on the adversary and X̃ is the space of adversarial examples.¹ For all x, N(x) must be nonempty. This definition encompasses the ℓp family of constraints widely used in previous work.
1In most (but not all) settings that have previously been studied, X̃ = X . We believe that making the distinction helps to clarify some of our definitions: their applicability in this general context affects what properties they can be expected to have.
Measuring adversarial loss: We consider ‘soft’ classification functions (or classifiers) that map examples to probability distributions over the classes. These are h : X → ∆Y, where ∆Y = {p ∈ R^Y : p ≥ 0, Σ_{y∈Y} p_y = 1}. We measure classification performance with a loss function ℓ : ∆Y × Y → R, so the expected performance of a classifier h is E[sup_{x̃∈N(x)} ℓ(h(x̃), y)], where (x, y) ∼ P. In the two-class setting, h(x̃)_{−1} = 1 − h(x̃)_1, so any loss function that treats the classes symmetrically can be expressed as ℓ(p, y) = ℓ′(p_y). Additionally, ℓ′ should be decreasing. Together, these allow the optimization over adversarial examples to be moved inside the loss function, giving E[ℓ′(inf_{x̃∈N(x)} h(x̃)_y)]. Thus, the adversarial optimization can be analyzed by itself.
Definition 1. For a soft classifier h, the correct-classification probability qv that it achieves on an example v = (x, y) in the presence of an adversary is qv = inf x̃∈N(x) h(x̃)y . The space of achievable correct classification probabilities is
PV,N,H = ⋃ h∈H { q ∈ RV : ∀(x, y) ∈ V, 0 ≤ q(x,y) ≤ inf x̃∈N(x) h(x̃)y } .
Bhagoji et al. (2021) characterized PV,N,H in the case that H is the class of measurable functions X → ∆Y . For a data distribution P that is discrete with finite support V ⊆ X × Y , this allows the minimum adversarial loss achievable to be expressed as an optimization over PV,N,H:
inf_{h∈H} E_{(x,y)∼P} [ sup_{x̃∈N(x)} ℓ(h(x̃), y) ] = inf_{q∈P_{V,N,H}} Σ_{v∈V} P({v}) ℓ′(q_v).
We extend their approach to analyze feature extractors as follows. Given a feature space Z and a fixed, measurable feature extractor f : X̃ → Z, define Hf = {h ∈ H : h = g ◦ f}: an element of Hf is some measurable g : Z → ∆Y composed with f. Our aim is to characterize P_{V,N,Hf} so that we can optimize loss functions over it to evaluate the suitability of f.
Conflict graph: In Bhagoji et al. (2021), the notion of a conflict graph was used to record neighborhood intersections for pairs of points from different classes. When such an intersection exists, it is impossible for any classifier to correctly classify both of those points in the adversarial setting. We extend this notion to apply to the family of classifiers using a fixed feature extractor f . In our setting, a conflict exists between a pair of points when each of them has a neighbor that is mapped to the same point in the feature space.
We call the conflict graph G_{V,N,f}, where V ⊆ X × Y. When we apply G_{V,N,f} in order to understand classification of labeled examples with distribution P, we take V to be the support of P. The graph is bipartite: the vertex set is partitioned into parts V_c = V ∩ (X × {c}). The edge set is
EV,N,f = {((u, 1), (v,−1)) ∈ V1 × V−1 : ∃ũ ∈ N(u), ṽ ∈ N(v) such that f(ũ) = f(ṽ)}.
Lemma 1 (Feasible output probabilities). The set of correct classification probability vectors for support points V, adversarial constraint N, and hypothesis class Hf is
PV,N,Hf = {q ∈ RV : q ≥ 0, q ≤ 1, Bq ≤ 1} (1)
where B ∈ RE×V is the edge incidence matrix of the conflict graph GV,N,f .
The proof is given in Appendix A and follows the structure of the proof in Bhagoji et al. (2021).
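To make the use of Lemma 1 concrete, the following sketch shows how a lower bound could be computed from a given conflict graph by minimizing an expected loss over the polytope in (1). It is a minimal illustration, not the authors' implementation: it assumes the edge incidence matrix B and the per-vertex probability masses have already been constructed, and it uses the linear loss ℓ′(q) = 1 − q so that the problem becomes a linear program solvable with scipy.

```python
import numpy as np
from scipy.optimize import linprog

def lower_bound_linear_loss(B, p):
    """Minimize sum_v p_v * (1 - q_v) subject to 0 <= q <= 1, B q <= 1.

    B : (num_edges, num_vertices) edge incidence matrix of the conflict graph.
    p : (num_vertices,) probability mass of each support point.
    Returns a lower bound on the loss of any classifier in H_f.
    """
    n = len(p)
    # linprog minimizes c^T q; minimizing sum p_v (1 - q_v) == maximizing p^T q.
    c = -np.asarray(p, dtype=float)
    res = linprog(c, A_ub=B, b_ub=np.ones(B.shape[0]),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    assert res.success
    return float(np.dot(p, 1.0 - res.x))

# Toy example: two vertices (one per class) joined by a single conflict edge.
B = np.array([[1.0, 1.0]])       # the edge forces q_u + q_v <= 1
p = np.array([0.5, 0.5])
print(lower_bound_linear_loss(B, p))  # 0.5: both points cannot be classified correctly
```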
Approximating GV,N,f and PV,N,Hf : If instead of knowing the true conflict graph GV,N,f , we have some subgraph, then we can find a polytope that is a superset of the true PV,N,Hf . If we minimize some expected loss over this proxy polytope, we obtain a lower bound on the optimal loss over Hf . Because subgraphs of the conflict graph lead to valid lower bounds on optimal classification performance, we can use this method to evaluate the quality of a feature extractor f even if exact computation of the conflict graph is computationally intractable.
2.1 DISTANCE INTERPRETATION
The construction of a conflict graph and characterization of the feasible correct classification probabilities from the previous section apply to any feature extractor and neighborhood constraint. In
the most commonly studied settings, the neighborhood constraint arises from a distance function: Nε(x) = {x̃ ∈ X̃ : d(x, x̃) ≤ ε}. This parameter ε is the adversarial budget constraint. For any two examples u and v, a natural quantity to consider is the smallest adversarial budget that would cause the edge (u, v) to appear in the conflict graph:
ε∗(u, v) = inf{ε ≥ 0 : Nε(u) ∩ Nε(v) ≠ ∅} = inf_{x̃∈X̃} max(d(u, x̃), d(v, x̃)).
We will call this the distance on X induced by d. When X = X̃ = R^d and d(x, x′) = ‖x − x′‖_p, the minimal adversarial budget is simply (1/2)‖u − v‖_p. Because the relationship to the distance used in the adversarial budget constraint is so simple, this quantity is not usually discussed independently. This definition generalizes easily to the setting of classification with a particular feature extractor, but the resulting quantity is much more interesting.
Definition 2. The distance induced on X by d and f is the minimum adversarial budget required to create a conflict between u and v after a feature extractor f :
ε∗_f(u, v) = inf{ε ≥ 0 : f(Nε(u)) ∩ f(Nε(v)) ≠ ∅} = inf_{ũ,ṽ∈X̃ : f(ũ)=f(ṽ)} max(d(u, ũ), d(v, ṽ)). (2)
This reduces to ε∗(u, v) when f is the identity function, or more generally any injective function. Observe that any choice of ũ and ṽ in (2) provides an upper bound on ε∗_f(u, v), which is useful for finding a subgraph of G_{V,N,f} and lower bounds on optimal classification performance in Hf. Figure 1 illustrates the computation of ε∗_f for a simple f that is related to the ReLU function.
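As a concrete illustration of this remark, the following sketch evaluates the upper bound on ε∗_f given by a candidate pair (ũ, ṽ) that collides in feature space. It is a hypothetical helper, not part of the paper's implementation, and assumes an ℓ2 budget.

```python
import numpy as np

def collision_budget_upper_bound(u, v, u_tilde, v_tilde, f, atol=1e-6):
    """Upper bound on eps*_f(u, v) from a candidate collision (u_tilde, v_tilde).

    Any pair with f(u_tilde) == f(v_tilde) witnesses a conflict at budget
    max(||u - u_tilde||_2, ||v - v_tilde||_2); see Definition 2.
    """
    assert np.allclose(f(u_tilde), f(v_tilde), atol=atol), "not a collision"
    return max(np.linalg.norm(u - u_tilde), np.linalg.norm(v - v_tilde))

# Example with a non-injective "feature": keep only the first coordinate.
f = lambda x: x[:1]
u, v = np.array([1.0, 0.5]), np.array([-1.0, -0.5])
u_tilde, v_tilde = np.array([0.0, 0.5]), np.array([0.0, -0.5])
# Bound of 1.0, cheaper than the identity-map distance 0.5 * ||u - v|| ~= 1.118.
print(collision_budget_upper_bound(u, v, u_tilde, v_tilde, f))
```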
Induced distance is not a distance on the feature space: The fact that ε∗_f is defined on X is essential: it is not possible to interpret it as a distance on Z. For a full explanation of this point, see Appendix B.
3 DISTANCE COMPUTATIONS FOR PRACTICAL FEATURE EXTRACTORS
Throughout the remainder of the paper, we will work in a more concrete setting. We assume that our example space is a real vector space and that adversarial examples are from the same space: X = X̃ = R^{n1}. Let B ⊆ X be a nonempty, closed, convex, origin-symmetric set. These conditions imply that the zero vector is contained in B. We take the neighborhood of any point to be a scaled, shifted version of this ball: Nε(x) = x + εB. Neighborhood constraints derived from ℓp norms fit into this class: we encode them by defining B = {δ ∈ R^{n1} : ‖δ‖_p ≤ 1}, which results in Nε(x) = {x̃ ∈ R^{n1} : ‖x̃ − x‖_p ≤ ε}. We will focus on ℓ2-based neighborhoods, but our algorithms could be adapted to any choice of B over which efficient optimization is possible.
3.1 LINEAR FEATURE EXTRACTORS
Suppose that our feature extractor is an affine linear function of the input example: f(x) = Lx+ k for some matrix L ∈ Rn2×n1 and vector k ∈ Rn2 . Then the distance εf (u, v) becomes
inf{ε ≥ 0 : {k + L(u + δ) : δ ∈ εB} ∩ {k + L(v + δ′) : δ′ ∈ εB} ≠ ∅}.
Because {(ε, δ) ∈ R^{1+n1} : δ ∈ εB} is a closed convex cone, ε_f(u, v) can be expressed as the following convex optimization problem:
inf ε subject to δ ∈ εB, δ′ ∈ εB, L(δ − δ′) = L(v − u). Because B is closed and the linear subspace is always nonempty, the infimum is achieved. Also, it is sufficient to consider (δ, δ′) satisfying δ′ = −δ: for any feasible (ε, δ, δ′), the point (ε, (δ − δ′)/2, (δ′ − δ)/2) is also feasible, has the same value, and satisfies the additional constraint. The feasibility of the symmetrized point uses the origin-symmetry of B. The simplified program is
min ε subject to δ ∈ εB, Lδ = L(v − u)/2. Thus the optimal adversarial strategy for creating conflict between u and v is intuitive: produce examples with the same features as the midpoint (u+ v)/2.
If B is the unit `2 ball, there are further simplifications. Consider the singular value decomposition L = UΣV T where we do not include zero singular values in Σ. Then the linear map given by UΣ is injective and can be canceled from the linear constraint on δ. The resulting program is
min ε subject to ‖δ‖_2 ≤ ε, V^T δ = (1/2) V^T(v − u),
the optimal choice of δ is δ = (1/2) V V^T(v − u), and ε_f(u, v) = (1/2)‖V^T(v − u)‖_2. Observe that this is the norm of a vector in R^{n2}, i.e. the feature space, contrasting with our discussion in Appendix B. However, this is essentially the only case in which ε_f(u, v) simplifies into a feature space distance.
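The closed form above is straightforward to implement. The sketch below is a minimal numpy version (not the authors' code) that computes ε_f(u, v) and the optimal perturbation for an affine layer f(x) = Lx + k under an ℓ2 budget; note that k and the zero singular values drop out, exactly as in the derivation.

```python
import numpy as np

def linear_layer_collision(L, u, v, tol=1e-10):
    """Exact eps_f(u, v) for an affine feature extractor f(x) = L x + k.

    Returns (eps, delta) with delta' = -delta: both perturbed points map to
    the features of the midpoint (u + v) / 2.
    """
    # SVD with zero singular values discarded (rank truncation).
    U, S, Vt = np.linalg.svd(L, full_matrices=False)
    V = Vt[S > tol].T                        # right singular vectors of L
    delta = 0.5 * V @ (V.T @ (v - u))        # project the midpoint displacement
    eps = 0.5 * np.linalg.norm(V.T @ (v - u))
    return eps, delta

# Example: a dimension-reducing layer that ignores its second input coordinate.
L = np.array([[1.0, 0.0]])
u, v = np.array([1.0, 3.0]), np.array([-1.0, -3.0])
eps, delta = linear_layer_collision(L, u, v)
print(eps)                                  # 1.0, much smaller than 0.5 * ||u - v|| ~= 3.16
print(L @ (u + delta), L @ (v - delta))     # identical features
```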
3.2 FULLY CONNECTED NEURAL NETWORKS WITH RELU ACTIVATIONS
For this feature extractor architecture, we have f = f^(ℓ) ◦ · · · ◦ f^(1), where the layer-i function is f^(i) : R^{n_i} → R^{n_{i+1}}, f^(i)(z) = (k^(i) + L^(i)z)_+. Here z_+ represents the component-wise positive part of the vector z. Then ε_f(u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f^(ℓ) ◦ · · · ◦ f^(1))(u + δ) = (f^(ℓ) ◦ · · · ◦ f^(1))(v + δ′). As in the linear case, the minimum exists because the cones are closed and the equality constraint is feasible. In contrast with the linear case, the equality constraint is nonconvex.
The local linear approximation to f^(i)(z) around a point z′ is diag(s^(i,z′))(k^(i) + L^(i)z), where s^(i,z′) is a zero-one vector that depends on the sign pattern of k^(i) + L^(i)z′: s^(i,z′)_j = 1((k^(i) + L^(i)z′)_j > 0). In other words, s^(i,z′) is the ReLU activation pattern at layer i when z′ is the input to that layer.
Using these linear approximations, the feasible set of (δ, δ′) ∈ Rn1+n1 satisfying the constraint f(u + δ) = f(v + δ′) can be decomposed as a union of polytopes: each activation pattern defines a linear subspace and there are some linear inequalities specifying the region where that activation pattern actually occurs. For a one-layer network, the linear piece for pattern s is
f(z) = diag(s)(k + Lz) for diag(2s − 1)(k + Lz) ≥ 0. Thus one of the polytopes composing the feasible set is (δ, δ′) ∈ R^{n1+n1} satisfying
diag(s)(k + L(u+ δ)) = diag(s′)(k + L(v + δ′)),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s′)− I)(k + L(v + δ′)) ≥ 0.
In the one-layer case, the whole feasible region is covered by polytopes where s = s′. Observe that the dimension of the subspace satisfying the linear equality varies with s. When f contains multiple layers, each polytope in the feasible set is defined by a linear equality constraint involving feature vectors together with a linear inequality for the sign of each ReLU input.
We optimize over this feasible set with a descent algorithm: Algorithm 1 details the version for single-layer networks. The method generalizes to multiple layers and is used to obtain our results in Section 4.2. A full description of the general version is in Appendix C. We initialize our search in a polytope that we know to be nonempty because it contains the feature space collision induced by the midpoint of the two examples. Within each polytope, we can minimize the objective exactly by solving a linear cone program. We examine the dual variables associated with the linear inequality constraints to determine whether an adjacent polytope exists in which we can continue the search.
In the one layer case, the cone program that we solve, which depends on (L, k, u, v, s), is
min ε subject to (ε, δ, δ′) ∈ R1+n1+n1 , δ ∈ εB, δ′ ∈ εB, diag(s)L(u+ δ) = diag(s)L(v + δ′),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s)− I)(k + L(v + δ′)) ≥ 0.
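For readers who want to experiment with this subproblem, the sketch below states the per-polytope program in cvxpy (the paper's implementation uses CVXOPT directly; see Appendix D). It is a simplified stand-in: it solves the cone program for one fixed activation pattern s and exposes the dual variables of the sign constraints that Algorithm 1 inspects.

```python
import cvxpy as cp
import numpy as np

def polytope_step(L, k, u, v, s):
    """Solve the one-layer cone program for a fixed ReLU pattern s (0/1 vector)."""
    n1 = L.shape[1]
    S = np.diag(s)
    Dsgn = np.diag(2 * s - 1)
    eps = cp.Variable()
    d, dp = cp.Variable(n1), cp.Variable(n1)
    ineq_u = Dsgn @ (k + L @ (u + d)) >= 0     # pattern s valid at u + delta
    ineq_v = Dsgn @ (k + L @ (v + dp)) >= 0    # pattern s valid at v + delta'
    prob = cp.Problem(
        cp.Minimize(eps),
        [cp.norm(d, 2) <= eps, cp.norm(dp, 2) <= eps,
         S @ (L @ (u + d)) == S @ (L @ (v + dp)),
         ineq_u, ineq_v])
    prob.solve()
    # Dual variables of the sign constraints drive the polytope-switching rule.
    return eps.value, d.value, dp.value, ineq_u.dual_value, ineq_v.dual_value
```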
Algorithm 1 Descent with midpoint initialization
Input: u, v ∈ R^{n1}, L ∈ R^{n2×n1}, k ∈ R^{n2}
Output: ε ∈ R, δ, δ′ ∈ R^{n1}
1: z ← k + (1/2)L(u + v), ε ← (1/2)‖u − v‖
2: for 0 ≤ j < n2 do
3:   s_j ← 1(z_j > 0)
4: end for
5: repeat
6:   ε_old ← ε
7:   (ε, δ, δ′, z, z′) ← ConeLP(L, k, u, v, s)
8:   s_old ← s
9:   for 0 ≤ j < n2 do
10:    if z_j > 0 and z′_j > 0 then
11:      s_j ← 1 − s_j
12:    end if
13:  end for
14: until s_old = s or ε_old = ε
We need not just the primal solution, but also the values of the dual variables associated with the linear inequality constraints in the solution to the dual problem. These are called z and z′ in Algorithm 1. When both z_j > 0 and z′_j > 0, the input to ReLU j is zero when both f(u + δ) and f(v + δ′) are computed, and the objective function could be decreased further by allowing that input to switch signs. In this case, we move to another polytope based on the new ReLU states and continue the search there.
When we fail to find such pairs of dual variables, either the minimum is in the interior of the current polytope (and thus is supported only by cone constraints), or the minimum is on the boundary of the current polytope but there is no adjacent polytope in which we could continue the search. Thus we end the search.
Algorithm termination, convergence, and complexity: The descent algorithm will terminate in a finite number of iterations and will find a local minimum of the distance function. Since the feasible space is non-convex, any local descent procedure is not guaranteed to find the global minimum. The number of variables in the cone program is proportional to the number of ReLUs in the feature extractor plus the dimension of the example space. The time-complexity of a single iteration of the search is polynomial in the input dimension and number of ReLUs. The feasible region is the union of a finite but exponentially large number of convex polytopes. Due to the requirement that each iteration make progress, it is impossible to ever revisit a polytope. Thus, termination is guaranteed, but no polynomial bound on the number of iterations is available. We suspect that as in the case of the simplex algorithm for linear programming, input data resulting in an extremely large number of iterations may exist, but would have a delicate structure that is unlikely to arise in practice.
3.3 FURTHER COMMON LAYERS
Convolution: It is straightforward to represent a convolutional layer as a linear layer by constructing the appropriate Toeplitz matrix, and thus our method applies directly.
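One simple, if not the most efficient, way to materialize that matrix is to apply the convolution to each standard basis vector of the input. The sketch below does this with PyTorch (an assumption; any framework's convolution primitive would do) and is only an illustrative helper; for large inputs a structured Toeplitz construction would be preferable.

```python
import torch
import torch.nn.functional as F

def conv_as_matrix(weight, input_shape, stride=1, padding=0):
    """Materialize a bias-free conv2d layer as an explicit matrix M such that
    conv(x).flatten() == M @ x.flatten() for inputs of shape (C, H, W)."""
    n_in = int(torch.tensor(input_shape).prod())
    eye = torch.eye(n_in)
    cols = []
    for i in range(n_in):
        basis = eye[i].reshape(1, *input_shape)      # one-hot input image
        out = F.conv2d(basis, weight, stride=stride, padding=padding)
        cols.append(out.flatten())
    return torch.stack(cols, dim=1)                  # (n_out, n_in)
```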
Batch-normalization: Batch norm layers are simply affine functions of their inputs at test-time, and the transformations they induce can be easily included in a linear layer. Our method thus applies.
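For concreteness, a test-time batch norm with running statistics (μ, σ²), scale γ, and shift β can be folded into a preceding affine layer (L, k) as follows; this is a standard identity rather than anything specific to our method.

```python
import numpy as np

def fold_batchnorm(L, k, gamma, beta, mean, var, eps_bn=1e-5):
    """Return (L', k') such that BN(L x + k) == L' x + k' at test time."""
    scale = gamma / np.sqrt(var + eps_bn)    # per-feature multiplier
    L_folded = scale[:, None] * L            # rescale each output row of L
    k_folded = scale * (k - mean) + beta
    return L_folded, k_folded
```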
Max pooling: Max-pool layers, like ReLUs, are piecewise linear functions of their inputs, so constraints coming from the equality of max-pool outputs also lead to feasible regions that are the union of polytopes. This is a simple extension left for future work due to a lack of space.
Other activation functions and architectures: Injective activation functions such as Leaky ReLU, ELU and sigmoid will not lead to additional collisions. Further, since they are not all piecewise linear, a different descent algorithm would be needed. We note that our framework cannot find collisions in networks with both forward and backward flows, such as those with attention.
4 EVALUATION
In this section, we demonstrate how the methodology outlined in the previous sections for determining the robustness of a fixed feature extractor can be used in practice. Our detailed results provide insights on how the overall robustness of a neural network evolves through its different layers and is impacted by the training procedure used.
4.1 FROM COLLISION-FINDING TO LOWER BOUNDS
We find lower bounds on the robust loss over the training set by minimizing a loss function over G_{V,N,f}.
Vertices V from training data: The vertex set V is a representation of the training data. We use 2-class problems derived from MNIST (LeCun & Cortes, 1998) or Fashion MNIST (Xiao et al., 2017) as in previous work (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021).
Neighborhood function N: We use the common ℓ2-norm ball constraint (Madry et al., 2018), in which the adversary’s strength is parametrized by the radius ε of the ball.²
Fixed feature extractors f: We use the composition of the layers of convolutional and fully-connected DNNs as our fixed feature extractors (details in Appendix E). The first layer of any of these networks (before a non-linear activation is applied) behaves as a linear feature extractor. Subsequent layers behave as non-linear feature extractors. These networks are trained using either standard cross-entropy loss minimization or robust training, the latter with either PGD-based adversarial training (Madry et al., 2018) or the TRADES loss (Zhang et al., 2019).
Edge set E from collision finding: Algorithm 2 provides a greedy, iterative method to find collisions between feature representations of pairs of points from different classes. Each successful collision is an edge in the bipartite conflict graph. Each iteration of this algorithm can be cast as a convex program in the form of a linear cone (as detailed in Appendix D). We solve this linear cone program using CVXOPT (Andersen et al., 2013). Since we are working with an `2-norm constrained adversary, we can speed up computation by projecting the inputs onto the space spanned by the right singular vectors of the first linear layer. For linear convolutional layers, we cast the convolution operation as a matrix multiplication to enable the use of the closed form derived in Section 3.1. We also experimented with a modification of the Auto-PGD (Croce & Hein, 2020) algorithm to find collisions at deeper layers with fixed budgets as done in Engstrom et al. (2019). However, this was less effective and more expensive at finding collisions, so all further results use Algorithm 2.
Computing the lower bound from the conflict graph: The conflict graph determines the set of possible output probabilities for each vertex (training data point) for the optimal classifier. Minimizing the 0 − 1 and cross-entropy losses over this graph thus results in a lower bound over these losses since any classifier must incur a larger loss than the optimal classifier, by definition. We use the method from Bhagoji et al. (2021) over the conflict graphs we derive from deeper layer representations. Results in the main body are for the cross-entropy loss (others in Appendix F).
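As an illustration of this step, the sketch below minimizes the expected cross-entropy loss over the polytope defined by a conflict graph using cvxpy. It is a schematic stand-in for the procedure of Bhagoji et al. (2021), with the edge incidence matrix B and the vertex probability masses assumed to be given.

```python
import cvxpy as cp
import numpy as np

def cross_entropy_lower_bound(B, p):
    """Minimize sum_v p_v * (-log q_v) subject to 0 <= q <= 1, B q <= 1."""
    n = len(p)
    q = cp.Variable(n)
    constraints = [q >= 1e-9, q <= 1, B @ q <= 1]   # small floor keeps log finite
    objective = cp.Minimize(cp.sum(cp.multiply(p, -cp.log(q))))
    prob = cp.Problem(objective, constraints)
    prob.solve()
    return prob.value

# Two conflicting points with equal mass: the optimum is q = (1/2, 1/2),
# giving a lower bound of log(2) on the expected cross-entropy loss.
print(cross_entropy_lower_bound(np.array([[1.0, 1.0]]), np.array([0.5, 0.5])))
```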
4.2 RESULTS
Interpreting lower bound values: The lower bound on the input space representations is computed by considering all possible classifiers and is thus the best possible, while bounds for other representations are computed by restricting the space of possible classifiers. Intuitively, the latter will always be larger than or equal to the former. The key metric to understanding the robustness of feature extractors is to check how much larger the corresponding lower bounds are. The magnitude of this difference measures the contribution of that layer to the overall network’s lack of robustness.
Robustness of linear layer representations: Using the procedure outlined in Section 3.1, we find collisions after the first linear layer and construct a conflict graph for several models (Fig. 2).
2We are aware of the critiques of this constraint (Gilmer et al., 2018) and only use it to compare with previous work.
How does robustness evolve across layers? Having looked at how robust linear layer representations are, it is pertinent to consider representations derived from layers deeper in the network. Of particular interest are post-ReLU representations for models with benign (Fig. 3) and robust training (Fig. 4). We find that the first ReLU activation layer contributes significantly to an increase in the lower bound for both benign and robustly trained networks. We observe that post-ReLU representations tend to be sparse, leading to an easier search problem and a larger number of collisions. The second linear layer, on the other hand, does not lead to much additional increase in the loss. This is a property of the particular architecture we use, and a smaller linear layer is likely to lead to larger increases in loss. Finally, the second set of ReLU activations does have a measurable impact on robustness, particularly for the benign network. The impact of layer width on robustness is discussed in Appendix F.1.
How does the parametrization of robust training impact layer-wise robustness? As expected, layers extracted from robustly trained networks are more robust than their benign counterparts for corresponding values of ε. We find a significant difference between PGD-based adversarial training and TRADES in terms of the robustness of their first linear layers (Fig. 2), but this largely disappears by the second ReLU activation layer (Fig. 5). Interestingly, we observe a phenomenon where layers of a network robustly trained using higher values of ε_train can be less robust than those using a lower value, when the value of ε at which evaluation is performed is less than the higher ε_train.
Impact of linear convolutional layers on robustness: As discussed in Section 3.3, the first convolutional layer can be thought of as simply a matrix multiplication, although the size of the resulting matrix can be large. We find that since the effective dimension of the resulting features is much larger than the input dimension for the datasets we consider, the linear convolutional layer does not lead to an increase in the lower bound (Fig. 11 in Appendix).
5 RELATED WORK AND DISCUSSION
5.1 RELATED WORK
Lower bounds on robustness: For cases when the data distribution is specified, Dohmatob (2019) and Mahloujifar et al. (2019) use the ‘blowup’ property to determine bounds on the robust loss, given a level of loss attainable on unperturbed data.
Certification and Verification: Work on certified robustness has considered techniques for training neural networks such that the resulting models are provably robust to perturbations upper bounded by a given budget (Kolter & Wong, 2018; Raghunathan et al., 2018; Cohen et al., 2019; Li et al., 2020). Typically, these models can only be certified to be robust to small budgets. In contrast, our work provides lower bounds on the robustness of partial models which are applicable across a large range of budgets. Approaches to verifying the robustness of neural networks (Bunel et al., 2017; Tjeng et al., 2019; Gowal et al., 2018) are closely related to our work, but differ in that they consider fixed end-to-end networks while we focus on a layer-by-layer analysis, allowing us to argue about the robustness of classifiers trained on top of given feature extractors.
5.2 DISCUSSION
Implications for training robust models: Our results indicate that layers in the network that reduce the effective dimension of their incoming inputs have the largest negative impact on robustness (see further results in Appendix F.1). Two prominent examples of this are ReLU layers, which reduce effective dimension by only considering non-negative outputs, and fully connected layers with fewer outputs than inputs. On the other hand, linear convolutional layers do not have a negative impact on robustness. This indicates that keeping the effective dimension of deeper features at least as large as that of the input data is likely to benefit robustness. Further, our results confirm that the use of larger values of ε_train does not necessarily translate to higher robustness at lower budgets. This points towards the need for algorithms that can effectively train a network to be robust to more than a single adversary at a time. Finally, we find a qualitative difference in the layers learned using PGD-training and TRADES, implying interesting learning dynamics with different robust losses.
Extending to state-of-the-art models and datasets: All of our experiments in this paper are on simple models and datasets which demonstrate the feasibility and use of our method. However, state-of-the-art feature extractors for datasets such as ImageNet are far deeper than those considered in this paper. Thus, our algorithm would need to be made considerably faster to handle these cases. While our framework can handle skip connections, networks with attention are beyond its scope. Nevertheless, the feature extractors we consider are robust for the tasks we evaluate them on, making our results and conclusions representative.
A PROOFS
Proof of Lemma 1. Suppose that e = ((u, 1), (v,−1)) ∈ E . Then, there are some z̃ ∈ Z , ũ ∈ N(u), and ṽ ∈ N(v) such that f(ũ) = f(ṽ) = z̃. We have qu ≤ h(f(ũ))1 = h(z̃)1, qv ≤ h(f(ṽ))−1 = h(z̃)−1, and h(z̃)1 + h(z̃)−1 = 1. Combining these gives the constraint (Bq)e ≤ 1, which appears in (1).
Now, we will show that each vector q in the polytope is achievable by some h. The construction is simple: at each point in the feature space, assign the largest possible probability to class 1: let h(z̃)1 = supw:z̃∈f(N(w)) qw and h(z̃)−1 = 1 − h(z̃)1. This achieves the desired performance for examples from class 1:
inf_{ũ∈N(u)} h(f(ũ))_1 = inf_{ũ∈N(u)} sup_{w : f(ũ)∈f(N(w))} q_w ≥ inf_{ũ∈N(u)} q_u = q_u.
For an example v in class −1 we have
inf_{ṽ∈N(v)} h(f(ṽ))_{−1} = inf_{ṽ∈N(v)} ( 1 − sup_{w : f(ṽ)∈f(N(w))} q_w )
  = inf_{ṽ∈N(v)} inf_{w : f(ṽ)∈f(N(w))} (1 − q_w)
  = inf_{w : ((w,1),(v,−1))∈E} (1 − q_w)
  ≥ q_v.
The final inequality uses the fact that q satisfies Bq ≤ 1.
B INDUCED DISTANCE IS NOT A DISTANCE ON Z
The fact that ε∗_f is defined on X is essential. It is not possible to interpret the induced distance as a distance in the feature space Z. The reason is that f(Nε(x)), the set of features of points near x, cannot be derived from f(N_0(x)), the set of features of the uncorrupted version of x: distinct choices of x may lead to the same features f(N_0(x)), but f may vary more in the neighborhood of one choice of x than the other.
An example is helpful to illustrate this point. Let X = X̃ = R², let Z = R, let Nε(x) be the closed ℓ2 ball of radius ε around x, and let f(x_0, x_1) = arctan(x_1/x_0) for x ≠ 0 (the value that we pick for f(0, x_1) is irrelevant). In other words, f finds the angle between the horizontal axis and the line containing x. The range of values that the adversary can cause f(x̃) to take depends on ‖x‖. If ‖x‖_2 < ε, f(x̃) can be any angle from 0 to π, and if ‖x‖_2 ≥ ε, |f(x̃) − f(x)| ≤ arcsin(ε/‖x‖_2). This example also illustrates that ε_f is not necessarily a metric, even in cases where d is a metric. Given u, v ∈ R², we have ε_f(u, αu) = 0, ε_f(v, αv) = 0, and ε_f(αu, αv) can be made arbitrarily small by selecting α to be small. Despite this, ε_f(u, v) will be on the order of min(‖u‖, ‖v‖). An alternative approach to studying the induced distance is to define an adversarial classification problem on Z by taking N^Z_ε(z) = f(Nε(f⁻¹({z}))). Note that this construction only makes sense when X = X̃ and thus the domain of f is X. This is more conservative: for any feature point z it considers the worst-case x with feature z: the x in whose neighborhood f varies the most.
C DESCENT ALGORITHM FOR MULTIPLE LAYER NETWORKS
We start by repeating the notation used in Section 3.2: f = f^(ℓ) ◦ · · · ◦ f^(1), where the layer-i function is f^(i) : R^{n_i} → R^{n_{i+1}}, f^(i)(z) = (k^(i) + L^(i)z)_+. Then ε_f(u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f^(ℓ) ◦ · · · ◦ f^(1))(u + δ) = (f^(ℓ) ◦ · · · ◦ f^(1))(v + δ′). Using the local linear approximation, the equality constraint becomes
diag(s^(ℓ))(k^(ℓ) + L^(ℓ)(· · · diag(s^(1))(k^(1) + L^(1)(u + δ)) · · ·))
= diag(s′^(ℓ))(k^(ℓ) + L^(ℓ)(· · · diag(s′^(1))(k^(1) + L^(1)(v + δ′)) · · ·)),
and for each 1 ≤ i ≤ ℓ, the inequalities
(2 diag(s^(i)) − I)(k^(i) + L^(i)(· · · diag(s^(1))(k^(1) + L^(1)(u + δ)) · · ·)) ≥ 0,
(2 diag(s′^(i)) − I)(k^(i) + L^(i)(· · · diag(s′^(1))(k^(1) + L^(1)(v + δ′)) · · ·)) ≥ 0
must hold in order for the linear approximation to be valid. Any collision must be in a polytope with s^(ℓ) = s′^(ℓ), which is why we only needed one set of s variables in the single-layer case. However, ReLU activation variables for earlier layers cannot be merged.
The main modification to the algorithm is to the process of changing the s variables after each iteration. The variables for all but the final layer are allowed to change independently, while the variables in the final layer are changed together following the same rule as in the single-layer algorithm.
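To make the initialization of the multi-layer search concrete, the sketch below computes the per-layer activation patterns at the feature-space collision induced by the midpoint of the two examples; it is a small illustrative helper rather than the paper's code.

```python
import numpy as np

def midpoint_activation_patterns(Ls, ks, u, v):
    """Per-layer ReLU patterns s^(1), ..., s^(l) at the midpoint (u + v) / 2."""
    z = 0.5 * (u + v)
    patterns = []
    for L, k in zip(Ls, ks):
        pre = k + L @ z                   # pre-activation at this layer
        s = (pre > 0).astype(int)
        patterns.append(s)
        z = np.maximum(pre, 0.0)          # ReLU output feeds the next layer
    return patterns
```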
Algorithm 2 Descent with midpoint initialization
Input: u, v ∈ R^{n1}, L ∈ R^{n2×n1}, k ∈ R^{n2}
Output: ε ∈ R, δ, δ′ ∈ R^{n1}
1: z ← k + (1/2)L(u + v), ε ← (1/2)‖u − v‖
2: for 0 ≤ j < n2 do
3:   s_j ← 1(z_j > 0)
4: end for
5: repeat
6:   ε_old ← ε
7:   (ε, δ, δ′, z, z′) ← ConeLP(L^(1) . . . L^(ℓ), k^(1) . . . k^(ℓ), u, v, s^(1) . . . s^(ℓ), s′^(1) . . . s′^(ℓ−1))
8:   s_old ← s
9:   for 1 ≤ i ≤ ℓ − 1 do
10:    for 0 ≤ j < n_{i+1} do
11:      if z^(i)_j > 0 then
12:        s^(i)_j ← 1 − s^(i)_j
13:      end if
14:      if z′^(i)_j > 0 then
15:        s′^(i)_j ← 1 − s′^(i)_j
16:      end if
17:    end for
18:  end for
19:  for 0 ≤ j < n_{ℓ+1} do
20:    if z^(ℓ)_j > 0 and z′^(ℓ)_j > 0 then
21:      s^(ℓ)_j ← 1 − s^(ℓ)_j
22:    end if
23:  end for
24: until s_old = s or ε_old = ε
D CONVERTING COLLISION FINDING TO A LINEAR CONE PROGRAM
In this section, we demonstrate how the collision finding problem after one linear and one ReLU layer can be cast as a linear cone program.
We have a network with first layer v ↦ Lv + k, where L ∈ R^{n1×n0} and k ∈ R^{n1}. Given a pair of points (v′, v′′) ∈ R^{2n0}, we would like to search over the space of (δ′, δ′′) ∈ R^{2n0} such that (L(v′ + δ′) + k)_+ = (L(v′′ + δ′′) + k)_+. This space is always nonempty because we can take v′ + δ′ = v′′ + δ′′ = (v′ + v′′)/2.
Let S ⊆ [n1] be the subset of active ReLUs. Let F ∈ R^{|S|×n1} be the inclusion indicator matrix for the subset: F_{i,j} = 1 if and only if j ∈ S and |S ∩ [j]| = i (there are exactly i elements of S strictly smaller than j, so j is the (i + 1)st element of S). Let D be the diagonal matrix with D_{j,j} = 1 for j ∈ S and D_{j,j} = −1 for j ∉ S.
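The following small numpy helper (an illustration, not the paper's code) constructs F and D from the active set S exactly as described above.

```python
import numpy as np

def selection_matrices(active, n1):
    """F selects the rows indexed by the sorted active set; D has +1 on active
    coordinates and -1 elsewhere."""
    active = sorted(active)
    F = np.zeros((len(active), n1))
    for i, j in enumerate(active):
        F[i, j] = 1.0
    d = -np.ones(n1)
    d[active] = 1.0
    return F, np.diag(d)

F, D = selection_matrices({0, 2}, 4)
```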
For j ∈ S (the active ReLUs), we need the constraint (L(v′ + δ′) + k)_j = (L(v′′ + δ′′) + k)_j. In matrix form, this is FL(v′ + δ′) = FL(v′′ + δ′′).
For j ∈ S, we need (L(v′ + δ′) + k)_j ≥ 0 and (L(v′′ + δ′′) + k)_j ≥ 0, and for j ∉ S, we need (L(v′ + δ′) + k)_j ≤ 0 and (L(v′′ + δ′′) + k)_j ≤ 0. In matrix form, these are D(L(v′ + δ′) + k) ≥ 0 and D(L(v′′ + δ′′) + k) ≥ 0. Our objective is max(‖δ′‖_2, ‖δ′′‖_2). We replace it with a linear objective by adding two additional variables (t′, t′′) satisfying t′ ≥ ‖δ′‖_2 and t′′ ≥ ‖δ′′‖_2. The CVXOPT cone program is
minimize c^T x
subject to Gx + s = h
           Ax = b
           s ⪰ 0
The cones involved are a nonnegative orthant of dimension 2n1 and two second order cones, each of dimension n0 + 1. Thus s = (D(L(v′ + δ′) + k), D(L(v′′ + δ′′) + k), t′, δ′, t′′, δ′′). We then let x = (t, δ′, δ′′). The matrix relating s and x is G ∈ R(n1+n1+1+n0+1+n0)×(1+n0+n0) and s = h−Gx. The block structure is
G = [  0   −DL    0
       0    0   −DL
      −1    0     0
       0   −I     0
      −1    0     0
       0    0    −I ],        h = [ D(Lv′ + k);  D(Lv′′ + k);  0;  0;  0;  0 ].

We use Ax = b to encode FL(δ′ − δ′′) = −FL(v′ − v′′), so A ∈ R^{|S|×(1+n0+n0)}, b ∈ R^{|S|}, and
A = (0 FL −FL) b = −FL(v′ − v′′).
Finally,
c = ( 1 0 0 ) .
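To show how these pieces fit together, the sketch below assembles the matrices and calls CVXOPT's cone LP solver. It is a schematic reconstruction under the assumptions stated in this appendix (single layer, ℓ2 ball, fixed active set S), not the authors' exact implementation.

```python
import numpy as np
from cvxopt import matrix, solvers

def collision_conelp(L, k, v1, v2, active):
    """Solve the fixed-pattern collision cone LP for one layer f(x) = (Lx + k)_+."""
    n1, n0 = L.shape
    F = np.eye(n1)[sorted(active)]                        # row-selection matrix
    d = np.where(np.isin(np.arange(n1), sorted(active)), 1.0, -1.0)
    D = np.diag(d)

    # Variable ordering x = (t, delta', delta'').
    G = np.block([
        [np.zeros((n1, 1)), -D @ L,             np.zeros((n1, n0))],
        [np.zeros((n1, 1)), np.zeros((n1, n0)), -D @ L            ],
        [-np.ones((1, 1)),  np.zeros((1, n0)),  np.zeros((1, n0)) ],
        [np.zeros((n0, 1)), -np.eye(n0),        np.zeros((n0, n0))],
        [-np.ones((1, 1)),  np.zeros((1, n0)),  np.zeros((1, n0)) ],
        [np.zeros((n0, 1)), np.zeros((n0, n0)), -np.eye(n0)       ],
    ])
    h = np.concatenate([D @ (L @ v1 + k), D @ (L @ v2 + k),
                        np.zeros(1 + n0 + 1 + n0)])
    A = np.hstack([np.zeros((len(active), 1)), F @ L, -(F @ L)])
    b = -(F @ L) @ (v1 - v2)
    c = np.zeros(1 + 2 * n0); c[0] = 1.0

    # First 2*n1 rows are the nonnegative orthant; then two second-order cones.
    dims = {"l": 2 * n1, "q": [n0 + 1, n0 + 1], "s": []}
    sol = solvers.conelp(matrix(c), matrix(G), matrix(h), dims,
                         matrix(A), matrix(b))
    x = np.array(sol["x"]).flatten()
    return x[0], x[1:1 + n0], x[1 + n0:]                  # eps, delta', delta''
```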
D.1 FACTORIZED L
If n1 < n0, then we can take advantage of the fact that the rank of L is at most n1. Let L = UΣV T .
Replace L(v + δ) with UΣ(V^T v + η), where the reduced variable η plays the role of V^T δ, and replace n0 with the number of singular values retained in Σ.
New G and A:
G = [  0   −DUΣ    0
       0     0   −DUΣ
      −1     0     0
       0    −I     0
      −1     0     0
       0     0    −I ],        A = ( 0   FUΣ   −FUΣ ).

Also, the length of c is reduced. The constant vectors h and b are unchanged.
E FURTHER EXPERIMENTAL DETAILS
We consider the following two architectures for our experiments:
1. 3-layer fully connected neural network (FCNN): 300 FC - ReLU - 200 FC - ReLU - 2 FC
2. 4-layer convolutional neural network (CNN): 20×5×5 conv. - BN - ReLU - 2×2 MaxPool - 50×5×5 conv. - BN - ReLU - 2×2 MaxPool - 500 FC - 2 FC
We also construct a ‘Small’ and ‘Smaller’ version of the FCNN with layers that are 2× and 4× narrower respectively.
F ADDITIONAL RESULTS
F.1 IMPACT OF REDUCTION OF NETWORK SIZE
In Figure 6, we compare the robustness of representations obtained from fully connected networks with decreasing layer sizes. The ‘Regular’ network is the one used throughout, while the ‘Small’ and ‘Smaller’ networks have corresponding layers that are 2× and 4× narrower respectively. We can clearly see that as the width of the feature extractor decreases, so does its robustness.
F.2 0− 1 LOSS RESULTS
In Figures 7, 8, 9 and 10 we provide lower bounds on the 0 − 1 loss in the same settings as those considered in the main body of the paper. We note that the results and conclusions remain the same qualitatively.
F.3 LINEAR CONVOLUTIONAL LAYERS
In Figure 11, we can see that the representations extracted from the first linear layer of a convolutional network do not have any negative impact on the robustness of the overall model. | 1. What is the focus of the paper regarding adversarial loss?
2. What are the strengths and weaknesses of the proposed algorithm for computing the lower bound?
3. What is the reviewer's concern regarding the motivation for calculating the lower bound on the adversarial loss?
4. How does the reviewer assess the theoretical contribution of the paper, particularly in comparison to related works?
5. What are the minor comments or suggestions for improvement in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper investigates lower bounds on the adversarial loss attainable with a given, fixed feature extractor. The authors develop an algorithm to compute such a lower bound for feature extractors built from feedforward networks with ReLU activations. The experiments demonstrate the use of the proposed algorithm.
Review
Strength:
The authors develop a reasonable algorithm for constructing the conflict graph for a specific fixed feature extractor.
Weakness:
The motivation for calculating the lower bound on the adversarial loss for a fixed feature extractor is somewhat unclear.
The paper lacks analysis justifying the proposed algorithm.
My current decision is acceptance. However, the motivation for the fixed feature extractor setting is unclear to me. I'd like the authors to clarify the motivation for calculating the lower bound on the adversarial loss with a fixed feature extractor.
The motivations in the related works, including Pydi & Jog (2020) and Bhagoji et al. (2019; 2021), are understandable. Lower bounds on the Bayes-optimal adversarial loss are useful for evaluating the performance of robust learning algorithms and for confirming the fundamental limits of robustness. The lower bound investigated in this paper is no longer a fundamental limit. Because there is no necessity to employ a pre-trained feature extractor, I cannot imagine a situation where the use of a given feature extractor is enforced. So, I am wondering what such a lower bound is useful for evaluating. In the experiments, the authors evaluate the lower bounds with the first one or two layers fixed as a use case of the proposed method; however, it is unclear how the evaluated adversarial loss should be used.
The theoretical result, i.e., Lemma 1, is a straightforward extension of the result of Bhagoji et al. (2021). The theoretical contribution of this paper seems incremental.
The main contribution of this paper is the algorithm development shown in Section 3. The authors provide a reasonable construction of the algorithm for certifying the existence of a specific edge in the conflict graph. However, it is non-trivial that the presented algorithm converges to the desired solution. It is necessary to clarify this point.
Minor comments:
In the second paragraph on page 3, the right bracket is missing in the adversarial loss. |
ICLR | Title
Lower Bounds on the Robustness of Fixed Feature Extractors to Test-time Adversaries
Abstract
Understanding the robustness of machine learning models to adversarial examples generated by test-time adversaries is a problem of great interest. Recent theoretical work has derived lower bounds on how robust any model can be, when a data distribution and attacker constraints are specified. However, these bounds only apply to arbitrary classification functions and do not account for specific architectures and models used in practice, such as neural networks. In this paper, we develop a methodology to analyze the robustness of fixed feature extractors, which in turn provides bounds on the robustness of any classifier trained on top of it. The tightness of these bounds relies on the effectiveness of the method used to find collisions between pairs of perturbed examples at deeper layers. For linear feature extractors, we provide closed-form expressions for collision finding while for arbitrary feature extractors, we propose a bespoke algorithm based on the iterative solution of a convex program that provably finds collisions. We utilize our bounds to identify the layers of robustly trained models that contribute the most to a lack of robustness, as well as compare the same layer across different training methods to provide a quantitative comparison of their relative robustness.
1 INTRODUCTION
The robustness of machine learning models to test-time (evasion) attacks, particularly via adversarial examples, has been studied extensively from an empirical perspective (Croce et al., 2020; Papernot et al., 2016; Li et al., 2020; Biggio & Roli, 2017). A plethora of attacks (Szegedy et al., 2013; Carlini & Wagner, 2017; Bhagoji et al., 2018; Croce & Hein, 2020) and defenses (Madry et al., 2018; Zhang et al., 2019) has been proposed. The resulting arms race between proposed attacks and defenses, has led recent theoretical work (Dohmatob, 2019; Cullina et al., 2018; Mahloujifar et al., 2019; Bhagoji et al., 2019; 2021; Montasser et al., 2019) to focus on understanding the fundamental bounds of learning in the presence of adversarially perturbed inputs, in terms of both the existence of robust classifiers and the sample complexity required to learn them.
A line of research on this front has investigated lower bounds on the robustness attainable by any classifier when classifying data from distributions modified by adversarial perturbations (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021). In particular, information-theoretic lower bounds on the minimum possible robust 0 − 1 and cross-entropy losses have been derived for arbitrary discrete distributions and very general attack models. However, the bounds from these papers are only applicable when considering the set of possible classifiers to be all measurable functions, which does not translate to practice. The classifier architecture and even some portion of the parameters are fixed (such as in transfer learning) in practical approaches to training machine learning models.
In this paper, we leverage this line of work to find lower bounds on the robustness of commonly used, fixed feature extractors. This provides a principled method to compare the robustness of feature extractors obtained by different training methods to arbitrary test-time adversaries. The key question we answer is:
What is the minimum robust loss incurred by any classifier trained on top of a fixed feature extractor?
Two challenges for training robust machine learning models motivate this question. First, since transfer learning is now common practice, it is incumbent upon the model trainer to choose a robust pre-trained feature extractor. Our work offers a principled and quantitative method to choose among
feature extractors obtained from different layers and training methods. Second, we are able to shed light on the impact of common architectural choices, such as activation functions and dimensionality reduction on robustness. This arises from our study of how the lower bound evolves from layer to layer within deep neural networks.
Importance of lower bounds on robustness: To determine classifier-agnostic lower bounds on robustness, earlier work focused on the interaction between points from different classes in the input space when perturbed through the construction of a conflict graph. Minimizing an appropriately defined loss function over this graph determines an information-theoretic lower bound on the loss. Intuitively, the denser the graph is, the higher the lower bound. These classifier-agnostic bounds are able to determine the plausibility of robust classification for a given adversary and dataset, and highlight practically relevant perturbation budget.
1.1 CONTRIBUTIONS
Lower bounds on robustness for fixed feature extractors: We significantly extend prior work by deriving lower bounds that depend on the architecture and weights of a fixed feature extractors. We construct a distance function over the input space that depends on the feature extractor in use. Our results have implications for both analyzing the robustness of already trained classifiers as well as determining feature extractors to use for applications such as transfer learning.
Bespoke algorithms for collision finding: For linear feature extractors such as the first layer of a neural network, we determine exact, closed form expressions to find collisions. Collision finding for non-linear feature extractors is a non-convex optimization problem, for which we propose a custom algorithm. We approach the collision-finding problem with a descent algorithm that solves a sequence of convex optimization problems over polytopes. This algorithm has the structural advantage of maintaining feasibility (for some adversarial budget constraint) at each step. Despite the fact that it does not find a global minimum, the solutions found are useful for obtaining not just approximations but actual lower bounds on optimal adversarial classification loss.
Empirical findings: We utilize our method to find numerical lower bounds on the robustness of fixed feature extractors trained on two different datasets with both fully-connected and convolutional architectures. Our results indicate that the use of dimensionality-reducing linear layers as well as the effective compression induced by ReLU activations lead to significantly less robust networks. Further, we confirm the oft-cited observation that mismatches between the adversary’s budget at train and test time impacts robustness negatively. Taken together, these findings point towards future design considerations for robust models.
2 DERIVING LOWER BOUNDS FOR FIXED FEATURE EXTRACTORS
In this section, we develop a method for evaluating the robustness of a feature extractor for classification in the presence of a test-time adversary. We characterize the optimal adversarial loss achievable by any classifier that uses the fixed feature extractor as its initial layer. This characterization is based on a conflict graph derived from a discrete data distribution and the constraints of the adversary.
Examples and labels: We consider a supervised classification problem with a test-time adversary. We have an example space X and a label space Y = {−1, 1}. Labeled examples are sampled from a joint probability distribution P over X × Y . Test-time adversary: The test-time adversary modifies a natural example subject to some constraints to generate an adversarial example that will be classified Goodfellow et al. (2015); Szegedy et al. (2013); Carlini & Wagner (2017). Formally, the adversary samples a labeled example (x, y) from P and selects x̃ ∈ N(x), where N : X → 2X̃ is the neighborhood function encoding the constraints on the adversary and X̃ is the space of adversarial examples.1 For all x, N(x) must be nonempty. This definition encompasses the `p family of constraints widely used in previous work.
1In most (but not all) settings that have previously been studied, X̃ = X . We believe that making the distinction helps to clarify some of our definitions: their applicability in this general context affects what properties they can be expected to have.
Measuring adversarial loss: We consider ‘soft’ classification functions (or classifiers) that map examples to probability distributions over the classes. These are h : X → ∆Y , where ∆Y = {p ∈ RY : p ≥ 0, ∑ y∈Y py = 1}. We measure classification performance with a loss function ` : ∆Y × Y → R, so the expected performance of a classifier h is E[supx̃∈N(x) `(h(x̃), y)], where (x, y) ∼ P . In the two class setting, h(x̃)−1 = 1−h(x̃)1, so any loss function that treats the classes symmetrically can be expressed as `(p, y) = `′(py). Additionally, `′ should be decreasing. Together, these allow the optimization over adversarial examples to be moved inside the loss function, giving E [ ` ( inf x̃∈N(x) h(x̃)y, y )] . Thus, the adversarial optimization can be analyzed by itself.
Definition 1. For a soft classifier h, the correct-classification probability qv that it achieves on an example v = (x, y) in the presence of an adversary is qv = inf x̃∈N(x) h(x̃)y . The space of achievable correct classification probabilities is
PV,N,H = ⋃ h∈H { q ∈ RV : ∀(x, y) ∈ V, 0 ≤ q(x,y) ≤ inf x̃∈N(x) h(x̃)y } .
Bhagoji et al. (2021) characterized PV,N,H in the case that H is the class of measurable functions X → ∆Y . For a data distribution P that is discrete with finite support V ⊆ X × Y , this allows the minimum adversarial loss achievable to be expressed as an optimization over PV,N,H:
inf h∈H
E(x,y)∼P [
sup x̃∈N(x)
`(h(x̃), y) ]
= inf q∈PV,N,H ∑ v∈V P ({v})`′(qv).
We extend their approach to analyze feature extractors as follows. Given a feature space Z and a fixed, measurable feature extractor f : X̃ → Z , define Hf = {h ∈ H : h = g ◦ f}: an element of Hf is some measureable g : Z → ∆Y composed with f . Our aim is to characterize PV,N,Hf so that we can optimize loss functions over it to evaluate the suitability of f .
Conflict graph: In Bhagoji et al. (2021), the notion of a conflict graph was used to record neighborhood intersections for pairs of points from different classes. When such an intersection exists, it is impossible for any classifier to correctly classify both of those points in the adversarial setting. We extend this notion to apply to the family of classifiers using a fixed feature extractor f . In our setting, a conflict exists between a pair of points when each of them has a neighbor that is mapped to the same point in the feature space.
We call the conflict graph GV,N,Hf , where V ⊆ X × Y . When we apply GV,N,f in order to understand classification of labeled examples with distribution P , we take V to be the support of P . The graph is bipartite: the vertex set is partitioned into parts Vc = V ∩ (X × {c}). The edge set is
EV,N,f = {((u, 1), (v,−1)) ∈ V1 × V−1 : ∃ũ ∈ N(u), ṽ ∈ N(v) such that f(ũ) = f(ṽ)}.
Lemma 1 (Feasible output probabilities). The set of correct classification probability vectors for support points V , adversarial constraint N , and hypothesis classHf is
PV,N,Hf = {q ∈ RV : q ≥ 0, q ≤ 1, Bq ≤ 1} (1)
where B ∈ RE×V is the edge incidence matrix of the conflict graph GV,N,f .
The proof is given in Appendix A and follows the structure of the proof in Bhagoji et al. (2021).
Approximating GV,N,f and PV,N,Hf : If instead of knowing the true conflict graph GV,N,f , we have some subgraph, then we can find a polytope that is a superset of the true PV,N,Hf . If we minimize some expected loss over this proxy polytope, we obtain a lower bound on the optimal loss over Hf . Because subgraphs of the conflict graph lead to valid lower bounds on optimal classification performance, we can use this method to evaluate the quality of a feature extractor f even if exact computation of the conflict graph is computationally intractable.
2.1 DISTANCE INTERPRETATION
The construction of a conflict graph and characterization of the feasible correct classification probabilities from the previous section apply to any feature extractor and neighborhood constraint. In
the most commonly studied settings, the neighborhood constraint arises from a distance function: Nε(x) = {x̃ ∈ X̃ : d(x, x̃) ≤ ε}. This parameter ε is the adversarial budget constraint. For any two examples u and v, a natural quantity to consider is the smallest adversarial budget that would cause the edge (u, v) be appear in the conflict graph:
ε∗(u, v) = inf{ε ≥ 0 : Nε(u) ∩Nε(v) 6= ∅} = inf x̃∈X̃ max(d(u, x̃), d(v, x̃)).
We will call this the distance on X induced by d. When X = X̃ = Rd and d(x, x′) = ‖x−x′‖p, the minimal adversarial budget is simply 12‖u − v‖p. Because the relationship to the distance used in the adversarial budget constraint is so simple, this quantity is not usually discussed independently. This definition generalizes easily to the setting of classification with a particular feature extractor, but the resulting quantity is much more interesting.
Definition 2. The distance induced on X by d and f is the minimum adversarial budget required to create a conflict between u and v after a feature extractor f :
ε∗f (u, v) = inf{ε ≥ 0 : f(Nε(u))∩f(Nε(v)) 6= ∅} = inf ũ,ṽ∈X̃ :f(ũ)=f(ṽ) max(d(u, ũ), d(v, ṽ)). (2)
This reduces to ε∗(u, v) when f is the identity function, or more generally any injective function. Observe that any choice of ũ and ṽ in (2) provide an upper bound on ε∗f (u, v), which is useful for finding a subgraph of GV,N,f and lower bounds on optimal classification performance inHf . Figure 1 illustrates the computation of ε∗f for a simple f that is related to the ReLU function.
Induced distance is not a distance on the feature space: The fact ε∗f is defined onX is essential: it is not possible to interpret it as a distance on Z . For a full explanation of this point, see Appendix B.
3 DISTANCE COMPUTATIONS FOR PRACTICAL FEATURE EXTRACTORS
Throughout the remainder of the paper, we will work in a more concrete setting. We assume that our example space is a real vector space and that adversarial examples are from the same space: X = X̃ = Rn1 . Let B ⊆ X be a nonempty, closed, convex, origin-symmetric set. These conditions imply the zero vector in contained in B. We take the neighborhood of any point to be a scaled, shifted version of this ball: Nε(x) : x + εB. Neighborhood constraints derived from `p norms fit into this class: we encode them by defining B = {δ ∈ Rn1 : ‖δ‖p ≤ 1}, which results in Nε(x) = {x̃ ∈ Rn1 : ‖x̃− x‖p ≤ ε}. We will focus on `2-based neighborhoods, but our algorithms could be adapted to any choice of B over which efficient optimization is possible.
3.1 LINEAR FEATURE EXTRACTORS
Suppose that our feature extractor is an affine linear function of the input example: f(x) = Lx+ k for some matrix L ∈ Rn2×n1 and vector k ∈ Rn2 . Then the distance εf (u, v) becomes
inf{ε ≥ 0 : {k + L(u+ δ) : δ ∈ εB} ∩ {k + L(u+ δ′) : δ′ ∈ εB} 6= ∅}.
Because {(ε, δ) ∈ R1+n1 : δ ∈ εB} is closed convex cone, εf (u, v) can be expressed as the following convex optimization problem:
inf ε subject to δ ∈ εB, δ′ ∈ εB, L(δ − δ′) = L(v − u). Because B is closed and the linear subspace is always nonempty, the infimum is achieved. Also, it is sufficient to consider (δ, δ′) satisfying δ′ = −δ: for any feasible (ε, δ, δ′), the point (ε, (δ − δ′)/2, (δ′ − δ)/2) is also feasible, has the same value, and satisfies the additional constraint. The feasibility of the symmetrized point uses the origin-symmetry of B. The simplified program is
min ε subject to δ ∈ εB, Lδ = L(v − u)/2. Thus the optimal adversarial strategy for creating conflict between u and v is intuitive: produce examples with the same features as the midpoint (u+ v)/2.
If B is the unit `2 ball, there are further simplifications. Consider the singular value decomposition L = UΣV T where we do not include zero singular values in Σ. Then the linear map given by UΣ is injective and can be canceled from the linear constraint on δ. The resulting program is
min ε subject to ‖δ‖2 ≤ ε, V T δ = 1
2 V T (v − u),
the optimal choice of δ is δ = 12V V T (v − u), and εf (u, v) = 12‖V T (v − u)‖2. Observe that this is the norm of a vector in Rn2 , i.e. the feature space, contrasting with our discussion in Section B. However, this is essentially the only case εf (u, v) simplifies into a feature space distance.
3.2 FULLY CONNECTED NEURAL NETWORKS WITH RELU ACTIVATIONS
For this feature extractor architecture, we have f = f (`) ◦ . . . ◦ f (1) where the layer i function is f (i) : Rni → Rni+1 , f (i)(z) = (k(i) + L(i)z)+. Here z+ represents the component-wise positive part of the vector z. Then εf (u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f (`) ◦ . . . ◦ f (1))(u+ δ) = (f (`) ◦ . . . ◦ f (1))(v + δ′). As in the linear case, the minimum exists because the cones are closed and the equality constraint is feasible. In contrast with the linear case, the equality constraint is nonconvex.
The local linear approximation to f (i)(z) around a point z′ is diag(s(i,z ′))(k(i)+L(i)z), where s(i,z ′) is a zero-one vector that depends on the sign pattern of k(i) +L(i)z′: s(i,z ′)
j = 1((k (i) +L(i)z′)j >
0). In other words, s(i,z ′) is the ReLU activation pattern at layer i when z′ is the input to that layer.
Using these linear approximations, the feasible set of (δ, δ′) ∈ Rn1+n1 satisfying the constraint f(u + δ) = f(v + δ′) can be decomposed as a union of polytopes: each activation pattern defines a linear subspace and there are some linear inequalities specifying the region where that activation pattern actually occurs. For a one-layer network, the linear piece for pattern s is
f(x) = diag(s)(k + Lz) for diag(2s− 1)(k + Lz) ≥ 0. Thus one of the polytopes composing the feasible set is (δ, δ′) ∈ Rn1+n1 satisfying
diag(s)(k + L(u+ δ)) = diag(s′)(k + L(v + δ′)),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s′)− I)(k + L(v + δ′)) ≥ 0.
In the one-layer case, the whole feasible region is covered by polytopes where s = s′. Observe that the dimension of the subspace satisfying the linear equality varies with s. When f contains multiple layers, each polytope in the feasible set is defined by a linear equality constraint involving feature vectors together with a linear inequality for the sign of each ReLU input.
We optimize over this feasible set with a descent algorithm: Algorithm 2 details the version for single-layer networks. The method generalizes to multiple layers and is used to obtain our results in Section 4.2). A full description of the general version is in Appendix C. We initialize our search in a polytope that we know to be nonempty because it contains the feature space collision induced by the midpoint of the two examples. Within each polytope, we can minimize the objective exactly by solving a linear cone program. We examine the dual variables associated with the linear inequality constraints to determine whether an adjacent polytope exists in which we can continue the search.
In the one layer case, the cone program that we solve, which depends on (L, k, u, v, s), is
min ε subject to (ε, δ, δ′) ∈ R1+n1+n1 , δ ∈ εB, δ′ ∈ εB, diag(s)L(u+ δ) = diag(s)L(v + δ′),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s)− I)(k + L(v + δ′)) ≥ 0.
Algorithm 1 Descent with midpoint initialization Input: u, v ∈ Rn1 , L ∈ Rn2×n1 , k ∈ Rn2 Output: ε ∈ R, δ, δ′ ∈ Rn1
1: z ← k + 12L(u+ v), ε← 1 2‖u− v‖ 2: for 0 ≤ j < n2 do 3: sj ← 1(zj > 0), 4: end for 5: repeat 6: εold ← ε 7: (ε, δ, δ′, z, z′)← ConeLP(L, k, u, v, s) 8: sold ← s 9: for 0 ≤ j < n2 do
10: if zj > 0 and z′j > 0 then 11: sj ← 1− sj 12: end if 13: end for 14: until sold = s or εold = ε
We need not just the values of the primal solution, but the values of dual variables associated with the linear inequalities in the solution to the dual problem. These are called z and z′ in Algorithm 2. When both zj > 0 and z′j > 0, the input to ReLU j is zero when both f(u+δ) and f(v+δ′) are computed, and the objective function could be decreased further by allowing the input to switch signs. In this case, we move to another polytope based on the new ReLU states and search there.
When we fail to find such pairs of dual variables, either the minimum is in the interior of the current polytope (and thus is supported only by cone constraints), or the minimum is on the boundary of the current polytope but there is no adjacent polytope in which we could continue the search. Thus we end the search.
Algorithm termination, convergence, and complexity: The descent algorithm will terminate in a finite number of iterations and will find a local minimum of the distance function. Since the feasible space is non-convex, any local descent procedure is not guaranteed to find the global minimum. The number of variables in the cone program is proportional to the number of ReLUs in the feature extractor plus the dimension of the example space. The time-complexity of a single iteration of the search is polynomial in the input dimension and number of ReLUs. The feasible region is the union of a finite but exponentially large number of convex polytopes. Due to the requirement that each iteration make progress, it is impossible to ever revisit a polytope. Thus, termination is guaranteed, but no polynomial bound on the number of iterations is available. We suspect that as in the case of the simplex algorithm for linear programming, input data resulting in an extremely large number of iterations may exist, but would have a delicate structure that is unlikely to arise in practice.
3.3 FURTHER COMMON LAYERS
Convolution: It is straightforward to represent a convolutional layer as a linear layer by constructing the appropriate Toeplitz matrix, and thus our method applies directly.
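For instance, the explicit matrix of a (bias-free) convolution can be materialized by applying the layer to each standard basis vector of the flattened input; a sketch assuming a PyTorch Conv2d layer and a fixed input shape:

import torch

def conv_as_matrix(conv, input_shape):
    # Columns of the matrix are the flattened responses to unit inputs, so that
    # matrix @ x.flatten() reproduces conv(x).flatten() (up to the bias term).
    n_in = int(torch.tensor(input_shape).prod())
    cols = []
    with torch.no_grad():
        for i in range(n_in):
            e = torch.zeros(n_in)
            e[i] = 1.0
            out = torch.nn.functional.conv2d(
                e.reshape(1, *input_shape), conv.weight, bias=None,
                stride=conv.stride, padding=conv.padding)
            cols.append(out.flatten())
    return torch.stack(cols, dim=1)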
Batch-normalization: Batch norm layers are simply affine functions of their inputs at test-time, and the transformations they induce can be easily included in a linear layer. Our method thus applies.
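Concretely, a test-time batch-norm layer can be folded into the preceding affine map z ↦ Lz + k; a small sketch (assuming the per-channel statistics have already been broadcast to one value per output coordinate of L):

import numpy as np

def fold_batchnorm(L, k, gamma, beta, running_mean, running_var, eps=1e-5):
    # At test time BN computes gamma * (x - mean) / sqrt(var + eps) + beta,
    # i.e., an affine map a * x + b applied coordinate-wise.
    a = gamma / np.sqrt(running_var + eps)
    b = beta - a * running_mean
    return a[:, None] * L, a * k + b  # folded (L, k)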
Max pooling: Max-pool layers, like ReLUs, are piecewise linear functions of their inputs, so constraints coming from the equality of max-pool outputs also lead to feasible regions that are the union of polytopes. This is a simple extension left for future work due to a lack of space.
Other activation functions and architectures: Injective activation functions such as Leaky ReLU, ELU and sigmoid will not lead to additional collisions. Further, since they are not piecewise linear, a different descent algorithm would be needed. We note that our framework cannot find collisions in networks with both forward and backward flows, such as those with attention.
4 EVALUATION
In this section, we demonstrate how the methodology outlined in the previous sections for determining the robustness of a fixed feature extractor can be used in practice. Our detailed results provide insights on how the overall robustness of a neural network evolves through its different layers and is impacted by the training procedure used.
4.1 FROM COLLISION-FINDING TO LOWER BOUNDS
We find lower bounds on robust loss over the training set by minimizing a loss function over GV,N,f .
Vertices V from training data: The vertex set V is a representation of the training data. We use 2-class problems derived from MNIST (LeCun & Cortes, 1998) or Fashion MNIST (Xiao et al., 2017) as in previous work (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021).
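As an example, the vertex set for one of these 2-class problems could be built as follows (a sketch; the particular digit classes chosen are an assumption):

import numpy as np
from torchvision import datasets

def two_class_vertices(class_a=3, class_b=7, root="./data"):
    # Each training image of the two chosen classes becomes a vertex (x, y),
    # with label +1 or -1 and pixels flattened and scaled to [0, 1].
    mnist = datasets.MNIST(root=root, train=True, download=True)
    X = mnist.data.numpy().reshape(len(mnist), -1) / 255.0
    labels = mnist.targets.numpy()
    keep = (labels == class_a) | (labels == class_b)
    y = np.where(labels[keep] == class_a, 1, -1)
    return X[keep], y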
Neighborhood function N : We use the common `2-norm ball constraint (Madry et al., 2018), in which the adversary’s strength is parametrized by the radius ε of the ball.2
Fixed feature extractors f : We use the composition of the layers of convolutional and fully-connected DNNs as our fixed feature extractors (details in Appendix E). The first layer of any of these networks (before a non-linear activation is applied) behaves as a linear feature extractor. Subsequent layers behave as non-linear feature extractors. These networks are trained with standard cross-entropy loss minimization or with robust training, using either PGD-based adversarial training (Madry et al., 2018) or the TRADES loss (Zhang et al., 2019).
Edge set E from collision finding: Algorithm 2 provides a greedy, iterative method to find collisions between feature representations of pairs of points from different classes. Each successful collision is an edge in the bipartite conflict graph. Each iteration of this algorithm can be cast as a convex program in the form of a linear cone (as detailed in Appendix D). We solve this linear cone program using CVXOPT (Andersen et al., 2013). Since we are working with an `2-norm constrained adversary, we can speed up computation by projecting the inputs onto the space spanned by the right singular vectors of the first linear layer. For linear convolutional layers, we cast the convolution operation as a matrix multiplication to enable the use of the closed form derived in Section 3.1. We also experimented with a modification of the Auto-PGD (Croce & Hein, 2020) algorithm to find collisions at deeper layers with fixed budgets as done in Engstrom et al. (2019). However, this was less effective and more expensive at finding collisions, so all further results use Algorithm 2.
Computing the lower bound from the conflict graph: The conflict graph determines the set of possible output probabilities for each vertex (training data point) for the optimal classifier. Minimizing the 0 − 1 and cross-entropy losses over this graph thus results in a lower bound over these losses since any classifier must incur a larger loss than the optimal classifier, by definition. We use the method from Bhagoji et al. (2021) over the conflict graphs we derive from deeper layer representations. Results in the main body are for the cross-entropy loss (others in Appendix F).
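Concretely, given the vertex probabilities and the edge list of the (sub)graph, the cross-entropy lower bound is the value of a small convex program over the polytope of Lemma 1; a sketch using CVXPY (variable and function names are ours):

import cvxpy as cp
import numpy as np

def cross_entropy_lower_bound(probs, edges):
    # probs[v] is the empirical probability P({v}); edges lists conflicting
    # vertex-index pairs (u, v). q[v] is the correct-classification probability.
    q = cp.Variable(len(probs))
    constraints = [q >= 1e-6, q <= 1]
    constraints += [q[u] + q[v] <= 1 for (u, v) in edges]
    objective = cp.Minimize(cp.sum(cp.multiply(np.asarray(probs), -cp.log(q))))
    cp.Problem(objective, constraints).solve()
    return objective.value, q.value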
4.2 RESULTS
Interpreting lower bound values: The lower bound on the input space representations is computed by considering all possible classifiers and is thus the best possible, while bounds for other representations are computed by restricting the space of possible classifiers. Intuitively, the latter will always be larger than or equal to the former. The key metric to understanding the robustness of feature extractors is to check how much larger the corresponding lower bounds are. The magnitude of this difference measures the contribution of that layer to the overall network’s lack of robustness.
Robustness of linear layer representations: Using the procedure outlined in Section 3.1, we find collisions after the first linear layer and construct a conflict graph for several models (Fig. 2).
2We are aware of the critiques of this constraint (Gilmer et al., 2018) and only use it to compare with previous work.
How does robustness evolve across layers? Having looked at how robust linear layer representations are, it is pertinent to consider representations derived from layers deeper in the network. Of particular interest are post-ReLU representations for models with benign (Fig. 3) and robust training (Fig. 4). We find that the first ReLU activation layer contributes significantly to an increase in the lower bound for both benign and robustly trained networks. We observe that post-ReLU representations tend to be sparse, leading to an easier search problem and a larger number of collisions. The second linear layer, on the other hand, does not lead to much additional increase in the loss. This is a property of the particular architecture we use, and a smaller linear layer is likely to lead to larger increases in loss. Finally, the second set of ReLU activations does have a measurable impact on robustness, particularly for the benign network. The impact of layer width on robustness is discussed in Appendix F.1.
How does the parametrization of robust training impact layer-wise robustness? As expected, layers extracted from a robustly trained network are more robust than their benign counterparts for corresponding values of ε. We find a significant difference between PGD-based adversarial training and TRADES in terms of the robustness of their first linear layers (Fig. 2), but this largely disappears by the second ReLU activation layer (Fig. 5). Interestingly, we observe a phenomenon where layers of a network robustly trained using higher values of εtrain can be less robust than those using a lower value, when the value of ε at which evaluation is performed is less than the higher εtrain.
Impact of linear convolutional layers on robustness: As discussed in Section 3.3, the first convolutional layer can be thought of as simply a matrix multiplication, although the size of the resulting matrix can be large. We find that since the effective dimension of the resulting features is much larger than the input dimension for the datasets we consider, the linear convolutional layer does not lead to an increase in the lower bound (Fig. 11 in Appendix).
5 RELATED WORK AND DISCUSSION
5.1 RELATED WORK
Lower bounds on robustness: For cases when the data distribution is specified, Dohmatob (2019) and Mahloujifar et al. (2019) use the ‘blowup’ property of concentrated measure spaces to determine bounds on the robust loss, given the loss attainable in the absence of an adversary.
Certification and Verification: Work on certified robustness has considered techniques for training neural networks such that the resulting models are provably robust to perturbations upper bounded by a given budget (Kolter & Wong, 2018; Raghunathan et al., 2018; Cohen et al., 2019; Li et al., 2020). Typically, these models can only be certified to be robust to small budgets. In contrast, our work provides lower bounds on the robustness of partial models which are applicable across a large range of budgets. Approaches to verifying the robustness of neural networks (Bunel et al., 2017; Tjeng et al., 2019; Gowal et al., 2018) are closely related to our work, but differ in that they consider fixed end-to-end networks while we focus on a layer-by-layer analysis, allowing us to argue about the robustness of classifiers trained on top of given feature extractors.
5.2 DISCUSSION
Implications for training robust models: Our results indicate that layers in the network that reduce the effective dimension of their incoming inputs have the largest negative impact on robustness (see further results in Appendix F.1). Two prominent examples of this are ReLU layers that reduce effective dimension by only considering non-negative outputs and fully connected layers with fewer outputs than inputs. On the other hand, linear convolutional layers do not have a negative impact on robustness. This indicates that not reducing the effective dimension of any deeper feature to be lower than that of the input data is likely to benefit robustness. Further, our results confirm that the use of larger values of εtrain does not necessarily translate to higher robustness at lower budgets. This points towards the need for algorithms that can effectively train a network to be robust to more than a single adversary at a time. Finally, we find a qualitative difference in the layers learned using PGD-training and TRADES, implying interesting learning dynamics with different robust losses.
Extending to state-of-the-art models and datasets: All of our experiments in this paper are on simple models and datasets, which demonstrate the feasibility and use of our method. However, state-of-the-art feature extractors for datasets such as ImageNet are far deeper than those considered in this paper. Thus, our algorithm would need to be made considerably faster to handle these cases. While our framework can handle skip connections, networks with attention are beyond its scope. Nevertheless, the feature extractors we consider are robust for the tasks we evaluate them on, making our results and conclusions representative.
A PROOFS
Proof of Lemma 1. Suppose that e = ((u, 1), (v,−1)) ∈ E . Then, there are some z̃ ∈ Z , ũ ∈ N(u), and ṽ ∈ N(v) such that f(ũ) = f(ṽ) = z̃. We have qu ≤ h(f(ũ))1 = h(z̃)1, qv ≤ h(f(ṽ))−1 = h(z̃)−1, and h(z̃)1 + h(z̃)−1 = 1. Combining these gives the constraint (Bq)e ≤ 1, which appears in (1).
Now, we will show that each vector q in the polytope is achievable by some h. The construction is simple: at each point in the feature space, assign the largest possible probability to class 1: let h(z̃)1 = supw:z̃∈f(N(w)) qw and h(z̃)−1 = 1 − h(z̃)1. This achieves the desired performance for examples from class 1:
inf ũ∈N(u) h(f(ũ))1 = inf ũ∈N(u) sup w:f(ũ)∈f(N(w)) qw ≥ inf ũ∈N(u) qu = qu.
For an example v in class −1 we have
inf ṽ∈N(v) h(f(ṽ))−1 = inf ṽ∈N(v) ( 1 − sup w:f(ṽ)∈f(N(w)) qw )
                     = inf ṽ∈N(v) inf w:f(ṽ)∈f(N(w)) (1 − qw)
                     = inf w:((w,1),(v,−1))∈E (1 − qw)
                     ≥ qv.
The final inequality uses the fact that q satisfies Bq ≤ 1.
B INDUCED DISTANCE IS NOT A DISTANCE ON Z
The fact that ε∗f is defined on X is essential. It is not possible to interpret the induced distance as a distance in the feature space Z . This is because f(Nε(x)), the set of features of points near x, cannot be derived from f(N0(x)), the set of features of the uncorrupted version of x: distinct choices of x may lead to the same features f(N0(x)), but f may vary more in the neighborhood of one choice of x than the other.
An example is helpful to illustrate this point. Let X = X̃ = R2, let Z = R, let Nε(x) be the closed `2 ball of radius ε around x, and let f(x0, x1) = arctan(x1/x0) for x ≠ 0 (the value that we pick for f(0, x1) is irrelevant). In other words, f finds the angle between the horizontal axis and the line containing x. The range of values that the adversary can cause f(x̃) to take depends on ‖x‖. If ‖x‖2 < ε, f(x̃) can be any angle from 0 to π, and if ‖x‖2 ≥ ε, |f(x̃)− f(x)| ≤ arcsin(ε/‖x‖2). This example also illustrates that εf is not necessarily a metric, even in cases where d is a metric. Given u, v ∈ R2, we have εf (u, αu) = 0, εf (v, αv) = 0, and εf (αu, αv) can be made arbitrarily small by selecting α to be small. Despite this, εf (u, v) will be on the order of min(‖u‖, ‖v‖). An alternative approach to studying the induced distance is to define an adversarial classification problem on Z by taking NZε (z) = f(Nε(f−1({z}))). Note that this construction only makes sense when X = X̃ and thus the domain of f is X . This is more conservative: for any feature point z it considers the worst-case x with feature z: the x in whose neighborhood f varies the most.
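This behaviour is easy to check numerically; a small Monte-Carlo sketch of the example above:

import numpy as np

def max_angle_deviation(x, eps, n_samples=200_000, seed=0):
    # Estimate sup |f(x_tilde) - f(x)| over the l2 ball of radius eps around x,
    # for f(x0, x1) = arctan(x1 / x0); compare with arcsin(eps / ||x||) when ||x|| >= eps.
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(n_samples, 2))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    d *= eps * np.sqrt(rng.uniform(size=(n_samples, 1)))
    pts = x + d
    f = np.arctan(pts[:, 1] / pts[:, 0])
    return np.abs(f - np.arctan(x[1] / x[0])).max()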
C DESCENT ALGORITHM FOR MULTIPLE LAYER NETWORKS
We start by repeating the notation used in Section 3.2: f = f (`) ◦ . . . ◦ f (1), where the layer i function is f (i) : Rni → Rni+1 , f (i)(z) = (k(i) + L(i)z)+. Then εf (u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f (`) ◦ . . . ◦ f (1))(u + δ) = (f (`) ◦ . . . ◦ f (1))(v + δ′).
Using the local linear approximation, the equality constraint becomes
diag(s(`))(k(`) + L(`)(. . . diag(s(1))(k(1) + L(1)(u + δ)) . . .))
= diag(s′(`))(k(`) + L(`)(. . . diag(s′(1))(k(1) + L(1)(v + δ′)) . . .)),
and for each 1 ≤ i ≤ `, the inequalities
(2 diag(s(i)) − I)(k(i) + L(i)(. . . diag(s(1))(k(1) + L(1)(u + δ)) . . .)) ≥ 0,
(2 diag(s′(i)) − I)(k(i) + L(i)(. . . diag(s′(1))(k(1) + L(1)(v + δ′)) . . .)) ≥ 0
must hold in order for the linear approximation to be valid. Any collision must be in a polytope with s(`) = s′(`), which is why we only needed one set of s variables in the single layer case. However, ReLU activation variables for earlier layers cannot be merged.
The main modification to the algorithm is to the process of changing the s variables after each iteration. The variables for all but the final layer are allowed to change independently, while the variables in the final layer are changed together, following the same rule as in the single-layer algorithm.
Algorithm 2 Descent with midpoint initialization
Input: u, v ∈ Rn1, L ∈ Rn2×n1, k ∈ Rn2
Output: ε ∈ R, δ, δ′ ∈ Rn1
1: z ← k + (1/2)L(u + v), ε ← (1/2)‖u − v‖
2: for 0 ≤ j < n2 do
3:   sj ← 1(zj > 0)
4: end for
5: repeat
6:   εold ← ε
7:   (ε, δ, δ′, z, z′) ← ConeLP(L(1) . . . L(`), k(1) . . . k(`), u, v, s(1) . . . s(`), s′(1) . . . s′(`−1))
8:   sold ← s
9:   for 1 ≤ i ≤ ` − 1 do
10:    for 0 ≤ j < ni+1 do
11:      if z(i)j > 0 then
12:        s(i)j ← 1 − s(i)j
13:      end if
14:      if z′(i)j > 0 then
15:        s′(i)j ← 1 − s′(i)j
16:      end if
17:    end for
18:  end for
19:  for 0 ≤ j < n`+1 do
20:    if z(`)j > 0 and z′(`)j > 0 then
21:      s(`)j ← 1 − s(`)j
22:    end if
23:  end for
24: until sold = s or εold = ε
D CONVERTING COLLISION FINDING TO A LINEAR CONE PROGRAM
In this section, we demonstrate how the collision finding problem after one linear and one ReLU layer can be cast as a linear cone program.
We have a network with first layer v ↦ Lv + k where L ∈ Rn1×n0 and k ∈ Rn1 . Given a pair of points (v′, v′′) ∈ R2n0 we would like to search over the space of (δ′, δ′′) ∈ R2n0 such that (L(v′ + δ′) + k)+ = (L(v′′ + δ′′) + k)+. This space is always nonempty because we can take v′ + δ′ = v′′ + δ′′ = (v′ + v′′)/2.
Let S ⊆ [n1] be the subset of active ReLUs. Let F ∈ R|S|×n1 be the inclusion indicator matrix for the subset: Fi,j = 1 if and only if j ∈ S and |S ∩ [j]| = i (there are exactly i elements of S strictly smaller than j, so j is the i + 1st element of S). Let D be the diagonal matrix with Dj,j = 1 for j ∈ S and Dj,j = −1 for j 6∈ S.
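In code, D and F can be constructed directly from the active set; a NumPy sketch:

import numpy as np

def selection_matrices(S, n1):
    # D flips the sign of rows for inactive ReLUs; F selects the active rows
    # in increasing index order, matching the definitions above.
    active = np.zeros(n1, dtype=bool)
    active[list(S)] = True
    D = np.diag(np.where(active, 1.0, -1.0))
    F = np.eye(n1)[active]
    return D, F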
For j ∈ S (the active ReLUs), we need the constraint (L(v′ + δ′) + k)j = (L(v′′ + δ′′) + k)j . In matrix form, this is FL(v′ + δ′) = FL(v′′ + δ′′).
For j ∈ S, we also need (L(v′ + δ′) + k)j ≥ 0 and (L(v′′ + δ′′) + k)j ≥ 0, and for j ∉ S, we need (L(v′ + δ′) + k)j ≤ 0 and (L(v′′ + δ′′) + k)j ≤ 0. In matrix form, these are D(L(v′ + δ′) + k) ≥ 0, D(L(v′′ + δ′′) + k) ≥ 0. Our objective is max(‖δ′‖2, ‖δ′′‖2). We replace it with a linear objective by adding two additional variables (t′, t′′) satisfying t′ ≥ ‖δ′‖2 and t′′ ≥ ‖δ′′‖2. The CVXOPT system is
minimize c^T x
subject to Gx + s = h
           Ax = b
           s ⪰ 0
The cones involved are a nonnegative orthant of dimension 2n1 and two second order cones, each of dimension n0 + 1. Thus s = (D(L(v′ + δ′) + k), D(L(v′′ + δ′′) + k), t′, δ′, t′′, δ′′). We then let x = (t, δ′, δ′′). The matrix relating s and x is G ∈ R(n1+n1+1+n0+1+n0)×(1+n0+n0) and s = h−Gx. The block structure is
G =
[  0   −DL    0
   0    0   −DL
  −1    0    0
   0   −I    0
  −1    0    0
   0    0   −I ]

h =
[ D(Lv′ + k)
  D(Lv′′ + k)
  0
  0
  0
  0 ]

We use Ax = b to encode FL(δ′ − δ′′) = −FL(v′ − v′′), so A ∈ R|S|×(1+n0+n0), b ∈ R|S|, and
A = ( 0  FL  −FL ),  b = −FL(v′ − v′′).
Finally,
c = ( 1  0  0 ).
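Putting the blocks together, a sketch of assembling and solving this system with CVXOPT (using D and F as constructed above; helper and variable names are ours, and error handling is omitted):

import numpy as np
from cvxopt import matrix, solvers

def solve_collision_conelp(L, k, v1, v2, D, F):
    # Variables x = (t, delta1, delta2); rows of G/h follow the block structure above:
    # 2*n1 linear inequalities, then two second-order cone blocks of size n0 + 1.
    n1, n0 = L.shape
    DL = D @ L
    G = np.zeros((2 * n1 + 2 * (n0 + 1), 1 + 2 * n0))
    G[:n1, 1:1 + n0] = -DL
    G[n1:2 * n1, 1 + n0:] = -DL
    G[2 * n1, 0] = -1.0
    G[2 * n1 + 1:2 * n1 + 1 + n0, 1:1 + n0] = -np.eye(n0)
    G[2 * n1 + 1 + n0, 0] = -1.0
    G[2 * n1 + 2 + n0:, 1 + n0:] = -np.eye(n0)
    h = np.concatenate([D @ (L @ v1 + k), D @ (L @ v2 + k), np.zeros(2 * (n0 + 1))])
    A = np.hstack([np.zeros((F.shape[0], 1)), F @ L, -(F @ L)])
    b = -(F @ L) @ (v1 - v2)
    c = np.zeros(1 + 2 * n0)
    c[0] = 1.0
    dims = {'l': 2 * n1, 'q': [n0 + 1, n0 + 1], 's': []}
    sol = solvers.conelp(matrix(c), matrix(G), matrix(h), dims, matrix(A), matrix(b))
    x = np.array(sol['x']).flatten()
    return x[0], x[1:1 + n0], x[1 + n0:]  # epsilon, delta', delta''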
D.1 FACTORIZED L
If n1 < n0, then we can take advantage of the fact that the rank of L is at most n1. Let L = UΣV T .
Replace L(v + δ) with UΣ(V T v + δ̃), where the reduced variable δ̃ plays the role of V T δ, and replace n0 with the number of columns of Σ.
The new G and A are

G =
[  0   −DUΣ    0
   0     0   −DUΣ
  −1     0     0
   0    −I     0
  −1     0     0
   0     0    −I ]

A = ( 0  FUΣ  −FUΣ ).

Also, the length of c is reduced. The constant vectors h and b are unchanged.
E FURTHER EXPERIMENTAL DETAILS
We consider the following two architectures for our experiments:
1. 3-layer fully connected neural network (FCNN): 300 FC - ReLU - 200 FC - ReLU - 2 FC
2. 4-layer convolutional neural network (CNN): 20×5×5 conv. - BN - ReLU - 2×2 MaxPool - 50×5×5 conv. - BN - ReLU - 2×2 MaxPool - 500 FC - 2 FC
We also construct a ‘Small’ and ‘Smaller’ version of the FCNN with layers that are 2× and 4× narrower respectively.
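For reference, a sketch of these architectures in PyTorch (the input sizes, the ReLU after the 500-unit layer, and default strides/paddings are assumptions where the description above does not specify them):

import torch.nn as nn

def make_fcnn(divisor=1):
    # divisor = 2 or 4 gives the 'Small' and 'Smaller' variants.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 300 // divisor), nn.ReLU(),
        nn.Linear(300 // divisor, 200 // divisor), nn.ReLU(),
        nn.Linear(200 // divisor, 2),
    )

def make_cnn():
    return nn.Sequential(
        nn.Conv2d(1, 20, 5), nn.BatchNorm2d(20), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(20, 50, 5), nn.BatchNorm2d(50), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(50 * 4 * 4, 500), nn.ReLU(),
        nn.Linear(500, 2),
    )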
F ADDITIONAL RESULTS
F.1 IMPACT OF REDUCTION OF NETWORK SIZE
In Figure 6, we compare the robustness of representations obtained from fully connected networks with decreasing layer sizes. The ‘Regular’ network is the one used throughout, while the ‘Small’ and ‘Smaller’ networks have corresponding layers that are 2× and 4× narrower respectively. We can clearly see that as the width of the feature extractor decreases, so does its robustness.
F.2 0− 1 LOSS RESULTS
In Figures 7, 8, 9 and 10 we provide lower bounds on the 0 − 1 loss in the same settings as those considered in the main body of the paper. We note that the results and conclusions remain the same qualitatively.
F.3 LINEAR CONVOLUTIONAL LAYERS
In Figure 11, we can see that the representations extracted from the first linear layer of a convolutional network do not have any negative impact on the robustness of the overall model. | 1. What is the main contribution of the paper regarding robustness in neural networks?
2. What are the strengths of the proposed approach, particularly in its application to common neural architectures?
3. What are the weaknesses of the paper, especially in terms of the method's limitations and unclear notations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's experimental results or implications for model training? | Summary Of The Paper
Review | Summary Of The Paper
This paper adapts previous work on the lower bounds of robustness from any possible measurable function to commonly used neural architectures for image classification. In order to do this, they create an algorithm based on linear programming to find collisions in feature representations at different layers in neural networks. Once the collisions are found, the minimum achievable loss by a classifier using these features is computed. Experiments are done on fully connected neural networks trained on MNIST and Fashion MNIST with standard and robust training methods. The authors find that adversarial training produces features closer to their predicted lower bound than networks trained with TRADES loss or with standard training.
Review
Strengths:
This paper expands on previous theory while making it directly applicable to neural networks that are commonly used.
They provide an algorithm that calculates lower bounds on robustness for real networks.
Experimental results on multiple datasets and robust training algorithms are provided, comparing how close the different methods are to their theoretical lower bounds.
Based on their experimental results, they discuss implications for model training. Specifically they note that ReLU seems to reduce the robustness of features in later layers of the network.
Weaknesses:
There is no discussion of other network architectures for which this method does not work. The method seems to work on the fact that ReLU layers reduce the dimensionality of the space that must be searched for collisions, so it would be interesting to hear about other activation functions for which that is not the case. The subsection "Other neural network architectures" only seems to cover networks where this method works. It also seems to require that the entire network (including activation functions) be expressible as a series of matrix multiplications, so I'm not sure of the applicability of this to architectures with, for example, attention.
Some notation and definitions are unclear or hard to follow. For example, in section 3: "We take the neighborhood of any point to be a scaled shifted version of this ball: N_\epsilon(x): n + \epsilon \mathcal{B}", but x is missing from the right hand side.
In subsection 3.2 "... satisfying the constraint f(u + \delta) = f(v + \delta)" seems like it should be "f(u + \delta) = f(v + \delta')". \epsilon seems to be a bit overloaded as well, where it is a function of two points in section in 2.1 and later a real number.
In equation 1 in lemma 1, since q is a vector, are the inequalities acting coordinate-wise on the vector? In that case, is q a vector in [0, 1]^\mathcal{V}? I have a similar question regarding the equation in definition 1, as q is also a vector in an inequality with a real number and another vector. |
ICLR | Title
Lower Bounds on the Robustness of Fixed Feature Extractors to Test-time Adversaries
Abstract
Understanding the robustness of machine learning models to adversarial examples generated by test-time adversaries is a problem of great interest. Recent theoretical work has derived lower bounds on how robust any model can be, when a data distribution and attacker constraints are specified. However, these bounds only apply to arbitrary classification functions and do not account for specific architectures and models used in practice, such as neural networks. In this paper, we develop a methodology to analyze the robustness of fixed feature extractors, which in turn provides bounds on the robustness of any classifier trained on top of it. The tightness of these bounds relies on the effectiveness of the method used to find collisions between pairs of perturbed examples at deeper layers. For linear feature extractors, we provide closed-form expressions for collision finding while for arbitrary feature extractors, we propose a bespoke algorithm based on the iterative solution of a convex program that provably finds collisions. We utilize our bounds to identify the layers of robustly trained models that contribute the most to a lack of robustness, as well as compare the same layer across different training methods to provide a quantitative comparison of their relative robustness.
1 INTRODUCTION
The robustness of machine learning models to test-time (evasion) attacks, particularly via adversarial examples, has been studied extensively from an empirical perspective (Croce et al., 2020; Papernot et al., 2016; Li et al., 2020; Biggio & Roli, 2017). A plethora of attacks (Szegedy et al., 2013; Carlini & Wagner, 2017; Bhagoji et al., 2018; Croce & Hein, 2020) and defenses (Madry et al., 2018; Zhang et al., 2019) has been proposed. The resulting arms race between proposed attacks and defenses has led recent theoretical work (Dohmatob, 2019; Cullina et al., 2018; Mahloujifar et al., 2019; Bhagoji et al., 2019; 2021; Montasser et al., 2019) to focus on understanding the fundamental bounds of learning in the presence of adversarially perturbed inputs, in terms of both the existence of robust classifiers and the sample complexity required to learn them.
A line of research on this front has investigated lower bounds on the robustness attainable by any classifier when classifying data from distributions modified by adversarial perturbations (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021). In particular, information-theoretic lower bounds on the minimum possible robust 0 − 1 and cross-entropy losses have been derived for arbitrary discrete distributions and very general attack models. However, the bounds from these papers are only applicable when considering the set of possible classifiers to be all measurable functions, which does not translate to practice. The classifier architecture and even some portion of the parameters are fixed (such as in transfer learning) in practical approaches to training machine learning models.
In this paper, we leverage this line of work to find lower bounds on the robustness of commonly used, fixed feature extractors. This provides a principled method to compare the robustness of feature extractors obtained by different training methods to arbitrary test-time adversaries. The key question we answer is:
What is the minimum robust loss incurred by any classifier trained on top of a fixed feature extractor?
Two challenges for training robust machine learning models motivate this question. First, since transfer learning is now common practice, it is incumbent upon the model trainer to choose a robust pre-trained feature extractor. Our work offers a principled and quantitative method to choose among
feature extractors obtained from different layers and training methods. Second, we are able to shed light on the impact of common architectural choices, such as activation functions and dimensionality reduction on robustness. This arises from our study of how the lower bound evolves from layer to layer within deep neural networks.
Importance of lower bounds on robustness: To determine classifier-agnostic lower bounds on robustness, earlier work focused on the interactions between perturbed points from different classes in the input space, captured through the construction of a conflict graph. Minimizing an appropriately defined loss function over this graph determines an information-theoretic lower bound on the loss. Intuitively, the denser the graph is, the higher the lower bound. These classifier-agnostic bounds are able to determine the plausibility of robust classification for a given adversary and dataset, and highlight practically relevant perturbation budgets.
1.1 CONTRIBUTIONS
Lower bounds on robustness for fixed feature extractors: We significantly extend prior work by deriving lower bounds that depend on the architecture and weights of a fixed feature extractors. We construct a distance function over the input space that depends on the feature extractor in use. Our results have implications for both analyzing the robustness of already trained classifiers as well as determining feature extractors to use for applications such as transfer learning.
Bespoke algorithms for collision finding: For linear feature extractors such as the first layer of a neural network, we determine exact, closed form expressions to find collisions. Collision finding for non-linear feature extractors is a non-convex optimization problem, for which we propose a custom algorithm. We approach the collision-finding problem with a descent algorithm that solves a sequence of convex optimization problems over polytopes. This algorithm has the structural advantage of maintaining feasibility (for some adversarial budget constraint) at each step. Despite the fact that it does not find a global minimum, the solutions found are useful for obtaining not just approximations but actual lower bounds on optimal adversarial classification loss.
Empirical findings: We utilize our method to find numerical lower bounds on the robustness of fixed feature extractors trained on two different datasets with both fully-connected and convolutional architectures. Our results indicate that the use of dimensionality-reducing linear layers as well as the effective compression induced by ReLU activations lead to significantly less robust networks. Further, we confirm the oft-cited observation that mismatches between the adversary’s budget at train and test time impacts robustness negatively. Taken together, these findings point towards future design considerations for robust models.
2 DERIVING LOWER BOUNDS FOR FIXED FEATURE EXTRACTORS
In this section, we develop a method for evaluating the robustness of a feature extractor for classification in the presence of a test-time adversary. We characterize the optimal adversarial loss achievable by any classifier that uses the fixed feature extractor as its initial layer. This characterization is based on a conflict graph derived from a discrete data distribution and the constraints of the adversary.
Examples and labels: We consider a supervised classification problem with a test-time adversary. We have an example space X and a label space Y = {−1, 1}. Labeled examples are sampled from a joint probability distribution P over X × Y . Test-time adversary: The test-time adversary modifies a natural example subject to some constraints to generate an adversarial example that will be classified (Goodfellow et al., 2015; Szegedy et al., 2013; Carlini & Wagner, 2017). Formally, the adversary samples a labeled example (x, y) from P and selects x̃ ∈ N(x), where N : X → 2X̃ is the neighborhood function encoding the constraints on the adversary and X̃ is the space of adversarial examples.1 For all x, N(x) must be nonempty. This definition encompasses the `p family of constraints widely used in previous work.
1In most (but not all) settings that have previously been studied, X̃ = X . We believe that making the distinction helps to clarify some of our definitions: their applicability in this general context affects what properties they can be expected to have.
Measuring adversarial loss: We consider ‘soft’ classification functions (or classifiers) that map examples to probability distributions over the classes. These are h : X → ∆Y , where ∆Y = {p ∈ RY : p ≥ 0, ∑ y∈Y py = 1}. We measure classification performance with a loss function ` : ∆Y × Y → R, so the expected performance of a classifier h is E[supx̃∈N(x) `(h(x̃), y)], where (x, y) ∼ P . In the two class setting, h(x̃)−1 = 1−h(x̃)1, so any loss function that treats the classes symmetrically can be expressed as `(p, y) = `′(py). Additionally, `′ should be decreasing. Together, these allow the optimization over adversarial examples to be moved inside the loss function, giving E [ ` ( inf x̃∈N(x) h(x̃)y, y )] . Thus, the adversarial optimization can be analyzed by itself.
Definition 1. For a soft classifier h, the correct-classification probability qv that it achieves on an example v = (x, y) in the presence of an adversary is qv = inf x̃∈N(x) h(x̃)y . The space of achievable correct classification probabilities is
PV,N,H = ⋃ h∈H { q ∈ RV : ∀(x, y) ∈ V, 0 ≤ q(x,y) ≤ inf x̃∈N(x) h(x̃)y } .
Bhagoji et al. (2021) characterized PV,N,H in the case that H is the class of measurable functions X → ∆Y . For a data distribution P that is discrete with finite support V ⊆ X × Y , this allows the minimum adversarial loss achievable to be expressed as an optimization over PV,N,H:
inf h∈H E(x,y)∼P [ sup x̃∈N(x) `(h(x̃), y) ] = inf q∈PV,N,H ∑ v∈V P({v}) `′(qv).
We extend their approach to analyze feature extractors as follows. Given a feature space Z and a fixed, measurable feature extractor f : X̃ → Z , define Hf = {h ∈ H : h = g ◦ f}: an element of Hf is some measureable g : Z → ∆Y composed with f . Our aim is to characterize PV,N,Hf so that we can optimize loss functions over it to evaluate the suitability of f .
Conflict graph: In Bhagoji et al. (2021), the notion of a conflict graph was used to record neighborhood intersections for pairs of points from different classes. When such an intersection exists, it is impossible for any classifier to correctly classify both of those points in the adversarial setting. We extend this notion to apply to the family of classifiers using a fixed feature extractor f . In our setting, a conflict exists between a pair of points when each of them has a neighbor that is mapped to the same point in the feature space.
We call this the conflict graph GV,N,f , where V ⊆ X × Y . When we apply GV,N,f in order to understand classification of labeled examples with distribution P , we take V to be the support of P . The graph is bipartite: the vertex set is partitioned into parts Vc = V ∩ (X × {c}). The edge set is
EV,N,f = {((u, 1), (v,−1)) ∈ V1 × V−1 : ∃ũ ∈ N(u), ṽ ∈ N(v) such that f(ũ) = f(ṽ)}.
Lemma 1 (Feasible output probabilities). The set of correct classification probability vectors for support points V , adversarial constraint N , and hypothesis classHf is
PV,N,Hf = {q ∈ RV : q ≥ 0, q ≤ 1, Bq ≤ 1} (1)
where B ∈ RE×V is the edge incidence matrix of the conflict graph GV,N,f .
The proof is given in Appendix A and follows the structure of the proof in Bhagoji et al. (2021).
Approximating GV,N,f and PV,N,Hf : If instead of knowing the true conflict graph GV,N,f , we have some subgraph, then we can find a polytope that is a superset of the true PV,N,Hf . If we minimize some expected loss over this proxy polytope, we obtain a lower bound on the optimal loss over Hf . Because subgraphs of the conflict graph lead to valid lower bounds on optimal classification performance, we can use this method to evaluate the quality of a feature extractor f even if exact computation of the conflict graph is computationally intractable.
2.1 DISTANCE INTERPRETATION
The construction of a conflict graph and characterization of the feasible correct classification probabilities from the previous section apply to any feature extractor and neighborhood constraint. In
the most commonly studied settings, the neighborhood constraint arises from a distance function: Nε(x) = {x̃ ∈ X̃ : d(x, x̃) ≤ ε}. This parameter ε is the adversarial budget constraint. For any two examples u and v, a natural quantity to consider is the smallest adversarial budget that would cause the edge (u, v) to appear in the conflict graph:
ε∗(u, v) = inf{ε ≥ 0 : Nε(u) ∩ Nε(v) ≠ ∅} = inf x̃∈X̃ max(d(u, x̃), d(v, x̃)).
We will call this the distance on X induced by d. When X = X̃ = Rd and d(x, x′) = ‖x−x′‖p, the minimal adversarial budget is simply (1/2)‖u − v‖p. Because the relationship to the distance used in the adversarial budget constraint is so simple, this quantity is not usually discussed independently. This definition generalizes easily to the setting of classification with a particular feature extractor, but the resulting quantity is much more interesting.
Definition 2. The distance induced on X by d and f is the minimum adversarial budget required to create a conflict between u and v after a feature extractor f :
ε∗f (u, v) = inf{ε ≥ 0 : f(Nε(u)) ∩ f(Nε(v)) ≠ ∅} = inf ũ,ṽ∈X̃ :f(ũ)=f(ṽ) max(d(u, ũ), d(v, ṽ)). (2)
This reduces to ε∗(u, v) when f is the identity function, or more generally any injective function. Observe that any choice of ũ and ṽ in (2) provide an upper bound on ε∗f (u, v), which is useful for finding a subgraph of GV,N,f and lower bounds on optimal classification performance inHf . Figure 1 illustrates the computation of ε∗f for a simple f that is related to the ReLU function.
Induced distance is not a distance on the feature space: The fact ε∗f is defined onX is essential: it is not possible to interpret it as a distance on Z . For a full explanation of this point, see Appendix B.
3 DISTANCE COMPUTATIONS FOR PRACTICAL FEATURE EXTRACTORS
Throughout the remainder of the paper, we will work in a more concrete setting. We assume that our example space is a real vector space and that adversarial examples are from the same space: X = X̃ = Rn1 . Let B ⊆ X be a nonempty, closed, convex, origin-symmetric set. These conditions imply that the zero vector is contained in B. We take the neighborhood of any point to be a scaled, shifted version of this ball: Nε(x) = x + εB. Neighborhood constraints derived from `p norms fit into this class: we encode them by defining B = {δ ∈ Rn1 : ‖δ‖p ≤ 1}, which results in Nε(x) = {x̃ ∈ Rn1 : ‖x̃− x‖p ≤ ε}. We will focus on `2-based neighborhoods, but our algorithms could be adapted to any choice of B over which efficient optimization is possible.
3.1 LINEAR FEATURE EXTRACTORS
Suppose that our feature extractor is an affine linear function of the input example: f(x) = Lx+ k for some matrix L ∈ Rn2×n1 and vector k ∈ Rn2 . Then the distance εf (u, v) becomes
inf{ε ≥ 0 : {k + L(u + δ) : δ ∈ εB} ∩ {k + L(v + δ′) : δ′ ∈ εB} ≠ ∅}.
Because {(ε, δ) ∈ R1+n1 : δ ∈ εB} is a closed convex cone, εf (u, v) can be expressed as the following convex optimization problem:
inf ε subject to δ ∈ εB, δ′ ∈ εB, L(δ − δ′) = L(v − u). Because B is closed and the linear subspace is always nonempty, the infimum is achieved. Also, it is sufficient to consider (δ, δ′) satisfying δ′ = −δ: for any feasible (ε, δ, δ′), the point (ε, (δ − δ′)/2, (δ′ − δ)/2) is also feasible, has the same value, and satisfies the additional constraint. The feasibility of the symmetrized point uses the origin-symmetry of B. The simplified program is
min ε subject to δ ∈ εB, Lδ = L(v − u)/2. Thus the optimal adversarial strategy for creating conflict between u and v is intuitive: produce examples with the same features as the midpoint (u+ v)/2.
If B is the unit `2 ball, there are further simplifications. Consider the singular value decomposition L = UΣV T where we do not include zero singular values in Σ. Then the linear map given by UΣ is injective and can be canceled from the linear constraint on δ. The resulting program is
min ε subject to ‖δ‖2 ≤ ε, V T δ = (1/2)V T (v − u),
the optimal choice of δ is δ = (1/2)V V T (v − u), and εf (u, v) = (1/2)‖V T (v − u)‖2. Observe that this is the norm of a vector in Rn2 , i.e. the feature space, contrasting with our discussion in Appendix B. However, this is essentially the only case where εf (u, v) simplifies into a feature space distance.
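This closed form is simple to evaluate numerically; a NumPy sketch:

import numpy as np

def eps_f_linear(L, u, v):
    # eps_f(u, v) = 0.5 * ||V^T (v - u)||_2, with L = U Sigma V^T and
    # zero singular values (and their right singular vectors) dropped.
    _, sing, Vt = np.linalg.svd(L, full_matrices=False)
    Vt = Vt[sing > 1e-12]
    return 0.5 * np.linalg.norm(Vt @ (v - u))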
3.2 FULLY CONNECTED NEURAL NETWORKS WITH RELU ACTIVATIONS
For this feature extractor architecture, we have f = f (`) ◦ . . . ◦ f (1) where the layer i function is f (i) : Rni → Rni+1 , f (i)(z) = (k(i) + L(i)z)+. Here z+ represents the component-wise positive part of the vector z. Then εf (u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f (`) ◦ . . . ◦ f (1))(u+ δ) = (f (`) ◦ . . . ◦ f (1))(v + δ′). As in the linear case, the minimum exists because the cones are closed and the equality constraint is feasible. In contrast with the linear case, the equality constraint is nonconvex.
The local linear approximation to f (i)(z) around a point z′ is diag(s(i,z′))(k(i) + L(i)z), where s(i,z′) is a zero-one vector that depends on the sign pattern of k(i) + L(i)z′: s(i,z′)j = 1((k(i) + L(i)z′)j > 0). In other words, s(i,z′) is the ReLU activation pattern at layer i when z′ is the input to that layer.
Using these linear approximations, the feasible set of (δ, δ′) ∈ Rn1+n1 satisfying the constraint f(u + δ) = f(v + δ′) can be decomposed as a union of polytopes: each activation pattern defines a linear subspace and there are some linear inequalities specifying the region where that activation pattern actually occurs. For a one-layer network, the linear piece for pattern s is
f(z) = diag(s)(k + Lz) for diag(2s − 1)(k + Lz) ≥ 0. Thus one of the polytopes composing the feasible set is the set of (δ, δ′) ∈ Rn1+n1 satisfying
diag(s)(k + L(u+ δ)) = diag(s′)(k + L(v + δ′)),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s′)− I)(k + L(v + δ′)) ≥ 0.
In the one-layer case, the whole feasible region is covered by polytopes where s = s′. Observe that the dimension of the subspace satisfying the linear equality varies with s. When f contains multiple layers, each polytope in the feasible set is defined by a linear equality constraint involving feature vectors together with a linear inequality for the sign of each ReLU input.
We optimize over this feasible set with a descent algorithm: Algorithm 1 details the version for single-layer networks. The method generalizes to multiple layers and is used to obtain our results in Section 4.2. A full description of the general version is in Appendix C. We initialize our search in a polytope that we know to be nonempty because it contains the feature space collision induced by the midpoint of the two examples. Within each polytope, we can minimize the objective exactly by solving a linear cone program. We examine the dual variables associated with the linear inequality constraints to determine whether an adjacent polytope exists in which we can continue the search.
In the one layer case, the cone program that we solve, which depends on (L, k, u, v, s), is
min ε subject to (ε, δ, δ′) ∈ R1+n1+n1 , δ ∈ εB, δ′ ∈ εB, diag(s)L(u+ δ) = diag(s)L(v + δ′),
(2 diag(s)− I)(k + L(u+ δ)) ≥ 0, (2 diag(s)− I)(k + L(v + δ′)) ≥ 0.
Algorithm 1 Descent with midpoint initialization
Input: u, v ∈ Rn1, L ∈ Rn2×n1, k ∈ Rn2
Output: ε ∈ R, δ, δ′ ∈ Rn1
1: z ← k + (1/2)L(u + v), ε ← (1/2)‖u − v‖
2: for 0 ≤ j < n2 do
3:   sj ← 1(zj > 0)
4: end for
5: repeat
6:   εold ← ε
7:   (ε, δ, δ′, z, z′) ← ConeLP(L, k, u, v, s)
8:   sold ← s
9:   for 0 ≤ j < n2 do
10:    if zj > 0 and z′j > 0 then
11:      sj ← 1 − sj
12:    end if
13:  end for
14: until sold = s or εold = ε
We need not just the primal solution, but also the values of the dual variables associated with the linear inequalities in the solution to the dual problem. These are called z and z′ in Algorithm 1. When both zj > 0 and z′j > 0, the input to ReLU j is zero when both f(u + δ) and f(v + δ′) are computed, and the objective function could be decreased further by allowing the input to switch signs. In this case, we move to another polytope based on the new ReLU states and search there.
When we fail to find such pairs of dual variables, either the minimum is in the interior of the current polytope (and thus is supported only by cone constraints), or the minimum is on the boundary of the current polytope but there is no adjacent polytope in which we could continue the search. Thus we end the search.
Algorithm termination, convergence, and complexity: The descent algorithm will terminate in a finite number of iterations and will find a local minimum of the distance function. Since the feasible space is non-convex, any local descent procedure is not guaranteed to find the global minimum. The number of variables in the cone program is proportional to the number of ReLUs in the feature extractor plus the dimension of the example space. The time-complexity of a single iteration of the search is polynomial in the input dimension and number of ReLUs. The feasible region is the union of a finite but exponentially large number of convex polytopes. Due to the requirement that each iteration make progress, it is impossible to ever revisit a polytope. Thus, termination is guaranteed, but no polynomial bound on the number of iterations is available. We suspect that as in the case of the simplex algorithm for linear programming, input data resulting in an extremely large number of iterations may exist, but would have a delicate structure that is unlikely to arise in practice.
3.3 FURTHER COMMON LAYERS
Convolution: It is straightforward to represent a convolutional layer as a linear layer by constructing the appropriate Toeplitz matrix, and thus our method applies directly.
Batch-normalization: Batch norm layers are simply affine functions of their inputs at test-time, and the transformations they induce can be easily included in a linear layer. Our method thus applies.
Max pooling: Max-pool layers, like ReLUs, are piecewise linear functions of their inputs, so constraints coming from the equality of max-pool outputs also lead to feasible regions that are the union of polytopes. This is a simple extension left for future work due to a lack of space.
Other activation functions and architectures: Injective activation functions such as Leaky ReLU, ELU and sigmoid will not lead to additional collisions. Further, since they are not piecewise linear, a different descent algorithm would be needed. We note that our framework cannot find collisions in networks with both forward and backward flows, such as those with attention.
4 EVALUATION
In this section, we demonstrate how the methodology outlined in the previous sections for determining the robustness of a fixed feature extractor can be used in practice. Our detailed results provide insights on how the overall robustness of a neural network evolves through its different layers and is impacted by the training procedure used.
4.1 FROM COLLISION-FINDING TO LOWER BOUNDS
We find lower bounds on robust loss over the training set by minimizing a loss function over GV,N,f .
Vertices V from training data: The vertex set V is a representation of the training data. We use 2-class problems derived from MNIST (LeCun & Cortes, 1998) or Fashion MNIST (Xiao et al., 2017) as in previous work (Pydi & Jog, 2020; Bhagoji et al., 2019; 2021).
Neighborhood function N : We use the common `2-norm ball constraint (Madry et al., 2018), in which the adversary’s strength is parametrized by the radius ε of the ball.2
Fixed feature extractors f : We use the composition of the layers of convolutional and fully-connected DNNs as our fixed feature extractors (details in Appendix E). The first layer of any of these networks (before a non-linear activation is applied) behaves as a linear feature extractor. Subsequent layers behave as non-linear feature extractors. These networks are trained with standard cross-entropy loss minimization or with robust training, using either PGD-based adversarial training (Madry et al., 2018) or the TRADES loss (Zhang et al., 2019).
Edge set E from collision finding: Algorithm 2 provides a greedy, iterative method to find collisions between feature representations of pairs of points from different classes. Each successful collision is an edge in the bipartite conflict graph. Each iteration of this algorithm can be cast as a convex program in the form of a linear cone (as detailed in Appendix D). We solve this linear cone program using CVXOPT (Andersen et al., 2013). Since we are working with an `2-norm constrained adversary, we can speed up computation by projecting the inputs onto the space spanned by the right singular vectors of the first linear layer. For linear convolutional layers, we cast the convolution operation as a matrix multiplication to enable the use of the closed form derived in Section 3.1. We also experimented with a modification of the Auto-PGD (Croce & Hein, 2020) algorithm to find collisions at deeper layers with fixed budgets as done in Engstrom et al. (2019). However, this was less effective and more expensive at finding collisions, so all further results use Algorithm 2.
Computing the lower bound from the conflict graph: The conflict graph determines the set of possible output probabilities for each vertex (training data point) for the optimal classifier. Minimizing the 0 − 1 and cross-entropy losses over this graph thus results in a lower bound over these losses since any classifier must incur a larger loss than the optimal classifier, by definition. We use the method from Bhagoji et al. (2021) over the conflict graphs we derive from deeper layer representations. Results in the main body are for the cross-entropy loss (others in Appendix F).
4.2 RESULTS
Interpreting lower bound values: The lower bound on the input space representations is computed by considering all possible classifiers and is thus the best possible, while bounds for other representations are computed by restricting the space of possible classifiers. Intuitively, the latter will always be larger than or equal to the former. The key metric to understanding the robustness of feature extractors is to check how much larger the corresponding lower bounds are. The magnitude of this difference measures the contribution of that layer to the overall network’s lack of robustness.
Robustness of linear layer representations: Using the procedure outlined in Section 3.1, we find collisions after the first linear layer and construct a conflict graph for several models (Fig. 2).
2We are aware of the critiques of this constraint (Gilmer et al., 2018) and only use it to compare with previous work.
How does robustness evolve across layers? Having looked at how robust linear layer representations are, it is pertinent to consider representations derived from layers deeper in the network. Of particular interest are post-ReLU representations for models with benign (Fig. 3) and robust training (Fig. 4). We find that the first ReLU activation layer contributes significantly to an increase in the lower bound for both benign and robustly trained networks. We observe that post-ReLU representations tend to be sparse, leading to an easier search problem and a larger number of collisions. The second linear layer, on the other hand, does not lead to much additional increase in the loss. This is a property of the particular architecture we use, and a smaller linear layer is likely to lead to larger increases in loss. Finally, the second set of ReLU activations does have a measurable impact on robustness, particularly for the benign network. The impact of layer width on robustness is discussed in Appendix F.1.
How does the parametrization of robust training impact layer-wise robustness? As expected, layers extracted from a robustly trained network are more robust than their benign counterparts for corresponding values of ε. We find a significant difference between PGD-based adversarial training and TRADES in terms of the robustness of their first linear layers (Fig. 2), but this largely disappears by the second ReLU activation layer (Fig. 5). Interestingly, we observe a phenomenon where layers of a network robustly trained using higher values of εtrain can be less robust than those using a lower value, when the value of ε at which evaluation is performed is less than the higher εtrain.
Impact of linear convolutional layers on robustness: As discussed in Section 3.3, the first convolutional layer can be thought of as simply a matrix multiplication, although the size of the resulting matrix can be large. We find that since the effective dimension of the resulting features is much larger than the input dimension for the datasets we consider, the linear convolutional layer does not lead to an increase in the lower bound (Fig. 11 in Appendix).
5 RELATED WORK AND DISCUSSION
5.1 RELATED WORK
Lower bounds on robustness: For cases when the data distribution is specified, Dohmatob (2019) and Mahloujifar et al. (2019) use the ‘blowup’ property of concentrated measure spaces to determine bounds on the robust loss, given the loss attainable in the absence of an adversary.
Certification and Verification: Work on certified robustness has considered techniques for training neural networks such that the resulting models are provably robust to perturbations upper bounded by a given budget (Kolter & Wong, 2018; Raghunathan et al., 2018; Cohen et al., 2019; Li et al., 2020). Typically, these models can only be certified to be robust to small budgets. In contrast, our work provides lower bounds on the robustness of partial models which are applicable across a large range of budgets. Approaches to verifying the robustness of neural networks (Bunel et al., 2017; Tjeng et al., 2019; Gowal et al., 2018) are closely related to our work, but differ in that they consider fixed end-to-end networks while we focus on a layer-by-layer analysis, allowing us to argue about the robustness of classifiers trained on top of given feature extractors.
5.2 DISCUSSION
Implications for training robust models: Our results indicate that layers in the network that reduce the effective dimension of their incoming inputs have the largest negative impact on robustness (see further results in Appendix F.1). Two prominent examples of this are ReLU layers that reduce effective dimension by only considering non-negative outputs and fully connected layers with fewer outputs than inputs. On the other hand, linear convolutional layers do not have a negative impact on robustness. This indicates that not reducing the effective dimension of any deeper feature to be lower than that of the input data is likely to benefit robustness. Further, our results confirm that the use of larger values of εtrain does not necessarily translate to higher robustness at lower budgets. This points towards the need for algorithms that can effectively train a network to be robust to more than a single adversary at a time. Finally, we find a qualitative difference in the layers learned using PGD-training and TRADES, implying interesting learning dynamics with different robust losses.
Extending to state-of-the-art models and datasets: All of our experiments in this paper are on simple models and datasets, which demonstrate the feasibility and use of our method. However, state-of-the-art feature extractors for datasets such as ImageNet are far deeper than those considered in this paper. Thus, our algorithm would need to be made considerably faster to handle these cases. While our framework can handle skip connections, networks with attention are beyond its scope. Nevertheless, the feature extractors we consider are robust for the tasks we evaluate them on, making our results and conclusions representative.
A PROOFS
Proof of Lemma 1. Suppose that e = ((u, 1), (v,−1)) ∈ E . Then, there are some z̃ ∈ Z , ũ ∈ N(u), and ṽ ∈ N(v) such that f(ũ) = f(ṽ) = z̃. We have qu ≤ h(f(ũ))1 = h(z̃)1, qv ≤ h(f(ṽ))−1 = h(z̃)−1, and h(z̃)1 + h(z̃)−1 = 1. Combining these gives the constraint (Bq)e ≤ 1, which appears in (1).
Now, we will show that each vector q in the polytope is achievable by some h. The construction is simple: at each point in the feature space, assign the largest possible probability to class 1: let h(z̃)1 = supw:z̃∈f(N(w)) qw and h(z̃)−1 = 1 − h(z̃)1. This achieves the desired performance for examples from class 1:
inf ũ∈N(u) h(f(ũ))1 = inf ũ∈N(u) sup w:f(ũ)∈f(N(w)) qw ≥ inf ũ∈N(u) qu = qu.
For an example v in class −1 we have
inf ṽ∈N(v) h(f(ṽ))−1 = inf ṽ∈N(v) ( 1 − sup w:f(ṽ)∈f(N(w)) qw )
                     = inf ṽ∈N(v) inf w:f(ṽ)∈f(N(w)) (1 − qw)
                     = inf w:((w,1),(v,−1))∈E (1 − qw)
                     ≥ qv.
The final inequality uses the fact that q satisfies Bq ≤ 1.
B INDUCED DISTANCE IS NOT A DISTANCE ON Z
The fact that ε∗f is defined on X is essential. It is not possible to interpret the induced distance as a distance in the feature space Z . This is because f(Nε(x)), the set of features of points near x, cannot be derived from f(N0(x)), the set of features of the uncorrupted version of x: distinct choices of x may lead to the same features f(N0(x)), but f may vary more in the neighborhood of one choice of x than the other.
An example is helpful to illustrate this point. Let X = X̃ = R2, let Z = R, let Nε(x) be the closed `2 ball of radius ε around x, and let f(x0, x1) = arctan(x1/x0) for x ≠ 0 (the value that we pick for f(0, x1) is irrelevant). In other words, f finds the angle between the horizontal axis and the line containing x. The range of values that the adversary can cause f(x̃) to take depends on ‖x‖. If ‖x‖2 < ε, f(x̃) can be any angle from 0 to π, and if ‖x‖2 ≥ ε, |f(x̃)− f(x)| ≤ arcsin(ε/‖x‖2). This example also illustrates that εf is not necessarily a metric, even in cases where d is a metric. Given u, v ∈ R2, we have εf (u, αu) = 0, εf (v, αv) = 0, and εf (αu, αv) can be made arbitrarily small by selecting α to be small. Despite this, εf (u, v) will be on the order of min(‖u‖, ‖v‖). An alternative approach to studying the induced distance is to define an adversarial classification problem on Z by taking NZε (z) = f(Nε(f−1({z}))). Note that this construction only makes sense when X = X̃ and thus the domain of f is X . This is more conservative: for any feature point z it considers the worst-case x with feature z: the x in whose neighborhood f varies the most.
C DESCENT ALGORITHM FOR MULTIPLE LAYER NETWORKS
We start by repeating the notation used in Section 3.2: f = f (`) ◦ . . . ◦ f (1), where the layer i function is f (i) : Rni → Rni+1 , f (i)(z) = (k(i) + L(i)z)+. Then εf (u, v) is the value of the following optimization problem:
min ε subject to δ ∈ εB, δ′ ∈ εB, (f (`) ◦ . . . ◦ f (1))(u + δ) = (f (`) ◦ . . . ◦ f (1))(v + δ′).
Using the local linear approximation, the equality constraint becomes
diag(s(`))(k(`) + L(`)(. . . diag(s(1))(k(1) + L(1)(u + δ)) . . .))
= diag(s′(`))(k(`) + L(`)(. . . diag(s′(1))(k(1) + L(1)(v + δ′)) . . .)),
and for each 1 ≤ i ≤ `, the inequalities
(2 diag(s(i)) − I)(k(i) + L(i)(. . . diag(s(1))(k(1) + L(1)(u + δ)) . . .)) ≥ 0,
(2 diag(s′(i)) − I)(k(i) + L(i)(. . . diag(s′(1))(k(1) + L(1)(v + δ′)) . . .)) ≥ 0
must hold in order for the linear approximation to be valid. Any collision must be in a polytope with s(`) = s′(`), which is why we only needed one set of s variables in the single layer case. However, ReLU activation variables for earlier layers cannot be merged.
The main modification to the algorithm is the process of changing the s variables after each iteration. The variables for all but the final layer are allowed to change independently, while the variables in the final layer are changed together, following the same rule as in the single-layer algorithm.
Algorithm 2 Descent with midpoint initialization
Input: u, v ∈ Rn1, L ∈ Rn2×n1, k ∈ Rn2
Output: ε ∈ R, δ, δ′ ∈ Rn1
1: z ← k + (1/2)L(u + v), ε ← (1/2)‖u − v‖
2: for 0 ≤ j < n2 do
3:   sj ← 1(zj > 0)
4: end for
5: repeat
6:   εold ← ε
7:   (ε, δ, δ′, z, z′) ← ConeLP(L(1) . . . L(`), k(1) . . . k(`), u, v, s(1) . . . s(`), s′(1) . . . s′(`−1))
8:   sold ← s
9:   for 1 ≤ i ≤ ` − 1 do
10:     for 0 ≤ j < ni+1 do
11:       if z(i)j > 0 then
12:         s(i)j ← 1 − s(i)j
13:       end if
14:       if z′(i)j > 0 then
15:         s′(i)j ← 1 − s′(i)j
16:       end if
17:     end for
18:   end for
19:   for 0 ≤ j < n`+1 do
20:     if z(`)j > 0 and z′(`)j > 0 then
21:       s(`)j ← 1 − s(`)j
22:     end if
23:   end for
24: until sold = s or εold = ε
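For concreteness, a rough Python sketch of the outer loop of Algorithm 2 is given below. Here cone_lp is assumed to be available as a black box (e.g., the cone program of Appendix D extended to fixed multi-layer activation patterns), and the midpoint initialization is generalized by forward-propagating the midpoint through the layers; both of these are our own reading of the pseudocode rather than a definitive implementation.

import numpy as np

def descent_multilayer(cone_lp, Ls, ks, u, v, max_iter=100):
    # cone_lp is assumed to solve the linearized collision problem for fixed ReLU
    # patterns and to return (eps, delta, delta_p, z, z_p), where z[i] and z_p[i]
    # are the per-layer quantities that Algorithm 2 thresholds at zero.
    num_layers = len(Ls)
    s, mid = [], 0.5 * (u + v)                  # midpoint initialization, layer by layer
    for L, k in zip(Ls, ks):
        pre = k + L @ mid
        s.append((pre > 0).astype(int))
        mid = np.maximum(pre, 0)
    s_p = [si.copy() for si in s]
    eps = 0.5 * np.linalg.norm(u - v)
    delta = delta_p = None
    for _ in range(max_iter):
        eps_old = eps
        s_old = [si.copy() for si in s] + [si.copy() for si in s_p]
        eps, delta, delta_p, z, z_p = cone_lp(Ls, ks, u, v, s, s_p)
        for i in range(num_layers - 1):          # earlier layers: flip independently
            s[i][z[i] > 0] = 1 - s[i][z[i] > 0]
            s_p[i][z_p[i] > 0] = 1 - s_p[i][z_p[i] > 0]
        both = (z[-1] > 0) & (z_p[-1] > 0)       # final layer: flip only where both sides indicate it
        s[-1][both] = 1 - s[-1][both]
        s_p[-1] = s[-1]                          # the final-layer patterns are kept identical
        if eps == eps_old or all((a == b).all() for a, b in zip(s + s_p, s_old)):
            break
    return eps, delta, delta_p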
D CONVERTING COLLISION FINDING TO A LINEAR CONE PROGRAM
In this section, we demonstrate how the collision finding problem after one linear and one ReLU layer can be cast as a linear cone program.
We have a network with first layer v ↦ Lv + k where L ∈ Rn1×n0 and k ∈ Rn1. Given a pair of points (v′, v′′) ∈ R2n0 we would like to search over the space of (δ′, δ′′) ∈ R2n0 such that (L(v′ + δ′) + k)+ = (L(v′′ + δ′′) + k)+. This space is always nonempty because we can take v′ + δ′ = v′′ + δ′′ = (v′ + v′′)/2.
Let S ⊆ [n1] be the subset of active ReLUs. Let F ∈ R|S|×n1 be the inclusion indicator matrix for the subset: Fi,j = 1 if and only if j ∈ S and |S ∩ [j]| = i (there are exactly i elements of S strictly smaller than j, so j is the i + 1st element of S). Let D be the diagonal matrix with Dj,j = 1 for j ∈ S and Dj,j = −1 for j 6∈ S.
For j ∈ S (the active ReLUs), we need the constraint (L(v′ + δ′) + k)j = (L(v′′ + δ′′) + k)j . In matrix form, this is FL(v′ + δ′) = FL(v′′ + δ′′).
For j ∈ S, we need (L(v′ + δ′) + k)j ≥ 0, (L(v′′ + δ′′) + k)j ≥ 0, and for j ∉ S, we need (L(v′ + δ′) + k)j ≤ 0, (L(v′′ + δ′′) + k)j ≤ 0. In matrix form, these are D(L(v′ + δ′) + k) ≥ 0, D(L(v′′ + δ′′) + k) ≥ 0. Our objective is max(‖δ′‖2, ‖δ′′‖2). We will replace it with a linear objective by adding two additional variables (t′, t′′) satisfying t′ ≥ ‖δ′‖2 and t′′ ≥ ‖δ′′‖2. The cvx-opt system is
minimize    c⊤x
subject to  Gx + s = h
            Ax = b
            s ⪰ 0
The cones involved are a nonnegative orthant of dimension 2n1 and two second order cones, each of dimension n0 + 1. Thus s = (D(L(v′ + δ′) + k), D(L(v′′ + δ′′) + k), t′, δ′, t′′, δ′′). We then let x = (t, δ′, δ′′). The matrix relating s and x is G ∈ R(n1+n1+1+n0+1+n0)×(1+n0+n0) and s = h−Gx. The block structure is
G =
[  0   −DL    0  ]
[  0    0   −DL  ]
[ −1    0     0  ]
[  0   −I     0  ]
[ −1    0     0  ]
[  0    0    −I  ]

h =
[ D(Lv′ + k)  ]
[ D(Lv′′ + k) ]
[ 0 ]
[ 0 ]
[ 0 ]
[ 0 ]

We use Ax = b to encode FL(δ′ − δ′′) = −FL(v′ − v′′), so A ∈ R|S|×(1+n0+n0), b ∈ R|S|, and
A = ( 0   FL   −FL ),    b = −FL(v′ − v′′).
Finally,
c = ( 1 0 0 ) .
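A sketch of how these blocks can be assembled and passed to cvxopt's conelp solver is given below. The helper name, the use of numpy for the block construction, and the assumption that the active set S is supplied by the caller are ours; note that a single variable t bounds both ‖δ′‖2 and ‖δ′′‖2, matching x = (t, δ′, δ′′) above.

import numpy as np
from cvxopt import matrix, solvers

def collision_conelp(L, k, v1, v2, S):
    L = np.asarray(L, dtype=float)
    n1, n0 = L.shape
    D = np.diag([1.0 if j in S else -1.0 for j in range(n1)])
    F = np.zeros((len(S), n1))
    for i, j in enumerate(sorted(S)):          # j is the (i+1)-st element of S
        F[i, j] = 1.0
    Z, I = np.zeros((n1, n0)), np.eye(n0)
    G = np.block([
        [np.zeros((n1, 1)), -D @ L,             Z],
        [np.zeros((n1, 1)), Z,                  -D @ L],
        [-np.ones((1, 1)),  np.zeros((1, n0)),  np.zeros((1, n0))],
        [np.zeros((n0, 1)), -I,                 np.zeros((n0, n0))],
        [-np.ones((1, 1)),  np.zeros((1, n0)),  np.zeros((1, n0))],
        [np.zeros((n0, 1)), np.zeros((n0, n0)), -I],
    ])
    h = np.concatenate([D @ (L @ v1 + k), D @ (L @ v2 + k), np.zeros(2 * (n0 + 1))])
    A = np.hstack([np.zeros((len(S), 1)), F @ L, -F @ L])
    b = -F @ L @ (v1 - v2)
    c = np.concatenate([[1.0], np.zeros(2 * n0)])
    # Nonnegative orthant of dimension 2*n1, then two second-order cones of dimension n0+1.
    dims = {'l': 2 * n1, 'q': [n0 + 1, n0 + 1], 's': []}
    sol = solvers.conelp(matrix(c.reshape(-1, 1)), matrix(G), matrix(h.reshape(-1, 1)),
                         dims, matrix(A), matrix(b.reshape(-1, 1)))
    x = np.array(sol['x']).ravel()
    return x[0], x[1:1 + n0], x[1 + n0:]        # (t, delta', delta'')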
D.1 FACTORIZED L
If n1 < n0, then we can take advantage of the fact that the rank of L is at most n1. Let L = UΣV T .
Replace L(v + δ) with UΣ(V⊤v + η), where the new variable η plays the role of V⊤δ, and replace n0 with the size of Σ.
New G and A:
G =
[  0   −DUΣ    0   ]
[  0     0   −DUΣ  ]
[ −1     0     0   ]
[  0    −I     0   ]
[ −1     0     0   ]
[  0     0    −I   ]

A = ( 0   FUΣ   −FUΣ )

Also, the length of c is reduced. The constant vectors h and b are unchanged.
E FURTHER EXPERIMENTAL DETAILS
We consider the following two architectures for our experiments:
1. 3-layer fully connected neural network (FCNN): 300 FC - ReLU - 200 FC - ReLU - 2 FC
2. 4-layer convolutional neural network (CNN): 20×5×5 conv. - BN - ReLU - 2×2 MaxPool - 50×5×5 conv. - BN - ReLU - 2×2 MaxPool - 500 FC - 2 FC
We also construct a ‘Small’ and ‘Smaller’ version of the FCNN with layers that are 2× and 4× narrower respectively.
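For reference, the two architectures can be written in PyTorch roughly as follows. This is a sketch: the input is assumed to be a 1×28×28 grayscale image (the input size is not stated in this appendix), and no activation is inserted between the last two FC layers of the CNN since none is listed above.

import torch.nn as nn

def fcnn(width_divisor=1):
    # width_divisor = 2 or 4 gives the 'Small' / 'Smaller' variants (layers 2x / 4x narrower).
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 300 // width_divisor), nn.ReLU(),
        nn.Linear(300 // width_divisor, 200 // width_divisor), nn.ReLU(),
        nn.Linear(200 // width_divisor, 2),
    )

def cnn():
    return nn.Sequential(
        nn.Conv2d(1, 20, 5), nn.BatchNorm2d(20), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(20, 50, 5), nn.BatchNorm2d(50), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(50 * 4 * 4, 500),
        nn.Linear(500, 2),
    )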
F ADDITIONAL RESULTS
F.1 IMPACT OF REDUCTION OF NETWORK SIZE
In Figure 6, we compare the robustness of representations obtained from fully connected networks with decreasing layer sizes. The ‘Regular’ network is the one used throughout, while the ‘Small’ and ‘Smaller’ networks have corresponding layers that are 2× and 4× narrower respectively. We can clearly see that as the width of the feature extractor decreases, so does its robustness.
F.2 0− 1 LOSS RESULTS
In Figures 7, 8, 9 and 10 we provide lower bounds on the 0 − 1 loss in the same settings as those considered in the main body of the paper. We note that the results and conclusions remain the same qualitatively.
F.3 LINEAR CONVOLUTIONAL LAYERS
In Figure 11, we can see that the representations extracted from the first linear layer of a convolutional network do not have any negative impact on the robustness of the overall model. | 1. How does the paper contribute to understanding the lower bound of robustness in deep learning models?
2. What are the strengths of the proposed approach in deriving the robustness lower bound?
3. What are the weaknesses or challenges in understanding the theoretical aspects and algorithmic implementation of the method?
4. How does the paper account for the relationship between the robustness lower bound and the underlying data distribution?
5. Are there any concerns regarding the empirical evaluation of the robust entropy compared to its theoretical value?
6. Can you provide additional explanations or clarifications regarding the algorithmic procedure in Section 3.1 and its connection to the results presented in Section 4? | Summary Of The Paper
Review | Summary Of The Paper
This paper investigates the lower bound of robustness, i.e., the best robustness we can expect under the added constraints of the model architecture. It derives an algorithm for estimating the lower bound, and empirically shows how various types of layers -- such as linear, ReLU, and convolutional layers -- can impact the lower bound.
Review
Overall I like the idea of the paper: it gives more insight into how different model architectures can affect the robustness lower bound, and thus provides some guidance on how to design a robust system.
On the other hand, I have difficulty understanding the theoretical part and the algorithmic aspect of the paper. The paper is written in a rather theoretical manner. I hope the authors can comment on the following questions so that I can understand the details better.
Does the robustness lower bound depend on the underlying data distribution? Intuitively, the collision graph shows whether two instances in the space can have the same representation after perturbation. However, it is possible that 1) u, v have a collision but 2) u, v never appear in natural samples. I don't see an explicit link to the input distribution, and therefore hope the authors can clarify what this robust entropy is measuring.
Is the robust entropy evaluated on the training dataset? What is the relation of this empirical value to the theoretical value over the distribution? I know that the full graph will only have more collisions, but the probability of each node appearing can change too. Therefore, the relation is not immediately clear to me.
In Section 3.1, the algorithm solves an optimization problem using descent. My question is, how does the outcome of this algorithm translate to the figures in Sec 4? Or rather, how is Figure 2 produced from the outcome of Algorithm 1? (Sec 4 gives a pointer to Bhagoji et al., but it would be good to summarize the key steps in this procedure.) |
ICLR | Title
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Abstract
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), do not extrapolate well in certain simple tasks, Graph Neural Networks (GNNs) – structured networks with MLP modules – have shown some success in more complex tasks. Working towards a theoretical explanation, we identify conditions under which MLPs and GNNs extrapolate well. First, we quantify the observation that ReLU MLPs quickly converge to linear functions along any direction from the origin, which implies that ReLU MLPs do not extrapolate most nonlinear functions. But, they can provably learn a linear target function when the training distribution is sufficiently “diverse”. Second, in connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features. Our theoretical analysis builds on a connection of over-parameterized networks to the neural tangent kernel. Empirically, our theory holds across different training settings.
1 INTRODUCTION
Humans extrapolate well in many tasks. For example, we can apply arithmetics to arbitrarily large numbers. One may wonder whether a neural network can do the same and generalize to examples arbitrarily far from the training data (Lake et al., 2017). Curiously, previous works report mixed extrapolation results with neural networks. Early works demonstrate feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), fail to extrapolate well when learning simple polynomial functions (Barnard & Wessels, 1992; Haley & Soloway, 1992). However, recent works show Graph Neural Networks (GNNs) (Scarselli et al., 2009), a class of structured networks with MLP building blocks, can generalize to graphs much larger than training graphs in challenging algorithmic tasks, such as predicting the time evolution of physical systems (Battaglia et al., 2016), learning graph algorithms (Velickovic et al., 2020), and solving mathematical equations (Lample & Charton, 2020).
To explain this puzzle, we formally study how neural networks trained by gradient descent (GD) extrapolate, i.e., what they learn outside the support of training distribution. We say a neural network extrapolates well if it learns a task outside the training distribution. At first glance, it may seem that neural networks can behave arbitrarily outside the training distribution since they have high capacity (Zhang et al., 2017) and are universal approximators (Cybenko, 1989; Funahashi, 1989; Hornik et al., 1989; Kurkova, 1992). However, neural networks are constrained by gradient descent training (Hardt et al., 2016; Soudry et al., 2018). In our analysis, we explicitly consider such implicit bias through the analogy of the training dynamics of over-parameterized neural networks and kernel regression via the neural tangent kernel (NTK) (Jacot et al., 2018).
Starting with feedforward networks, the simplest neural networks and building blocks of more complex architectures such as GNNs, we establish that the predictions of over-parameterized MLPs with ReLU activation trained by GD converge to linear functions along any direction from the origin. We prove a convergence rate for two-layer networks and empirically observe that convergence often occurs close to the training data (Figure 1), which suggests ReLU MLPs cannot extrapolate well for most nonlinear tasks. We emphasize that our results do not follow from the fact that ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019; Hein et al., 2019). While having finitely many linear regions implies ReLU MLPs eventually become linear, it does not say whether MLPs will learn the correct target function close to the training distribution. In contrast, our results are non-asymptotic and quantify what kind of functions MLPs will learn close to the training distribution. Second, we identify a condition when MLPs extrapolate well: the task is linear and the geometry of the training distribution is sufficiently “diverse”. To our knowledge, our results are the first extrapolation results of this kind for feedforward neural networks.
We then relate our insights into feedforward neural networks to GNNs, to explain why GNNs extrapolate well in some algorithmic tasks. Prior works report successful extrapolation for tasks that can be solved by dynamic programming (DP) (Bellman, 1966), which has a computation structure aligned with GNNs (Xu et al., 2020). DP updates can often be decomposed into nonlinear and linear steps. Hence, we hypothesize that GNNs trained by GD can extrapolate well in a DP task, if we encode appropriate non-linearities in the architecture and input representation (Figure 2). Importantly, encoding non-linearities may be unnecessary for GNNs to interpolate, because the MLP modules can easily learn many nonlinear functions inside the training distribution (Cybenko, 1989; Hornik et al., 1989; Xu et al., 2020), but it is crucial for GNNs to extrapolate correctly. We prove this hypothesis for a simplified case using Graph NTK (Du et al., 2019b). Empirically, we validate the hypothesis on three DP tasks: max degree, shortest paths, and n-body problem. We show GNNs with appropriate architecture, input representation, and training distribution can predict well on graphs with unseen sizes, structures, edge weights, and node features. Our theory explains the empirical success in previous works and suggests their limitations: successful extrapolation relies on encoding task-specific non-linearities, which requires domain knowledge or extensive model search. From a broader standpoint, our insights go beyond GNNs and apply broadly to other neural networks.
To summarize, we study how neural networks extrapolate. First, ReLU MLPs trained by GD converge to linear functions along directions from the origin with a rate of O(1/t). Second, to explain why GNNs extrapolate well in some algorithmic tasks, we prove that ReLU MLPs can extrapolate well in linear tasks, leading to a hypothesis: a neural network can extrapolate well when appropriate nonlinearities are encoded into the architecture and features. We prove this hypothesis for a simplified case and provide empirical support for more general settings.
1.1 RELATED WORK
Early works show example tasks where MLPs do not extrapolate well, e.g. learning simple polynomials (Barnard & Wessels, 1992; Haley & Soloway, 1992). We instead show a general pattern of how ReLU MLPs extrapolate and identify conditions for MLPs to extrapolate well. More recent works study the implicit biases induced on MLPs by gradient descent, for both the NTK and mean field regimes (Bietti & Mairal, 2019; Chizat & Bach, 2018; Song et al., 2018). Related to our results, some works show MLP predictions converge to “simple” piecewise linear functions, e.g., with few linear regions (Hanin & Rolnick, 2019; Maennel et al., 2018; Savarese et al., 2019; Williams et al., 2019). Our work differs in that none of these works explicitly studies extrapolation, and some focus only on one-dimensional inputs. Recent works also show that in high-dimensional settings of the NTK regime, MLP is asymptotically at most a linear predictor in certain scaling limits (Ba et al., 2020; Ghorbani et al., 2019). We study a different setting (extrapolation), and our analysis is non-asymptotic in nature and does not rely on random matrix theory.
Prior works explore GNN extrapolation by testing on larger graphs (Battaglia et al., 2018; Santoro et al., 2018; Saxton et al., 2019; Velickovic et al., 2020). We are the first to theoretically study GNN extrapolation, and we complete the notion of extrapolation to include unseen features and structures.
2 PRELIMINARIES
We begin by introducing our setting. Let X be the domain of interest, e.g., vectors or graphs. The task is to learn an underlying function g : X → R with a training set {(xi, yi)}ni=1 ⊂ D, where yi = g(xi) and D is the support of training distribution. Previous works have extensively studied in-distribution generalization where the training and the test distributions are identical (Valiant, 1984; Vapnik, 2013); i.e., D = X . In contrast, extrapolation addresses predictions on a domain X that is larger than the support of the training distribution D. We will say a model extrapolates well if it has a small extrapolation error.
Definition 1. (Extrapolation error). Let f : X → R be a model trained on {(xi, yi)}ni=1 ⊂ D with underlying function g : X → R. Let P be a distribution over X \ D and let ` : R× R→ R be a loss function. We define the extrapolation error of f as Ex∼P [`(f(x), g(x))].
We focus on neural networks trained by gradient descent (GD) or its variants with squared loss. We study two network architectures: feedforward and graph neural networks.
Graph Neural Networks. GNNs are structured networks operating on graphs with MLP modules (Battaglia et al., 2018; Xu et al., 2019). Let G = (V,E) be a graph. Each node u ∈ V has a feature vector xu, and each edge (u, v) ∈ E has a feature vector w(u,v). GNNs recursively compute node representations h(k)u at iteration k (Gilmer et al., 2017; Xu et al., 2018). Initially, h (0) u = xu. For k = 1..K, GNNs update h(k)u by aggregating the neighbor representations. We can optionally compute a graph representation hG by aggregating the final node representations. That is,
h(k)u = ∑_{v∈N(u)} MLP(k)( h(k−1)u, h(k−1)v, w(v,u) ),    hG = MLP(K+1)( ∑_{u∈G} h(K)u ).    (1)
The final output is the graph representation hG or final node representations h (K) u depending on the task. We refer to the neighbor aggregation step for h(k)u as aggregation and the pooling step in hG as readout. Previous works typically use sum-aggregation and sum-readout (Battaglia et al., 2018). Our results indicate why replacing them may help extrapolation (Section 4).
3 HOW FEEDFORWARD NEURAL NETWORKS EXTRAPOLATE
Feedforward networks are the simplest neural networks and building blocks of more complex architectures such as GNNs, so we first study how they extrapolate when trained by GD. Throughout the paper, we assume ReLU activation. Section 3.3 contains preliminary results for other activations.
3.1 LINEAR EXTRAPOLATION BEHAVIOR OF RELU MLPS
By architecture, ReLU networks learn piecewise linear functions, but what do these regions precisely look like outside the support of the training data? Figure 1 illustrates examples of how ReLU MLPs extrapolate when trained by GD on various nonlinear functions. These examples suggest that outside the training support, the predictions quickly become linear along directions from the origin. We systematically verify this pattern by linear regression on MLPs’ predictions: the coefficient of determination (R2) is always greater than 0.99 (Appendix C.2). That is, ReLU MLPs “linearize” almost immediately outside the training data range.
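This check is straightforward to reproduce. A minimal sketch (our own code, not the original experimental setup) trains a small ReLU MLP on a quadratic 1D target and fits a line to its predictions outside the training range:

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 200).unsqueeze(1)          # training support: [-1, 1]
y = x ** 2                                           # quadratic target

mlp = nn.Sequential(nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    nn.functional.mse_loss(mlp(x), y).backward()
    opt.step()

# Fit a line to predictions along one direction outside the training range.
x_out = torch.linspace(2, 6, 100).unsqueeze(1)
with torch.no_grad():
    pred = mlp(x_out).squeeze().numpy()
xs = x_out.squeeze().numpy()
slope, intercept = np.polyfit(xs, pred, 1)
resid = pred - (slope * xs + intercept)
r2 = 1 - resid.var() / pred.var()
print("R^2 of a linear fit outside the range:", r2)  # typically > 0.99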
We formalize this observation using the implicit biases of neural networks trained by GD via the neural tangent kernel (NTK): optimization trajectories of over-parameterized networks trained by GD are equivalent to those of kernel regression with a specific neural tangent kernel, under a set of assumptions called the “NTK regime” (Jacot et al., 2018). We provide an informal definition here; for further details, we refer the readers to Jacot et al. (2018) and Appendix A.
Definition 2. (Informal) A neural network trained in the NTK regime is infinitely wide, randomly initialized with certain scaling, and trained by GD with infinitesimal steps.
Prior works analyze optimization and in-distribution generalization of over-parameterized neural networks via NTK (Allen-Zhu et al., 2019a;b; Arora et al., 2019a;b; Cao & Gu, 2019; Du et al., 2019c;a; Li & Liang, 2018; Nitanda & Suzuki, 2021). We instead analyze extrapolation.
Theorem 1 formalizes our observation from Figure 1: outside the training data range, along any direction tv from the origin, the prediction of a two-layer ReLU MLP quickly converges to a linear
function with rate O(1/t). The linear coefficients βv and the constant terms in the convergence rate depend on the training data and direction v. The proof is in Appendix B.1.
Theorem 1. (Linear extrapolation). Suppose we train a two-layer ReLU MLP f : Rd → R with squared loss in the NTK regime. For any direction v ∈ Rd, let x0 = tv. As t → ∞, f(x0 + hv) − f(x0) → βv · h for any h > 0, where βv is a constant linear coefficient. Moreover, given ε > 0, for t = O(1/ε), we have |(f(x0 + hv) − f(x0))/h − βv| < ε.
ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019), hence their predictions eventually become linear. In contrast, Theorem 1 is a more fine-grained analysis of how MLPs extrapolate and provides a convergence rate. While Theorem 1 assumes two-layer networks in the NTK regime, experiments confirm that the linear extrapolation behavior happens across networks with different depths, widths, learning rates, and batch sizes (Appendix C.1 and C.2). Our proof technique potentially also extends to deeper networks.
Theorem 1 implies which target functions a ReLU MLP may be able to match outside the training data: only functions that are almost-linear along the directions away from the origin. Indeed, Figure 4a shows ReLU MLPs do not extrapolate target functions such as x⊤Ax (quadratic), ∑_{i=1}^{d} cos(2π · x(i)) (cos), and ∑_{i=1}^{d} √x(i) (sqrt), where x(i) is the i-th dimension of x. With suitable hyperparameters, MLPs extrapolate the L1 norm correctly, which satisfies the directional linearity condition.
Figure 4a provides one more positive result: MLPs extrapolate linear target functions well, across many different hyperparameters. While learning linear functions may seem very limited at first, in Section 4 this insight will help explain extrapolation properties of GNNs in non-linear practical tasks. Before that, we first theoretically analyze when MLPs extrapolate well.
3.2 WHEN RELU MLPS PROVABLY EXTRAPOLATE WELL
Figure 4a shows that MLPs can extrapolate well when the target function is linear. However, this is not always true. In this section, we show that successful extrapolation depends on the geometry of training data. Intuitively, the training distribution must be “diverse” enough for correct extrapolation.
We provide two conditions that relate the geometry of the training data to extrapolation. Lemma 1 states that over-parameterized MLPs can learn a linear target function with only 2d examples. Lemma 1. Let g(x) = β>x be the target function for β ∈ Rd. Suppose {xi}ni=1 contains an orthogonal basis {x̂i}di=1 and {−x̂i}di=1. If we train a two-layer ReLU MLP f on {(xi, yi)}ni=1 with squared loss in the NTK regime, then f(x) = β>x for all x ∈ Rd.
Lemma 1 is mainly of theoretical interest, as the 2d examples need to be carefully chosen. Theorem 2 builds on Lemma 1 and identifies a more practical condition for successful extrapolation: if the support of the training distribution covers all directions (e.g., a hypercube that covers the origin), the MLP converges to a linear target function with sufficient training data. Theorem 2. (Conditions for extrapolation). Let g(x) = β>x be the target function for β ∈ Rd. Suppose {xi}ni=1 is sampled from a distribution whose support D contains a connected subset S, where for any non-zero w ∈ Rd, there exists k > 0 so that kw ∈ S. If we train a two-layer ReLU MLP f : Rd → R on {(xi, yi)}ni=1 with squared loss in the NTK regime, f(x) p−→ β>x as n→∞.
Experiments: geometry of training data affects extrapolation. The condition in Theorem 2 formalizes the intuition that the training distribution must be “diverse” for successful extrapolation, e.g., D includes all directions. Empirically, the extrapolation error is indeed small when the condition of Theorem 2 is satisfied (“all” in Figure 4b). In contrast, the extrapolation error is much larger when the training examples are restricted to only some directions (Figure 4b and Figure 3).
Relating to previous works, Theorem 2 suggests why spurious correlations may hurt extrapolation, complementing the causality arguments (Arjovsky et al., 2019; Peters et al., 2016; Rojas-Carulla et al., 2018). When the training data has spurious correlations, some combinations of features are missing; e.g., camels might only appear in deserts in an image collection. Therefore, the condition for Theorem 2 no longer holds, and the model may extrapolate incorrectly. Theorem 2 is also analogous to an identifiability condition for linear models, but stricter. We can uniquely identify a linear function if the training data has full (feature) rank. MLPs are more expressive, so identifying the linear target function requires additional constraints.
To summarize, we analyze how ReLU MLPs extrapolate and provide two insights: (1) MLPs cannot extrapolate most nonlinear tasks due to their linear extrapolation (Theorem 1); and (2) MLPs extrapolate well when the target function is linear, if the training distribution is “diverse” (Theorem 2). In the next section, these results help us understand how more complex networks extrapolate.
3.3 MLPS WITH OTHER ACTIVATION FUNCTIONS
Before moving on to GNNs, we complete the picture of MLPs with experiments on other activation functions: tanh σ(x) = tanh(x), cosine σ(x) = cos(x) (Lapedes & Farber, 1987; McCaughan, 1997; Sopena & Alquezar, 1994), and quadratic σ(x) = x2 (Du & Lee, 2018; Livni et al., 2014). Details are in Appendix C.4. MLPs extrapolate well when the activation and target function are similar; e.g., tanh activation extrapolates well when learning tanh, but not other functions (Figure 5). Moreover, each activation function has different limitations. To extrapolate the tanh function with tanh activation, the training data range has to be sufficiently wide. When learning a quadratic function with quadratic activation, only two-layer networks extrapolate well as more layers lead to higher-order polynomials. Cosine activations are hard to optimize for high-dimensional data, so we only consider one/two dimensional cosine target functions.
4 HOW GRAPH NEURAL NETWORKS EXTRAPOLATE
Above, we saw that extrapolation in nonlinear tasks is hard for MLPs. Despite this limitation, GNNs have been shown to extrapolate well in some nonlinear algorithmic tasks, such as intuitive physics (Battaglia et al., 2016; Janner et al., 2019), graph algorithms (Battaglia et al., 2018; Velickovic et al., 2020), and symbolic mathematics (Lample & Charton, 2020). To address this discrepancy, we build on our MLP results and study how GNNs trained by GD extrapolate.
4.1 HYPOTHESIS: LINEAR ALGORITHMIC ALIGNMENT HELPS EXTRAPOLATION
We start with an example: training GNNs to solve the shortest path problem. For this task, prior works observe that a modified GNN architecture with min-aggregation can generalize to graphs larger than those in the training set (Battaglia et al., 2018; Velickovic et al., 2020):
h(k)u = min_{v∈N(u)} MLP(k)( h(k−1)u, h(k−1)v, w(v,u) ).    (2)
We first provide an intuitive explanation (Figure 2a). Shortest path can be solved by the Bellman-Ford (BF) algorithm (Bellman, 1958) with the following update:
d[k][u] = min_{v∈N(u)} d[k − 1][v] + w(v, u),    (3)
where w(v, u) is the weight of edge (v, u), and d[k][u] is the shortest distance to node u within k steps. The two equations can be easily aligned: GNNs simulate the BF algorithm if its MLP modules learn a linear function d[k−1][v]+w(v, u). Since MLPs can extrapolate linear tasks, this “alignment” may explain why min-aggregation GNNs can extrapolate well in this task.
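The alignment can be made concrete in a few lines of code: the min-aggregation GNN step below reproduces the Bellman-Ford update exactly when its MLP module computes the linear function (h_v, w_vu) ↦ h_v + w_vu. This is an illustrative sketch with our own notation, using a complete graph with zero self-weights in place of explicit neighborhoods.

import numpy as np

def bellman_ford_step(d, W):
    # d[v]: current shortest-distance estimate; W[v, u]: weight of edge (v, u).
    n = len(d)
    return np.array([min(d[v] + W[v, u] for v in range(n)) for u in range(n)])

def min_aggregation_gnn_step(h, W, mlp):
    # One GNN layer with min-aggregation (Eqn. 2); mlp maps (h_v, w_vu) to a scalar message.
    n = len(h)
    return np.array([min(mlp(h[v], W[v, u]) for v in range(n)) for u in range(n)])

linear_mlp = lambda h_v, w: h_v + w      # the linear function the MLP module needs to learn
W = np.array([[0, 1, 4], [1, 0, 2], [4, 2, 0]], dtype=float)
d = np.array([0.0, np.inf, np.inf])      # source node 0
h = d.copy()
for _ in range(2):
    d = bellman_ford_step(d, W)
    h = min_aggregation_gnn_step(h, W, linear_mlp)
print(d)                                  # [0, 1, 3]
print(h)                                  # identical to the Bellman-Ford iterates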
For comparison, we can reason why we would not expect GNNs with the more commonly used sum-aggregation (Eqn. 1) to extrapolate well in this task. With sum-aggregation, the MLP modules need to learn a nonlinear function to simulate the BF algorithm, but Theorem 1 suggests that they will not extrapolate most nonlinear functions outside the training support.
We can generalize the above intuition to other algorithmic tasks. Many tasks where GNNs extrapolate well can be solved by dynamic programming (DP) (Bellman, 1966), an algorithmic paradigm with a recursive structure similar to GNNs’ (Eqn. 1) (Xu et al., 2020).
Definition 3. Dynamic programming (DP) is a recursive procedure with updates
Answer[k][s] = DP-Update({Answer[k − 1][s′]} , s′ = 1...n), (4)
where Answer[k][s] is the solution to a sub-problem indexed by iteration k and state s, and DP-Update is a task-specific update function that solves the sub-problem based on the previous iteration.
From a broader standpoint, we hypothesize that: if we encode appropriate non-linearities into the model architecture and input representations so that the MLP modules only need to learn nearly linear steps, then the resulting neural network can extrapolate well.
Hypothesis 1. (Linear algorithmic alignment). Let f : X → R be the underlying function and N a neural network with m MLP modules. Suppose there exist m linear functions {gi}_{i=1}^{m} so that by replacing N ’s MLP modules with gi’s, N simulates f . Given ε > 0, there exists {(xi, f(xi))}_{i=1}^{n} ⊂ D ⊊ X so that N trained on {(xi, f(xi))}_{i=1}^{n} by GD with squared loss learns f̂ with ‖f̂ − f‖ < ε.
Our hypothesis builds on the algorithmic alignment framework of (Xu et al., 2020), which states that a neural network interpolates well if the modules are “aligned” to easy-to-learn (possibly nonlinear) functions. Successful extrapolation is harder: the modules need to align with linear functions.
Applications of linear algorithmic alignment. In general, linear algorithmic alignment is not restricted to GNNs and applies broadly to neural networks. To satisfy the condition, we can encode appropriate nonlinear operations in the architecture or input representation (Figure 2). Learning DP algorithms with GNNs is one example of encoding non-linearity in the architecture (Battaglia et al., 2018; Corso et al., 2020). Another example is to encode log-and-exp transforms in the architecture to help extrapolate multiplication in arithmetic tasks (Trask et al., 2018; Madsen & Johansen, 2020). Neural symbolic programs take a step further and encode a library of symbolic operations to help extrapolation (Johnson et al., 2017; Mao et al., 2019; Yi et al., 2018).
For some tasks, it may be easier to change the input representation (Figure 2b). Sometimes, we can decompose the target function f as f = g ◦ h into a feature embedding h and a “simpler” target function g that our model can extrapolate well. We can obtain h via specialized features or feature transforms using domain knowledge (Lample & Charton, 2020; Webb et al., 2020), or via representation learning (e.g., BERT) with unlabeled out-of-distribution data in X \ D (Chen et al., 2020; Devlin et al., 2019; Hu et al., 2020; Mikolov et al., 2013b; Peters et al., 2018). This brings a new perspective of how representations help extrapolation in various application areas. For example, in natural language processing, pretrained representations (Mikolov et al., 2013a; Wu & Dredze, 2019) and feature transformation using domain knowledge (Yuan et al., 2020; Zhang et al., 2019) help models generalize across languages, a special type of extrapolation. In quantitative finance, identifying the right “factors” or features is crucial for deep learning models as the financial markets may frequently be in extrapolation regimes (Banz, 1981; Fama & French, 1993; Ross, 1976).
Linear algorithmic alignment explains successful extrapolation in the literature and suggests that extrapolation is harder in general: encoding appropriate non-linearity often requires domain expertise or model search. Next, we provide theoretical and empirical support for our hypothesis.
4.2 THEORETICAL AND EMPIRICAL SUPPORT
We validate our hypothesis on three DP tasks: max degree, shortest path, and n-body problem, and prove the hypothesis for max degree. We highlight the role of graph structures in extrapolation.
Theoretical analysis. We start with a simple yet fundamental task: learning the max degree of a graph, a special case of DP with one iteration. As a corollary of Theorem 1, the commonly used sum-based GNN (Eqn. 1) cannot extrapolate well (proof in Appendix B.4).
Corollary 1. GNNs with sum-aggregation and sum-readout do not extrapolate well in Max Degree.
To achieve linear algorithmic alignment, we can encode the only non-linearity, the max function, in the readout. Theorem 3 confirms that a GNN with max-readout can extrapolate well in this task. Theorem 3. (Extrapolation with GNNs). Assume all nodes have the same feature. Let g and g′ be the max/min degree function, respectively. Let {(Gi, g(Gi))}_{i=1}^{n} be the training set. If {(g(Gi), g′(Gi), g(Gi) · N_i^max, g′(Gi) · N_i^min)}_{i=1}^{n} spans R4, where N_i^max and N_i^min are the number of nodes that have max/min degree on Gi, then one-layer max-readout GNNs trained on {(Gi, g(Gi))}_{i=1}^{n} with squared loss in the NTK regime learn g.
Theorem 3 does not follow immediately from Theorem 2, because MLP modules in GNNs only receive indirect supervision. We analyze the Graph NTK (Du et al., 2019b) to prove Theorem 3 in Appendix B.5. While Theorem 3 assumes identical node features, we empirically observe similar results for both identical and non-identical features (Figure 16 in Appendix).
Interpretation of conditions. The condition in Theorem 3 is analogous to that in Theorem 2. Both theorems require diverse training data, measured by graph structure in Theorem 3 or directions in Theorem 2. In Theorem 3, the condition is violated if all training graphs have the same max or min node degrees, e.g., when training data are from one of the following families: path, C-regular graphs (regular graphs with degree C), cycle, and ladder.
Experiments: architectures that help extrapolation. We validate our theoretical analysis with two DP tasks: max degree and shortest path (details in Appendix C.5 and C.6). While previous works only test on graphs with different sizes (Battaglia et al., 2018; Velickovic et al., 2020), we also test on graphs with unseen structure, edge weights and node features. The results support our theory. For max degree, GNNs with max-readout are better than GNNs with sum-readout (Figure 6a), confirming Corollary 1 and Theorem 3. For shortest path, GNNs with min-readout and min-aggregation are better than GNNs with sum-readout (Figure 6a).
Experiments confirm the importance of training graphs structure (Figure 7). Interestingly, the two tasks favor different graph structure. For max degree, as Theorem 3 predicts, GNNs extrapolate well when trained on trees, complete graphs, expanders, and general graphs, and extrapolation errors are
higher when trained on 4-regular, cycles, or ladder graphs. For shortest path, extrapolation errors follow a U-shaped curve as we change the sparsity of training graphs (Figure 7b and Figure 18 in Appendix). Intuitively, models trained on sparse or dense graphs likely learn degenerative solutions.
Experiments: representations that help extrapolation. Finally, we show a good input representation helps extrapolation. We study the n-body problem (Battaglia et al., 2016; Watters et al., 2017) (Appendix C.7), that is, predicting the time evolution of n objects in a gravitational system. Following previous work, the input is a complete graph where the nodes are the objects (Battaglia et al., 2016). The node feature for u is the concatenation of the object’s mass mu, position x (t) u , and velocity v (t) u at time t. The edge features are set to zero. We train GNNs to predict the velocity of each object u at time t+ 1. The true velocity f(G;u) for object u is approximately
f(G;u) ≈ v_u^t + a_u^t · dt,    a_u^t = C · ∑_{v≠u} ( m_v / ‖x_u^t − x_v^t‖_2^3 ) · ( x_v^t − x_u^t ),    (5)
where C is a constant. To learn f , the MLP modules need to learn a nonlinear function. Therefore, GNNs do not extrapolate well to unseen masses or distances (“original features” in Figure 6b). We instead use an improved representation h(G) to encode non-linearity. At time t, we transform the edge features of (u, v) from zero to w^(t)_{(u,v)} = m_v · ( x^(t)_v − x^(t)_u ) / ‖x^(t)_u − x^(t)_v‖_2^3. The new edge features do not add information, but the MLP modules now only need to learn linear functions, which helps extrapolation (“improved features” in Figure 6b).
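A sketch of the feature transform (our own code): the improved edge features are precomputed from masses and positions so that the true velocity update of Eqn. 5 becomes linear in the node and edge inputs.

import numpy as np

def improved_edge_features(m, x):
    # m: (n,) masses; x: (n, d) positions at time t.
    # w[u, v] = m_v * (x_v - x_u) / ||x_u - x_v||^3, the term the MLP module
    # would otherwise have to learn nonlinearly (Eqn. 5).
    n = len(m)
    w = np.zeros((n, n, x.shape[1]))
    for u in range(n):
        for v in range(n):
            if u != v:
                diff = x[v] - x[u]
                w[u, v] = m[v] * diff / np.linalg.norm(diff) ** 3
    return w

def true_velocity_update(m, x, v, C, dt):
    a = C * improved_edge_features(m, x).sum(axis=1)   # acceleration of each object
    return v + a * dt                                   # Eqn. 5

m = np.array([1.0, 2.0, 0.5])
x = np.random.randn(3, 2)
v = np.zeros((3, 2))
print(true_velocity_update(m, x, v, C=1.0, dt=0.01))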
5 CONNECTIONS TO OTHER OUT-OF-DISTRIBUTION SETTINGS
We discuss several related settings. Intuitively, from the viewpoint of our results above, methods in related settings may improve extrapolation by 1) learning useful non-linearities beyond the training data range and 2) mapping relevant test data to the training data range.
Domain adaptation studies generalization to a specific target domain (Ben-David et al., 2010; Blitzer et al., 2008; Mansour et al., 2009). Typical strategies adjust the training process: for instance, use unlabeled samples from the target domain to align the target and source distributions (Ganin et al., 2016; Zhao et al., 2018). Using target domain data during training may induce useful non-linearities and may mitigate extrapolation by matching the target and source distributions, though the correctness of the learned mapping depends on the label distribution (Zhao et al., 2019).
Self-supervised learning on a large amount of unlabeled data can learn useful non-linearities beyond the labeled training data range (Chen et al., 2020; Devlin et al., 2019; He et al., 2020; Peters et al., 2018). Hence, our results suggest an explanation why pre-trained representations such as BERT improve out-of-distribution robustness (Hendrycks et al., 2020). In addition, self-supervised learning could map semantically similar data to similar representations, so some out-of-domain examples might fall inside the training distribution after the mapping.
Invariant models aim to learn features that respect specific invariances across multiple training distributions (Arjovsky et al., 2019; Rojas-Carulla et al., 2018; Zhou et al., 2021). If the model indeed learns these invariances, which can happen in the linear case and when there are confounders or anti-causal variables (Ahuja et al., 2021; Rosenfeld et al., 2021), this may essentially increase the training data range, since variations in the invariant features may be ignored by the model.
Distributional robustness considers small adversarial perturbations of the data distribution, and ensures that the model performs well under these (Goh & Sim, 2010; Sagawa et al., 2020; Sinha et al., 2018; Staib & Jegelka, 2019). We instead look at more global perturbations. Still, one would expect that modifications that help extrapolation in general also improve robustness to local perturbations.
6 CONCLUSION
This paper is an initial step towards formally understanding how neural networks trained by gradient descent extrapolate. We identify conditions under which MLPs and GNNs extrapolate as desired. We also suggest an explanation how GNNs have been able to extrapolate well in complex algorithmic tasks: encoding appropriate non-linearity in architecture and features can help extrapolation. Our results and hypothesis agree with empirical results, in this paper and in the literature.
ACKNOWLEDGMENTS
We thank Ruosong Wang, Tianle Cai, Han Zhao, Yuichi Yoshida, Takuya Konishi, Toru Lin, Weihua Hu, Matt J. Staib, Yichao Zhou, Denny Wu, Tianyi Yang, and Dingli (Leo) Yu for insightful discussions. This research was supported by NSF CAREER award 1553284, NSF III 1900933, and a Chevron-MIT Energy Fellowship. This research was also supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. MZ was supported by ODNI, IARPA, via the BETTER Program contract 2019-19051600005. The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Department of Defense, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
A THEORETICAL BACKGROUND
In this section, we introduce theoretical background on neural tangent kernel (NTK), which draws an equivalence between the training dynamics of infinitely-wide (or ultra-wide) neural networks and that of kernel regression with respect to the neural tangent kernel.
Consider a general neural network f(θ,x) : X → R where θ ∈ Rm is the parameters in the network and x ∈ X is the input. Suppose we train the neural network by minimizing the squared loss over training data, `(θ) = 12 ∑n i=1(f(θ,xi)−yi)2, by gradient descent with infinitesimally small learning rate, i.e., dθ(t)dt = −∇`(θ(t)). Let u(t) = (f(θ(t),xi)) n i=1 be the network outputs. u(t) follows the dynamics
du(t)/dt = −H(t)(u(t) − y),    (6)
where H(t) is an n × n matrix whose (i, j)-th entry is
H(t)_{ij} = 〈 ∂f(θ(t), xi)/∂θ , ∂f(θ(t), xj)/∂θ 〉.    (7)
A line of works shows that for sufficiently wide networks, H(t) stays almost constant during training, i.e., H(t) = H(0) in the limit (Arora et al., 2019a;b; Allen-Zhu et al., 2019a; Du et al., 2019c;a; Li & Liang, 2018; Jacot et al., 2018). Suppose network parameters are randomly initialized with certain scaling; as network width goes to infinity, H(0) converges to a fixed matrix, the neural tangent kernel (NTK) (Jacot et al., 2018):
NTK(x, x′) = E_{θ∼W} 〈 ∂f(θ(t), x)/∂θ , ∂f(θ(t), x′)/∂θ 〉,    (8)
where W is Gaussian. Therefore, the learning dynamics of sufficiently wide neural networks in this regime is equivalent to that of kernel gradient descent with respect to the NTK. This implies the function learned by a neural network at convergence on any specific training set, denoted by fNTK(x), can be precisely characterized, and is equivalent to the following kernel regression solution
fNTK(x) = (NTK(x, x1), ..., NTK(x, xn)) · NTKtrain^{−1} Y,    (9)
where NTKtrain is the n × n kernel for training data, NTK(x, xi) is the kernel value between test data x and training data xi, and Y is the training labels.
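In code, Eqn. 9 is ordinary kernel regression with the NTK as the kernel. The sketch below is our own helper; any NTK formula, e.g., the graph NTK of Du et al. (2019b), can be plugged in as the kernel argument, and the small ridge term is added purely for numerical stability.

import numpy as np

def ntk_regression(kernel, X_train, y_train, X_test, reg=1e-8):
    # Eqn. 9: f(x) = (NTK(x, x_1), ..., NTK(x, x_n)) @ NTK_train^{-1} @ y.
    # `kernel(A, B)` is assumed to return the Gram matrix between rows of A and B.
    K_train = kernel(X_train, X_train) + reg * np.eye(len(X_train))
    K_test = kernel(X_test, X_train)
    return K_test @ np.linalg.solve(K_train, y_train)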
We can in fact exactly calculate the neural tangent kernel matrix for certain architectures and activation functions. The exact formula of NTK with ReLU activation has been derived for feedforward neural networks (Jacot et al., 2018), convolutional neural networks (Arora et al., 2019b), and Graph Neural Networks (Du et al., 2019b).
Our theory builds upon this equivalence of network learning and kernel regression to more precisely characterize the function learned by a sufficiently-wide neural network given any specific training set. In particular, the difference between the learned function and true function over the domain of X determines the extrapolation error.
However, in general it is non-trivial to compute or analyze the functional form of what a neural network learns using Eqn. 9, because the kernel regression solution using neural tangent kernel only gives point-wise evaluation. Thus, we instead analyze the function learned by a network in the NTK’s induced feature space, because representations in the feature space would give a functional form.
Lemma 2 makes this connection more precise: the solution to the kernel regression using neural tangent kernel, which also equals over-parameterized network learning, is equivalent to a min-norm solution among functions in the NTK’s induced feature space that fits all training data. Here the min-norm refers to the RKHS norm. Lemma 2. Let φ(x) be a feature map induced by a neural tangent kernel, for any x ∈ Rd. The solution to kernel regression Eqn. 9 is equivalent to fNTK(x) = φ(x)>βNTK, where βNTK is
min_β ‖β‖_2    s.t.  φ(xi)⊤β = yi,  for i = 1, ..., n.
We prove Lemma 2 in Appendix B.6. To analyze the learned functions as the min-norm solution in feature space, we also need the explicit formula of an induced feature map of the corresponding neural tangent kernel. The following lemma gives a NTK feature space for two-layer MLPs with ReLU activation. It follows easily from the kernel formula described in Jacot et al. (2018); Arora et al. (2019b); Bietti & Mairal (2019). Lemma 3. An infinite-dimensional feature map φ(x) induced by the neural tangent kernel of a two-layer multi-layer perceptron with ReLU activation function is
φ(x) = c ( x · I(w(k)⊤x ≥ 0),  w(k)⊤x · I(w(k)⊤x ≥ 0),  ... ),    (10)
where w(k) ∼ N (0, I), with k going to infinity. c is a constant, and I is the indicator function.
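A finite-sample approximation of this feature map is easy to write down; inner products of the features below approximate the two-layer ReLU NTK up to a constant scaling (a sketch with our own variable names, using a finite number of sampled directions in place of the infinite-width limit).

import numpy as np

def ntk_features(X, num_w=10000, seed=0):
    # Finite-sample approximation of the feature map in Eqn. 10: for each sampled
    # direction w, the features are x * 1{w^T x >= 0} and (w^T x) * 1{w^T x >= 0}.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_w, X.shape[1]))
    act = (X @ W.T >= 0).astype(float)                 # (n, num_w) indicators
    feats_x = act[:, :, None] * X[:, None, :]          # (n, num_w, d)
    feats_wx = act * (X @ W.T)                         # (n, num_w)
    phi = np.concatenate([feats_x.reshape(len(X), -1), feats_wx], axis=1)
    return phi / np.sqrt(num_w)                        # the constant c is absorbed here

X = np.random.randn(5, 3)
phi = ntk_features(X)
print((phi @ phi.T).round(3))                          # approximate NTK Gram matrix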
We prove Lemma 3 in Appendix B.7. The feature maps for other architectures, e.g., Graph Neural Networks (GNNs) can be derived similarly. We analyze the Graph Neural Tangent Kernel (GNTK) for a simple GNN architecture in Theorem 3.
We then use Lemma 2 and 3 to characterize the properties of functions learned by an overparameterized neural network. We precisely characterize the neural networks’ learned functions in the NTK regime via solving the constrained optimization problem corresponding to the min-norm function in NTK feature space with the constraint of fitting the training data.
However, there still remain many technical challenges. For example, provable extrapolation (exact or asymptotic) is often not achieved with most training data distributions. Understanding the desirable condition requires significant insights into the geometric properties of the training data distribution, and how they interact with the solution learned by neural networks. Our insights and refined analysis show that in Rd, we need to consider the directions of the training data. In graphs, we need to consider, in addition, the graph structure of the training data. We refer readers to the detailed proofs for the intuition behind the data conditions. Moreover, since the NTK corresponds to infinitely wide neural networks, the feature space is of infinite dimension. The analysis of infinite dimensional spaces poses non-trivial technical challenges too.
Since different theorems have their respective challenges and insights/techniques, we refer the interested readers to the respective proofs for details. In Lemma 1 (proof in Appendix B.2), Theorem 2 (proof in Appendix B.3), and Theorem 1 (proof in Appendix B.1) we analyze over-parameterized MLPs. The proof of Corollary 1 is in Appendix B.4. In Theorem 3 we analyze Graph Neural Networks (proof in Appendix B.5).
B PROOFS
B.1 PROOF OF THEOREM 1
To show neural network outputs f(x) converge to a linear function along all directions v, we will analyze the function learned by a neural network on the training set {(xi, yi)}ni=1, by studying the functional representation in the network’s neural tangent kernel RKHS space.
Recall from Section A that in the NTK regime, i.e., networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of the neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel.
For any x ∈ Rd, the network output is given by f(x) = (〈 φ(x), φ(x1) 〉 , ..., 〈 φ(x), φ(xn) 〉) · NTK−1trainY ,
where NTKtrain is the n× n kernel for training data, 〈 φ(x), φ(xi) 〉 is the kernel value between test data x and training data xi, and Y is training labels. By Lemma 2, the kernel regression solution is also equivalent to the min-norm solution in the NTK RKHS space that fits all training data
f(x) = φ(x)>βNTK, (11) where the representation coefficient βNTK is
min β ‖β‖2
s.t. φ(xi)>β = yi, for i = 1, ..., n.
The feature map φ(x) for a two-layer MLP with ReLU activation is given by Lemma 3 φ (x) = c′ ( x · I ( w(k) > x ≥ 0 ) ,w(k) > x · I ( w(k) > x ≥ 0 ) , ... ) , (12)
where w(k) ∼ N (0, I), with k going to infinity. c′ is a constant, and I is the indicator function. Without loss of generality, we assume the bias term to be 1. For simplicity of notation, we denote each data point x with the bias term appended by x̂ = [x|1] (Bietti & Mairal, 2019), and assume the constant term is 1. Given any direction v on the unit sphere, the network outputs for out-of-distribution data x0 = tv and x = x0 + hv = (1 + λ)x0, where we introduce the notation of x and λ for convenience, are given by Eqn. 11 and Eqn. 12
f(x̂0) =β > NTK ( x̂0 · I ( w(k) > x̂0 ≥ 0 ) ,w(k) > x̂0 · I ( w(k) > x̂0 ≥ 0 ) , ... ) ,
f(x̂) =β>NTK
( x̂ · I ( w(k) > x̂ ≥ 0 ) ,w(k) > x̂ · I ( w(k) > x̂ ≥ 0 ) , ... ) ,
where we have x̂0 = [x0|1] and x̂ = [(1 + λ)x0|1]. It follows that f(x̂)− f(x̂0) = β>NTK ( x̂ · I ( w(k) > x̂ ≥ 0 ) − x̂0 · I ( w(k) > x̂0 ≥ 0 ) , (13)
w(k) > x̂ · I ( w(k) > x̂ ≥ 0 ) −w(k) > x̂0 · I ( w(k) > x̂0 ≥ 0 ) , ... ) (14)
By re-arranging the terms, we get the following equivalent form of the entries: x̂ · I ( w>x̂ ≥ 0 ) − x̂0 · I ( w>x̂0 ≥ 0 ) (15)
= x̂ · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 ) + I ( w>x̂0 ≥ 0 )) − x̂0 · I ( w>x̂0 ≥ 0 ) (16)
= x̂ · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) + (x̂− x̂0) · I ( w>x̂0 ≥ 0 ) (17)
= [x|1] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) + [hv|0] · I ( w>x̂0 ≥ 0 ) (18)
Similarly, we have w>x̂ · I ( w>x̂ ≥ 0 ) −w>x̂0 · I ( w>x̂0 ≥ 0 ) (19)
= w>x̂ · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 ) + I ( w>x̂0 ≥ 0 )) −w>x̂0 · I ( w>x̂0 ≥ 0 ) (20)
= w>x̂ · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) +w> (x̂− x̂0) · I ( w>x̂0 ≥ 0 ) (21)
= w> [x|1] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) +w>[hv|0] · I ( w>x̂0 ≥ 0 ) (22)
Again, let us denote the part of βNTK corresponding to each w by βw. Moreover, let us denote the part corresponding to Eqn. 18 by β1w and the part corresponding to Eqn. 22 by β 2 w. Then we have
f(x̂)− f(x̂0) h
(23)
= ∫ β1 > w [x/h|1/h] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) dP(w) (24)
+ ∫ β1 > w [v|0] · I ( w>x̂0 ≥ 0 ) dP(w) (25)
+ ∫ β2w ·w> [x/h|1/h] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) dP(w) (26)
+ ∫ β2w ·w>[v|0] · I ( w>x̂0 ≥ 0 ) dP(w) (27)
Note that all βw are finite constants that depend on the training data. Next, we show that as t → ∞, each of the terms above converges in O(1/ε) to some constant coefficient βv that depends on the training data and the direction v. Let us first consider Eqn. 25. We have
∫ I(w⊤x̂0 ≥ 0) dP(w) = ∫ I(w⊤[x0|1] ≥ 0) dP(w)    (28)
= ∫ I(w⊤[x0/t | 1/t] ≥ 0) dP(w)    (29)
→ ∫ I(w⊤[v|0] ≥ 0) dP(w)  as t → ∞    (30)
Because β1w are finite constants, it follows that∫ β1 > w [v|0] · I ( w>x̂0 ≥ 0 ) dP(w)→ ∫ β1 > w [v|0] · I ( w>[v|0] ≥ 0 ) dP(w), (31)
where the right hand side is a constant that depends on the training data and direction v. Next, we show the convergence rate for Eqn. 31. Given error ε > 0, because the β1w⊤[v|0] are finite constants, we need to bound the following by C · ε for some constant C,
| ∫ I ( w>x̂0 ≥ 0 ) − I ( w>[v|0] ≥ 0 ) dP(w)| (32)
= | ∫ I ( w>[x0|1] ≥ 0 ) − I ( w>[x0|0] ≥ 0 ) dP(w)| (33)
Observe that the two terms in Eqn. 33 represent the volume of half-(balls) that are orthogonal to vectors [x0|1] and [x0|0]. Hence, Eqn. 33 is the volume of the non-overlapping part of the two (half)balls, which is created by rotating an angle θ along the last coordinate. By symmetry, Eqn. 33 is linear in θ. Moreover, the angle θ = arctan(C/t) for some constant C. Hence, it follows that
| ∫ I ( w>[x0|1] ≥ 0 ) − I ( w>[x0|0] ≥ 0 ) dP(w)| = C1 · arctan(C2/t) (34)
≤ C1 · C2/t (35) = O(1/t) (36)
In the last inequality, we used the fact that arctan x < x for x > 0. Hence, O(1/t) < ε implies t = O(1/ε) as desired. Next, we consider Eqn. 24.
∫ β1w⊤ [x/h | 1/h] · ( I(w⊤x̂ ≥ 0) − I(w⊤x̂0 ≥ 0) ) dP(w)    (37)
Let us first analyze the convergence of the following: | ∫ I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 ) dP(w)| (38)
= | ∫ I ( w>[(1 + λ)x0|1] ≥ 0 ) − I ( w>[x0|1] ≥ 0 ) dP(w)dP(w)| (39)
= | ∫ I ( w>[x0| 1
1 + λ ] ≥ 0
) − I ( w>[x0|1] ≥ 0 ) dP(w)dP(w)| → 0 (40)
The convergence to 0 follows from Eqn. 34. Now we consider the convergence rate. The angle θ is at most 1− 11+λ times of that in Eqn. 34. Hence, the rate is as follows(
1− 1 1 + λ
) ·O ( 1
t
) = λ 1 + λ ·O ( 1 t ) = h/t 1 + h/t ·O ( 1 t ) = O ( h (h+ t)t ) (41)
Now we get back to Eqn. 24, which simplifies as the following.∫ β1 >
w
[ v + tv
h | 1 h
] · ( I ( w>x̂ ≥ 0 ) − I ( w>x̂0 ≥ 0 )) dP(w) (42)
We compare the rate of growth of left hand side and the rate of decrease of right hand side (indicators).
t h · h (h+ t)t = 1 h+ t → 0 as t→∞ (43)
1 h · h (h+ t)t =
1
(h+ t)t → 0 as t→∞ (44)
Hence, the indicators decrease faster, and it follows that Eqn. 24 converges to 0 with rate O(1/ε). Moreover, we can bound w with standard concentration techniques. Then the proofs for Eqn. 26 and Eqn. 27 follow similarly. This completes the proof.
B.2 PROOF OF LEMMA 1
Overview of proof. To prove exact extrapolation given the conditions on training data, we analyze the function learned by the neural network in a functional form. The network’s learned function can be precisely characterized by a solution in the network’s neural tangent kernel feature space which has a minimum RKHS norm among functions that can fit all training data, i.e., it corresponds to the optimum of a constrained optimization problem. We show that the global optimum of this constrained optimization problem, given the conditions on training data, is precisely the same function as the underlying true function.
Setup and preparation. LetX = {x1, ...,xn} and Y = {y1, ..., yn} denote the training set input features and their labels. Let βg ∈ Rd denote the true parameters/weights for the underlying linear function g, i.e.,
g(x) = β>g x for all x ∈ Rd
Recall from Section A that in the NTK regime, where networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of a neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel. Moreover, Lemma 2 tells us that this kernel regression solution can be expressed in the functional form in the neural tangent kernel’s feature space. That is, the function learned by the neural network (in the ntk regime) can be precisely characterized as
f(x) = φ(x)>βNTK,
where the representation coefficient βNTK is
min β ‖β‖2 (45)
s.t. φ(xi)>β = yi, for i = 1, ..., n. (46)
An infinite-dimensional feature map φ(x) for a two-layer ReLU network is described in Lemma 3 φ (x) = c′ ( x · I ( w(k) > x ≥ 0 ) ,w(k) > x · I ( w(k) > x ≥ 0 ) , ... ) ,
where w(k) ∼ N (0, I), with k going to infinity. c′ is a constant, and I is the indicator function. That is, there are infinitely many directionsw with Gaussian density, and each direction comes with two features. Without loss of generality, we can assume the scaling constant to be 1.
Constrained optimization in NTK feature space. The representation or weight of the neural network’s learned function in the neural tangent kernel feature space, βNTK, consists of weight vectors for each x · I ( w(k) > x ≥ 0 ) ∈ Rd and w(k)>x · I ( w(k) > x ≥ 0 ) ∈ R. For simplicity
of notation, we will use w to refer to a particular w, without considering the index (k), which does not matter for our purposes. For any w ∈ Rd, we denote by β̂w = (β̂(1)w, ..., β̂(d)w) ∈ Rd the weight vector corresponding to x · I(w⊤x ≥ 0), and denote by β̂′w ∈ R the weight for
w>x · I ( w>x ≥ 0 ) .
Observe that for any w ∼ N (0, I) ∈ Rd, any other vectors in the same direction will activate the same set of xi ∈ Rd. That is, if w>xi ≥ 0 for any w ∈ Rd, then (k ·w)>xi ≥ 0 for any k > 0. Hence, we can reload our notation to combine the effect of weights for w’s in the same direction. This enables simpler notations and allows us to change the distribution of w in NTK features from Gaussian distribution to uniform distribution on the unit sphere.
More precisely, we reload our notation by using βw and β′w to denote the combined effect of all weights (β̂(1)kw, ..., β̂ (d) kw) ∈ Rd and β̂′kw ∈ R for all kw with k > 0 in the same direction of w. That is, for each w ∼ Uni(unit sphere) ∈ Rd, we define β(j)w as the total effect of weights in the same direction
β(j)w = ∫ β̂(j)u I ( w>u
‖w‖ · ‖u‖ = 1
) dP(u), for j = [d] (47)
where u ∼ N (0, I). Note that to ensure the βw is a well-defined number, here we can work with the polar representation and integrate with respect to an angle. Then βw is well-defined. But for simplicity of exposition, we use the plain notation of integral. Similarly, we define β′w as reloading the notation of
β′w = ∫ β̂uI ( w>u
‖w‖ · ‖u‖ = 1 ) · ‖u‖ ‖w‖ dP(u) (48)
Here, in Eqn. 48 we have an extra term of ‖u‖‖w‖ compared to Eqn. 47 because the NTK features that Eqn. 48 corresponds to,w>x · I ( w>x ≥ 0 ) , has an extraw> term. So we need to take into account the scaling. This abstraction enables us to make claims on the high-level parameters βw and β′w only, which we will show to be sufficient to determine the learned function.
Then we can formulate the constrained optimization problem whose solution gives a functional form of the neural network’s learned function. We rewrite the min-norm solution in Eqn. 45 as
min β
∫ ( β(1)w )2 + ( β(2)w )2 + ...+ ( β(d)w )2 + (β′w) 2 dP(w) (49)
s.t. ∫
w>xi≥0
β>wxi + β ′ w ·w>xi dP(w) = β>g xi ∀i ∈ [n], (50)
where the density of w is now uniform on the unit sphere of R^d. Observe that since w is from a uniform distribution, the probability density function P(w) is a constant. This means every xi is activated by half of the w on the unit sphere, which implies we can now write the right hand side of Eqn. 50 in the form of the left hand side, i.e., in integral form. This allows us to further simplify Eqn. 50 as

∫_{w>xi ≥ 0} ( βw> + β′w · w> − 2 · βg> ) xi dP(w) = 0  ∀i ∈ [n],   (51)
where Eqn. 51 follows from the following steps of simplification:

∫_{w>xi ≥ 0} βw^(1) xi^(1) + ... + βw^(d) xi^(d) + β′w · w>xi dP(w) = βg^(1) xi^(1) + ... + βg^(d) xi^(d)  ∀i ∈ [n],

⇐⇒ ∫_{w>xi ≥ 0} βw^(1) xi^(1) + ... + βw^(d) xi^(d) + β′w · w>xi dP(w)
  = ( 1 / ∫_{w>xi ≥ 0} dP(w) ) · ∫_{w>xi ≥ 0} dP(w) · ( βg^(1) xi^(1) + ... + βg^(d) xi^(d) )  ∀i ∈ [n],

⇐⇒ ∫_{w>xi ≥ 0} βw^(1) xi^(1) + ... + βw^(d) xi^(d) + β′w · w>xi dP(w)
  = 2 · ∫_{w>xi ≥ 0} βg^(1) xi^(1) + ... + βg^(d) xi^(d) dP(w)  ∀i ∈ [n],

⇐⇒ ∫_{w>xi ≥ 0} ( βw> + β′w · w> − 2 · βg> ) xi dP(w) = 0  ∀i ∈ [n].
Claim 1. Without loss of generality, assume the scaling factor c in the NTK feature map φ(x) is 1. Then the global optimum of the constrained optimization problem Eqn. 49 subject to Eqn. 51, i.e.,
min_β ∫ (βw^(1))² + (βw^(2))² + ... + (βw^(d))² + (β′w)² dP(w)   (52)

s.t. ∫_{w>xi ≥ 0} ( βw> + β′w · w> − 2 · βg> ) xi dP(w) = 0  ∀i ∈ [n],   (53)
satisfies βw + β′w ·w = 2βg for all w.
This claim implies the exact extrapolation we want to prove, i.e., fNTK(x) = g(x). This is because, if our claim holds, then for any x ∈ Rd
fNTK(x) = ∫_{w>x ≥ 0} βw>x + β′w · w>x dP(w)

= ∫_{w>x ≥ 0} 2 · βg>x dP(w)

= ∫_{w>x ≥ 0} dP(w) · 2βg>x

= (1/2) · 2βg>x = g(x)
Thus, it remains to prove Claim 1. To compute the optimum of the constrained optimization problem Eqn. 52, we consider its Lagrangian. It is clear that the objective Eqn. 52 is convex and the constraint Eqn. 53 is affine. Hence, by the KKT conditions, any solution that satisfies the Lagrange condition is the global optimum. The Lagrangian is
L(β, λ) = ∫ (βw^(1))² + (βw^(2))² + ... + (βw^(d))² + (β′w)² dP(w)   (54)
  − Σ_{i=1}^n λi · ∫_{w>xi ≥ 0} ( βw> + β′w · w> − 2 · βg> ) xi dP(w)   (55)

Setting the partial derivative of L(β, λ) with respect to each variable to zero gives
∂L/∂βw^(k) = 2βw^(k) P(w) + Σ_{i=1}^n λi · xi^(k) · I(w>xi ≥ 0) = 0   (56)

∂L/∂β′w = 2β′w P(w) + Σ_{i=1}^n λi · w>xi · I(w>xi ≥ 0) = 0   (57)

∂L/∂λi = ∫_{w>xi ≥ 0} ( βw> + β′w · w> − 2 · βg> ) xi dP(w) = 0   (58)
It is clear that the solution in Claim 1 immediately satisfies Eqn. 58. Hence, it remains to show there exists a set of λi for i ∈ [n] that satisfies Eqn. 56 and Eqn. 57. We can simplify Eqn. 56 as
βw^(k) = c · Σ_{i=1}^n λi · xi^(k) · I(w>xi ≥ 0),   (59)

where c is a constant. Similarly, we can simplify Eqn. 57 as

β′w = c · Σ_{i=1}^n λi · w>xi · I(w>xi ≥ 0)   (60)

Observe that combining Eqn. 59 and Eqn. 60 implies that the constraint Eqn. 60 can be further simplified as

β′w = w>βw   (61)
It remains to show that given the condition on training data, there exists a set of λi so that Eqn. 59 and Eqn. 61 are satisfied.
Global optimum via the geometry of training data. Recall that we assume our training data {(xi, yi)}_{i=1}^n satisfies: for any w ∈ R^d, there exist d linearly independent {xi^w}_{i=1}^d ⊂ X, where X = {xi}_{i=1}^n, so that w>xi^w ≥ 0 and −xi^w ∈ X for i = 1..d, e.g., an orthogonal basis of R^d and their opposite vectors. We will show that under this data regime, we have
(a) For any particular w, there indeed exists a set of λi that satisfies the constraints Eqn. 59 and Eqn. 61 for this particular w.

(b) For any w1 and w2 that activate the exact same set of {xi}, the same set of λi satisfies the constraints Eqn. 59 and Eqn. 61 of both w1 and w2.

(c) Whenever we rotate a w1 to a w2 so that the set of xi being activated changes, we can still find λi that satisfy the constraints of both w1 and w2.
Combining (a), (b) and (c) implies there exists a set of λ that satisfy the constraints for all w. Hence, it remains to show these three claims.
We first prove Claim (a). For each w, we must find a set of λi so that the following hold.
βw^(k) = c · Σ_{i=1}^n λi · xi^(k) · I(w>xi ≥ 0),    β′w = w>βw,    βw + β′w · w = 2βg
Here, βg is fixed and w is a fixed vector on the unit sphere. It is easy to see that βw is then determined by βg and w, and there indeed exists a solution (it amounts to solving a consistent linear system). Hence we are left with a linear system of d linear equations
βw^(k) = c · Σ_{i=1}^n λi · xi^(k) · I(w>xi ≥ 0)  ∀k ∈ [d]

to solve, with the free variables being the λi for which w activates xi, i.e., w>xi ≥ 0. By assumption, for any w the training data {(xi, yi)}_{i=1}^n contain at least d linearly independent xi that are activated by w. This guarantees that for any w we have at least d free variables. It follows that there must exist solutions λi to the linear system. This proves Claim (a).
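As an illustration of Claim (a), the following is a small numerical sketch (ours, under the assumption c = 1 and unit-norm w): given βg and w, the last two constraints pin down βw = 2βg − (w>βg)w, and the λi over d activated, linearly independent points are then obtained by solving a d-by-d linear system.

import numpy as np

rng = np.random.default_rng(0)
d = 4
beta_g = rng.standard_normal(d)                   # target linear coefficients
w = rng.standard_normal(d); w /= np.linalg.norm(w)

# Training data containing an orthogonal basis and its negations (2d points).
X = np.concatenate([np.eye(d), -np.eye(d)], axis=0)

# beta_w is pinned down by beta_w + (w> beta_w) w = 2 beta_g when ||w|| = 1.
beta_w = 2 * beta_g - (w @ beta_g) * w

# For each coordinate j, exactly one of +e_j, -e_j is activated by w (w>x >= 0),
# giving d linearly independent activated points.
X_act = np.array([x for x in X if w @ x >= 0])[:d]

# Solve Eqn. 59 restricted to these points with c = 1: sum_i lambda_i x_i = beta_w.
lam = np.linalg.solve(X_act.T, beta_w)
print(np.allclose(X_act.T @ lam, beta_w))         # True: the linear system is solvable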
Next, we show that (b) for any w1 and w2 that activate the exact same set of {xi}, the same set of λi can satisfy the constraints Eqn. 59 and Eqn. 61 of both w1 and w2. Because w1 and w2 are activated by the same set of xi, this implies

βw1 = c · Σ_{i=1}^n λi · xi · I(w1>xi ≥ 0) = c · Σ_{i=1}^n λi · xi · I(w2>xi ≥ 0) = βw2
Since the λi already satisfy constraint Eqn. 59 for w1, they also satisfy it for w2. Thus, it remains to show that βw1 + β′w1 · w1 = βw2 + β′w2 · w2, assuming βw1 = βw2, β′w1 = w1>βw1, and β′w2 = w2>βw2. This indeed holds because

βw1 + β′w1 · w1 = βw2 + β′w2 · w2

⇐⇒ β′w1 · w1> = β′w2 · w2> ⇐⇒ w1>βw1 w1> = w2>βw2 w2> ⇐⇒ w1>w1 βw1> = w2>w2 βw2> ⇐⇒ 1 · βw1> = 1 · βw2>

⇐⇒ βw1 = βw2

Here, we used the fact that w1 and w2 are vectors on the unit sphere. This proves Claim (b).
Finally, we show (c): whenever we rotate a w1 to a w2 so that the set of xi being activated changes, we can still find λi that satisfy the constraints of both w1 and w2. Suppose we rotate w1 to w2 so that w2 loses activation with x1, x2, ..., xp, which are in the set of linearly independent xi's activated by w1 and whose opposite vectors −xi are also in the training set (without loss of generality). Then w2 must now also get activated by −x1, −x2, ..., −xp. This is because if w2>xi < 0, we must have w2>(−xi) > 0. Recall that in the proof of Claim (a), we only needed the λi of the linearly independent xi (and their opposites) that we used as free variables to solve the linear system of d equations. Hence, we can set λ to 0 for the other xi while still satisfying the linear system. Then, suppose there exist λi that satisfy
βw1^(k) = c · Σ_{i=1}^d λi · xi^(k)

where the xi are the linearly independent vectors that activate w1 and whose opposite vectors are in the training set; the existence of such λi was proved in (a). Then we can satisfy the constraint for βw2 below

βw2^(k) = c · ( Σ_{i=1}^p λ̂i · (−xi)^(k) + Σ_{i=p+1}^d λi · xi^(k) )
by setting λ̂i = −λi for i = 1...p. Indeed, this gives
βw2^(k) = c · ( Σ_{i=1}^p (−λi) · (−xi)^(k) + Σ_{i=p+1}^d λi · xi^(k) )

= c · Σ_{i=1}^d λi · xi^(k)
Thus, we can also find λi that satisfy the constraint for βw2. Here, we do not consider the case where w2 is parallel to some xi, because such w2 have measure zero. Note that we can apply this argument iteratively, because flipping the sign always works and will not create any inconsistency.
Moreover, we can show that the constraint for β′w2 is satisfied by a similar argument as in the proof of Claim (b). This follows from the fact that our construction makes βw1 = βw2. Then we can follow the same argument as in (b) to show that βw1 + β′w1 · w1 = βw2 + β′w2 · w2. This completes the proof of Claim (c).
In summary, combining Claim (a), (b) and (c) gives that Claim 1 holds. That is, given our training data, the global optimum to the constrained optimization problem of finding the min-norm solution among functions that fit the training data satisfies βw+β′w ·w = 2βg . We also showed that this claim implies exact extrapolation, i.e., the network’s learned function f(x) is equal to the true underlying function g(x) for all x ∈ Rd. This completes the proof.
B.3 PROOF OF THEOREM 2
The proof of asymptotic convergence to extrapolation builds upon our proof of exact extrapolation, i.e., Lemma 1. The proof idea is that if the training data distribution has support in all directions, then as the number of samples n → ∞, the training set asymptotically converges to an imaginary training set that satisfies the condition for exact extrapolation. Since the neural tangent kernel values are close whenever the training data are close, the predictions, i.e., the learned function, converge to a function that achieves perfect extrapolation, that is, the true underlying function.
Asymptotic convergence of data sets. We first show the training data converge to a data set that satisfies the exact extrapolation condition in Lemma 1. Suppose training data {xi}_{i=1}^n are sampled from a distribution whose support contains a connected set S∗ that intersects all directions, i.e., for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S∗. Let us denote by S the set of datasets that satisfy the condition in Lemma 1. In fact, we will use a relaxed condition in the proof of Lemma 1 (Lemma 1 in the main text uses a stricter condition for simplicity of exposition). Given a general dataset X and a dataset S ∈ S of the same size n, let σ(X, S) denote a matching of their data points, i.e., σ outputs a sequence of pairs
σ(X, S)_i = (xi, si)  for i ∈ [n],  s.t.  X = {xi}_{i=1}^n and S = {si}_{i=1}^n.

Let ℓ : R^d × R^d → R be the l2 distance that takes in a pair of points. We then define the distance between the datasets, d(X, S), as the minimum sum of l2 distances of their data points over all possible matchings:

d(X, S) = min_σ Σ_{i=1}^n ℓ(σ(X, S)_i)  if |X| = |S| = n,   and   d(X, S) = ∞  if |X| ≠ |S|.
We can then define a "closest distance to a perfect dataset" function D∗ which maps a dataset X to the minimum distance of X to any dataset in S:

D∗(X) = min_{S ∈ S} d(X, S)
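The dataset distance d(X, S) is a minimum-cost matching, so for any fixed candidate "perfect" dataset it can be evaluated with a standard assignment solver. The sketch below (ours) computes d(X, S) against one hand-picked S ∈ S, which upper-bounds D∗(X), using scipy's linear_sum_assignment.

import numpy as np
from scipy.optimize import linear_sum_assignment

def dataset_distance(X, S):
    # d(X, S): minimum sum of pairwise l2 distances over all matchings.
    # X, S: arrays of shape (n, d). Returns np.inf if the sizes differ.
    if len(X) != len(S):
        return np.inf
    cost = np.linalg.norm(X[:, None, :] - S[None, :, :], axis=-1)   # pairwise distances
    row, col = linear_sum_assignment(cost)                          # optimal matching
    return cost[row, col].sum()

# Distance to one "perfect" dataset: an orthogonal basis and its negations.
d = 2
S0 = np.concatenate([np.eye(d), -np.eye(d)], axis=0)
X = S0 + 0.05 * np.random.default_rng(0).standard_normal(S0.shape)
print(dataset_distance(X, S0))    # small value: X is close to a dataset in S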
It is easy to see that for any dataset X = {xi}_{i=1}^n, D∗(X) can be bounded by the minimum of the closest distance to a perfect dataset D∗ over sub-datasets of X of size 2d:

D∗({xi}_{i=1}^n) ≤ min_{k=1}^{⌊n/2d⌋} D∗( {xj}_{j=(k−1)·2d+1}^{k·2d} )   (62)
This is because for any S ∈ S and any S ⊆ S′, we must have S′ ∈ S, since a dataset satisfies the exact extrapolation condition as long as it contains some key points. Thus, adding more data will not hurt, i.e., for any X1 ⊆ X2, we always have

D∗(X2) ≤ D∗(X1)
Now let us denote by Xn a random dataset of size n where each xi ∈ Xn is sampled from the training distribution. Recall that our training data {xi}_{i=1}^n are sampled from a distribution whose support contains a connected set S∗ that intersects all directions, i.e., for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S∗. It follows that for a random dataset X2d of size 2d, the probability that D∗(X2d) > ε is less than 1, for any ε > 0. First, there must exist S0 = {si}_{i=1}^{2d} ∈ S of size 2d, e.g., an orthogonal basis and their opposite vectors. Observe that if we scale any si by k > 0, the resulting dataset is still in S, by the definition of S. We denote the set of datasets obtained by scaling elements of S0 by S0. It follows that
P(D∗(X2d) > ε) = P( min_{S ∈ S} d(X2d, S) > ε )

≤ P( min_{S ∈ S0} d(X2d, S) > ε ) = P( min_{S ∈ S0} min_σ Σ_{i=1}^{2d} ℓ(σ(X2d, S)_i) > ε )

= 1 − P( min_{S ∈ S0} min_σ Σ_{i=1}^{2d} ℓ(σ(X2d, S)_i) ≤ ε )

≤ 1 − P( min_{S ∈ S0} min_σ max_{i=1}^{2d} ℓ(σ(X2d, S)_i) ≤ ε )

≤ δ < 1

where we denote the bound on P(D∗(X2d) > ε) by δ < 1, and the last step follows from

P( min_{S ∈ S0} min_σ max_{i=1}^{2d} ℓ(σ(X2d, S)_i) ≤ ε ) > 0
which further follows from the fact that for any si ∈ S0, by the assumption on the training distribution, we can always find k > 0 so that k·si ∈ S∗, a connected set in the support of the training distribution. By the connectivity of the support S∗, k·si cannot be an isolated point in S∗, so for any ε > 0, we must have

∫_{‖x − k·si‖ ≤ ε, x ∈ S∗} fX(x) dx > 0
Hence, we can now apply Eqn. 62 to bound D∗(Xn). Given any ε > 0, we have

P(D∗(Xn) > ε) = 1 − P(D∗(Xn) ≤ ε) ≤ 1 − P( min_{k=1}^{⌊n/2d⌋} D∗( {xj}_{j=(k−1)·2d+1}^{k·2d} ) ≤ ε )

≤ 1 − ( 1 − ∏_{k=1}^{⌊n/2d⌋} P( D∗( {xj}_{j=(k−1)·2d+1}^{k·2d} ) > ε ) )

= ∏_{k=1}^{⌊n/2d⌋} P( D∗( {xj}_{j=(k−1)·2d+1}^{k·2d} ) > ε ) ≤ δ^{⌊n/2d⌋}
Here δ < 1. This implies D∗(Xn) →p 0, i.e.,

lim_{n→∞} P(D∗(Xn) > ε) = 0  ∀ε > 0   (63)
Eqn. 63 says as the number of training samples n→∞, our training set will converge in probability to a dataset that satisfies the requirement for exact extrapolation.
Asymptotic convergence of predictions. Let NTK(x, x′) : R^d × R^d → R denote the neural tangent kernel for a two-layer ReLU MLP. It is easy to see that if x → x∗, then NTK(x, ·) → NTK(x∗, ·) (Arora et al. (2019b)). Let NTKtrain denote the n × n kernel matrix for the training data. We have shown that our training set converges to a perfect dataset that satisfies the conditions for exact extrapolation. Moreover, note that our training set will only have a finite number (not increasing with n) of xi that are not precisely the same as those in a perfect dataset. This is because a perfect dataset only needs to contain a finite number of key points, and the other points can be replaced by any other points while still forming a perfect dataset. Thus, we have NTKtrain → N∗, where N∗ is the n × n NTK matrix for some perfect dataset.
Because the neural tangent kernel is positive definite, we have NTKtrain^{-1} → N∗^{-1}. Recall that for any x ∈ R^d, the prediction of NTK is
fNTK(x) = (NTK(x, x1), ..., NTK(x, xn)) · NTKtrain^{-1} Y | 1. What is the main focus of the paper regarding neural network extrapolation?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis and practical insight?
3. Do you have any concerns or limitations regarding the paper's assumptions and conditions?
4. How do the results of the paper apply to other scenarios or applications?
5. Can the proof of the theorems be generalized to other types of neural networks or loss functions? | Review | Review
Summary
The paper studies how neural networks extrapolate. The authors theoretically examine two-layer ReLU MLPs with mean squared loss in the NTK regime and, building on these results, GNNs. They find that the MLPs quickly converge to linear functions along any direction from the origin, but can provably learn a linear target function where the training distribution is sufficiently diverse. For GNNs, they propose a hypothesis that the success of extrapolating algorithmic tasks to new data relies on encoding task-specific non-linearities in the architecture or features. The theoretical results are supported by empirical results which sometimes go beyond the specific conditions of the theorems (like increasing the number of layers in the MLP to 4 in Appendix C.1.).
Pros
The paper provides both theoretical and practical insight into the extrapolation capabilities of neural networks, especially GNNs.
I especially liked the part about GNNs and the hypothesis that if we can encode the non-linearities outside the MLPs so the MLPs only have to learn linear functions, GNNs will extrapolate well.
Overall I found the paper very interesting and fun to read.
Concerns
The theoretical MLP results are very specific. Sometimes this is not apparent from either the abstract or the discussion of the results. Some of the constraints:
The MLPs have two layers, which I find the most limiting constraint as most practical MLPs have more layers.
The mean squared loss is used throughout the paper. I think this is not emphasized enough (it is mentioned only a single time in the paper). As far as I understand the proofs also rely on the loss, so the loss should be included in the conditions of the theorems.
We are under the NTK regime, which is of course evident from the techniques used. Nevertheless, this is not mentioned in the abstract.
The MLPs are ReLU MLPs which is emphasized sufficiently in the paper. The authors include preliminary empirical results for other activations functions in the Appendix (sin, quadratic, and tanh).
Questions
Could the proofs of Theorem 3 and Theorem 5 be generalized to MLPs with more layers?
Can we gain some insight into extrapolation with other loss functions like softmax based on these results?
Reasons for ranking
I found the paper really interesting and gained much insight from it. Some of the constraints of the MLPs are not emphasized enough and the writing is more general at parts than the results warrant. Even with the constraints I believe that this is an important step and sheds light on the extrapolation capabilities of neural networks. If the constraints can be made clearer I'm willing to improve my score further.
Minor comments
Second to last paragraph on page 5: "for Theoreom 5" should be "for Theorem 5".
Caption of Figure 1: outisde => outside
In 4.2., "Experiments: architectures that help extrapolation": "GNNs with max-readout are better than GNNs with sum-readout (Fig 6a)" should be Fig 5a. |
ICLR | Title
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Abstract
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), do not extrapolate well in certain simple tasks, Graph Neural Networks (GNNs) – structured networks with MLP modules – have shown some success in more complex tasks. Working towards a theoretical explanation, we identify conditions under which MLPs and GNNs extrapolate well. First, we quantify the observation that ReLU MLPs quickly converge to linear functions along any direction from the origin, which implies that ReLU MLPs do not extrapolate most nonlinear functions. But, they can provably learn a linear target function when the training distribution is sufficiently “diverse”. Second, in connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features. Our theoretical analysis builds on a connection of over-parameterized networks to the neural tangent kernel. Empirically, our theory holds across different training settings.
1 INTRODUCTION
Humans extrapolate well in many tasks. For example, we can apply arithmetics to arbitrarily large numbers. One may wonder whether a neural network can do the same and generalize to examples arbitrarily far from the training data (Lake et al., 2017). Curiously, previous works report mixed extrapolation results with neural networks. Early works demonstrate feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), fail to extrapolate well when learning simple polynomial functions (Barnard & Wessels, 1992; Haley & Soloway, 1992). However, recent works show Graph Neural Networks (GNNs) (Scarselli et al., 2009), a class of structured networks with MLP building blocks, can generalize to graphs much larger than training graphs in challenging algorithmic tasks, such as predicting the time evolution of physical systems (Battaglia et al., 2016), learning graph algorithms (Velickovic et al., 2020), and solving mathematical equations (Lample & Charton, 2020).
To explain this puzzle, we formally study how neural networks trained by gradient descent (GD) extrapolate, i.e., what they learn outside the support of training distribution. We say a neural network extrapolates well if it learns a task outside the training distribution. At first glance, it may seem that neural networks can behave arbitrarily outside the training distribution since they have high capacity (Zhang et al., 2017) and are universal approximators (Cybenko, 1989; Funahashi, 1989; Hornik et al., 1989; Kurkova, 1992). However, neural networks are constrained by gradient descent training (Hardt et al., 2016; Soudry et al., 2018). In our analysis, we explicitly consider such implicit bias through the analogy of the training dynamics of over-parameterized neural networks and kernel regression via the neural tangent kernel (NTK) (Jacot et al., 2018).
Starting with feedforward networks, the simplest neural networks and building blocks of more complex architectures such as GNNs, we establish that the predictions of over-parameterized MLPs with ReLU activation trained by GD converge to linear functions along any direction from the origin. We prove a convergence rate for two-layer networks and empirically observe that convergence often occurs close to the training data (Figure 1), which suggests ReLU MLPs cannot extrapolate well for most nonlinear tasks. We emphasize that our results do not follow from the fact that ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019; Hein et al., 2019). While having finitely many linear regions implies ReLU MLPs eventually become linear, it does not say whether MLPs will learn the correct target function close to the training distribution. In contrast, our results are non-asymptotic and quantify what kind of functions MLPs will learn close to the training distribution. Second, we identify a condition when MLPs extrapolate well: the task is linear and the geometry of the training distribution is sufficiently “diverse”. To our knowledge, our results are the first extrapolation results of this kind for feedforward neural networks.
We then relate our insights into feedforward neural networks to GNNs, to explain why GNNs extrapolate well in some algorithmic tasks. Prior works report successful extrapolation for tasks that can be solved by dynamic programming (DP) (Bellman, 1966), which has a computation structure aligned with GNNs (Xu et al., 2020). DP updates can often be decomposed into nonlinear and linear steps. Hence, we hypothesize that GNNs trained by GD can extrapolate well in a DP task, if we encode appropriate non-linearities in the architecture and input representation (Figure 2). Importantly, encoding non-linearities may be unnecessary for GNNs to interpolate, because the MLP modules can easily learn many nonlinear functions inside the training distribution (Cybenko, 1989; Hornik et al., 1989; Xu et al., 2020), but it is crucial for GNNs to extrapolate correctly. We prove this hypothesis for a simplified case using Graph NTK (Du et al., 2019b). Empirically, we validate the hypothesis on three DP tasks: max degree, shortest paths, and n-body problem. We show GNNs with appropriate architecture, input representation, and training distribution can predict well on graphs with unseen sizes, structures, edge weights, and node features. Our theory explains the empirical success in previous works and suggests their limitations: successful extrapolation relies on encoding task-specific non-linearities, which requires domain knowledge or extensive model search. From a broader standpoint, our insights go beyond GNNs and apply broadly to other neural networks.
To summarize, we study how neural networks extrapolate. First, ReLU MLPs trained by GD converge to linear functions along directions from the origin with a rate of O(1/t). Second, to explain why GNNs extrapolate well in some algorithmic tasks, we prove that ReLU MLPs can extrapolate well in linear tasks, leading to a hypothesis: a neural network can extrapolate well when appropriate nonlinearities are encoded into the architecture and features. We prove this hypothesis for a simplified case and provide empirical support for more general settings.
1.1 RELATED WORK
Early works show example tasks where MLPs do not extrapolate well, e.g. learning simple polynomials (Barnard & Wessels, 1992; Haley & Soloway, 1992). We instead show a general pattern of how ReLU MLPs extrapolate and identify conditions for MLPs to extrapolate well. More recent works study the implicit biases induced on MLPs by gradient descent, for both the NTK and mean field regimes (Bietti & Mairal, 2019; Chizat & Bach, 2018; Song et al., 2018). Related to our results, some works show MLP predictions converge to “simple” piecewise linear functions, e.g., with few linear regions (Hanin & Rolnick, 2019; Maennel et al., 2018; Savarese et al., 2019; Williams et al., 2019). Our work differs in that none of these works explicitly studies extrapolation, and some focus only on one-dimensional inputs. Recent works also show that in high-dimensional settings of the NTK regime, MLP is asymptotically at most a linear predictor in certain scaling limits (Ba et al., 2020; Ghorbani et al., 2019). We study a different setting (extrapolation), and our analysis is non-asymptotic in nature and does not rely on random matrix theory.
Prior works explore GNN extrapolation by testing on larger graphs (Battaglia et al., 2018; Santoro et al., 2018; Saxton et al., 2019; Velickovic et al., 2020). We are the first to theoretically study GNN extrapolation, and we complete the notion of extrapolation to include unseen features and structures.
2 PRELIMINARIES
We begin by introducing our setting. Let X be the domain of interest, e.g., vectors or graphs. The task is to learn an underlying function g : X → R with a training set {(xi, yi)}ni=1 ⊂ D, where yi = g(xi) and D is the support of training distribution. Previous works have extensively studied in-distribution generalization where the training and the test distributions are identical (Valiant, 1984; Vapnik, 2013); i.e., D = X . In contrast, extrapolation addresses predictions on a domain X that is larger than the support of the training distribution D. We will say a model extrapolates well if it has a small extrapolation error.
Definition 1. (Extrapolation error). Let f : X → R be a model trained on {(xi, yi)}ni=1 ⊂ D with underlying function g : X → R. Let P be a distribution over X \ D and let ` : R× R→ R be a loss function. We define the extrapolation error of f as Ex∼P [`(f(x), g(x))].
We focus on neural networks trained by gradient descent (GD) or its variants with squared loss. We study two network architectures: feedforward and graph neural networks.
Graph Neural Networks. GNNs are structured networks operating on graphs with MLP modules (Battaglia et al., 2018; Xu et al., 2019). Let G = (V, E) be a graph. Each node u ∈ V has a feature vector xu, and each edge (u, v) ∈ E has a feature vector w_(u,v). GNNs recursively compute node representations h_u^(k) at iteration k (Gilmer et al., 2017; Xu et al., 2018). Initially, h_u^(0) = xu. For k = 1..K, GNNs update h_u^(k) by aggregating the neighbor representations. We can optionally compute a graph representation hG by aggregating the final node representations. That is,

h_u^(k) = Σ_{v ∈ N(u)} MLP^(k)( h_u^(k−1), h_v^(k−1), w_(v,u) ),    hG = MLP^(K+1)( Σ_{u ∈ G} h_u^(K) ).   (1)
The final output is the graph representation hG or the final node representations h_u^(K), depending on the task. We refer to the neighbor aggregation step for h_u^(k) as aggregation and the pooling step in hG as readout. Previous works typically use sum-aggregation and sum-readout (Battaglia et al., 2018). Our results indicate why replacing them may help extrapolation (Section 4).
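To make Eqn. 1 concrete, the following is a minimal numpy sketch of one such GNN (ours, not the paper's implementation): sum-aggregation over in-neighbors followed by a readout over nodes. It assumes every node has at least one in-neighbor and takes the MLP modules as plain callables.

import numpy as np

def gnn_forward(h, edges, edge_feat, mlps, readout=np.sum):
    # h: (num_nodes, d0) initial node features.
    # edges: list of directed pairs (v, u), meaning v sends a message to u.
    # edge_feat: dict mapping (v, u) -> edge feature vector w_(v,u).
    # mlps: list of K callables; each maps [h_u, h_v, w_(v,u)] to a new vector.
    # readout: np.sum for sum-readout (or np.max for max-readout).
    for mlp in mlps:
        new_h = []
        for u in range(len(h)):
            msgs = [mlp(np.concatenate([h[u], h[v], edge_feat[(v, u)]]))
                    for (v, tgt) in edges if tgt == u]      # messages from neighbors v
            new_h.append(np.sum(msgs, axis=0))              # sum-aggregation
        h = np.stack(new_h)
    return readout(h, axis=0)                               # graph-level readout

h0 = np.ones((3, 1))
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
efeat = {e: np.zeros(1) for e in edges}
mlp = lambda z: z[1:2]                                      # a linear "MLP" returning h_v
print(gnn_forward(h0, edges, efeat, [mlp], readout=np.sum))

Swapping np.sum for np.max (or np.min) in the aggregation and readout gives the architectural variants discussed in Section 4.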
3 HOW FEEDFORWARD NEURAL NETWORKS EXTRAPOLATE
Feedforward networks are the simplest neural networks and building blocks of more complex architectures such as GNNs, so we first study how they extrapolate when trained by GD. Throughout the paper, we assume ReLU activation. Section 3.3 contains preliminary results for other activations.
3.1 LINEAR EXTRAPOLATION BEHAVIOR OF RELU MLPS
By architecture, ReLU networks learn piecewise linear functions, but what do these regions precisely look like outside the support of the training data? Figure 1 illustrates examples of how ReLU MLPs extrapolate when trained by GD on various nonlinear functions. These examples suggest that outside the training support, the predictions quickly become linear along directions from the origin. We systematically verify this pattern by linear regression on MLPs’ predictions: the coefficient of determination (R2) is always greater than 0.99 (Appendix C.2). That is, ReLU MLPs “linearize” almost immediately outside the training data range.
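The linearization check above can be reproduced in a few lines. The sketch below (ours) approximates an NTK-regime two-layer ReLU network with the random-feature model of Lemma 3 and a min-norm least-squares fit, then fits a line to its predictions along one direction outside the training range; the target function, direction, and feature count are illustrative choices rather than the paper's exact setup.

import numpy as np
rng = np.random.default_rng(0)

d, m = 2, 4000
W = rng.standard_normal((m, d))                       # random ReLU directions

def phi(X):
    # Finite-width approximation of the NTK feature map (Lemma 3 in the Appendix).
    act = (X @ W.T >= 0).astype(float)
    return np.concatenate([(act[:, :, None] * X[:, None, :]).reshape(len(X), -1),
                           act * (X @ W.T)], axis=1) / np.sqrt(m)

X_train = rng.uniform(-1, 1, size=(500, d))           # training support: [-1, 1]^2
y_train = np.cos(2 * np.pi * X_train).sum(axis=1)     # a nonlinear target (the "cos" task)
beta = np.linalg.lstsq(phi(X_train), y_train, rcond=None)[0]   # min-norm fit

v = np.array([1.0, 0.5]); v /= np.linalg.norm(v)      # a direction from the origin
t = np.linspace(2.0, 6.0, 100)                        # points outside the training range
preds = phi(t[:, None] * v) @ beta
slope, intercept = np.polyfit(t, preds, 1)
r2 = 1 - np.var(preds - (slope * t + intercept)) / np.var(preds)
print(f"R^2 of a linear fit outside the training range: {r2:.4f}")   # close to 1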
We formalize this observation using the implicit biases of neural networks trained by GD via the neural tangent kernel (NTK): optimization trajectories of over-parameterized networks trained by GD are equivalent to those of kernel regression with a specific neural tangent kernel, under a set of assumptions called the “NTK regime” (Jacot et al., 2018). We provide an informal definition here; for further details, we refer the readers to Jacot et al. (2018) and Appendix A.
Definition 2. (Informal) A neural network trained in the NTK regime is infinitely wide, randomly initialized with certain scaling, and trained by GD with infinitesimal steps.
Prior works analyze optimization and in-distribution generalization of over-parameterized neural networks via NTK (Allen-Zhu et al., 2019a;b; Arora et al., 2019a;b; Cao & Gu, 2019; Du et al., 2019c;a; Li & Liang, 2018; Nitanda & Suzuki, 2021). We instead analyze extrapolation.
Theorem 1 formalizes our observation from Figure 1: outside the training data range, along any direction tv from the origin, the prediction of a two-layer ReLU MLP quickly converges to a linear function with rate O(1/t). The linear coefficients βv and the constant terms in the convergence rate depend on the training data and direction v. The proof is in Appendix B.1.

Theorem 1. (Linear extrapolation). Suppose we train a two-layer ReLU MLP f : R^d → R with squared loss in the NTK regime. For any direction v ∈ R^d, let x0 = tv. As t → ∞, f(x0 + hv) − f(x0) → βv · h for any h > 0, where βv is a constant linear coefficient. Moreover, given ε > 0, for t = O(1/ε), we have |(f(x0 + hv) − f(x0))/h − βv| < ε.
ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019), hence their predictions eventually become linear. In contrast, Theorem 1 is a more fine-grained analysis of how MLPs extrapolate and provides a convergence rate. While Theorem 1 assumes two-layer networks in the NTK regime, experiments confirm that the linear extrapolation behavior happens across networks with different depths, widths, learning rates, and batch sizes (Appendix C.1 and C.2). Our proof technique potentially also extends to deeper networks.
Theorem 1 implies which target functions a ReLU MLP may be able to match outside the training data: only functions that are almost-linear along the directions away from the origin. Indeed, Figure 4a shows ReLU MLPs do not extrapolate target functions such as x>Ax (quadratic), Σ_{i=1}^d cos(2π · x^(i)) (cos), and Σ_{i=1}^d √(x^(i)) (sqrt), where x^(i) is the i-th dimension of x. With suitable hyperparameters, MLPs extrapolate the L1 norm correctly, which satisfies the directional linearity condition.
Figure 4a provides one more positive result: MLPs extrapolate linear target functions well, across many different hyperparameters. While learning linear functions may seem very limited at first, in Section 4 this insight will help explain extrapolation properties of GNNs in non-linear practical tasks. Before that, we first theoretically analyze when MLPs extrapolate well.
3.2 WHEN RELU MLPS PROVABLY EXTRAPOLATE WELL
Figure 4a shows that MLPs can extrapolate well when the target function is linear. However, this is not always true. In this section, we show that successful extrapolation depends on the geometry of training data. Intuitively, the training distribution must be “diverse” enough for correct extrapolation.
We provide two conditions that relate the geometry of the training data to extrapolation. Lemma 1 states that over-parameterized MLPs can learn a linear target function with only 2d examples. Lemma 1. Let g(x) = β>x be the target function for β ∈ Rd. Suppose {xi}ni=1 contains an orthogonal basis {x̂i}di=1 and {−x̂i}di=1. If we train a two-layer ReLU MLP f on {(xi, yi)}ni=1 with squared loss in the NTK regime, then f(x) = β>x for all x ∈ Rd.
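The 2d examples of Lemma 1 are easy to write down explicitly. The sketch below (ours) again uses the random-feature approximation of the NTK-regime network from Lemma 3: trained on an orthogonal basis and its negations with a linear target, the min-norm fit matches β>x even far outside the training points, up to an approximation error that shrinks as the number of random features m grows.

import numpy as np
rng = np.random.default_rng(1)

d, m = 3, 5000
beta = rng.standard_normal(d)                          # target g(x) = beta> x
X_train = np.concatenate([np.eye(d), -np.eye(d)])      # the 2d points of Lemma 1
y_train = X_train @ beta

W = rng.standard_normal((m, d))
def phi(X):
    # Finite-width approximation of the NTK feature map (Lemma 3).
    act = (X @ W.T >= 0).astype(float)
    return np.concatenate([(act[:, :, None] * X[:, None, :]).reshape(len(X), -1),
                           act * (X @ W.T)], axis=1) / np.sqrt(m)

coef = np.linalg.lstsq(phi(X_train), y_train, rcond=None)[0]   # min-norm solution

X_far = 50 * rng.standard_normal((5, d))               # far outside the training points
rel_err = np.abs(phi(X_far) @ coef - X_far @ beta) / np.abs(X_far @ beta)
print(rel_err.max())                                   # small; shrinks as m grows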
Lemma 1 is mainly of theoretical interest, as the 2d examples need to be carefully chosen. Theorem 2 builds on Lemma 1 and identifies a more practical condition for successful extrapolation: if the support of the training distribution covers all directions (e.g., a hypercube that covers the origin), the MLP converges to a linear target function with sufficient training data. Theorem 2. (Conditions for extrapolation). Let g(x) = β>x be the target function for β ∈ Rd. Suppose {xi}ni=1 is sampled from a distribution whose support D contains a connected subset S, where for any non-zero w ∈ Rd, there exists k > 0 so that kw ∈ S. If we train a two-layer ReLU MLP f : Rd → R on {(xi, yi)}ni=1 with squared loss in the NTK regime, f(x) p−→ β>x as n→∞.
Experiments: geometry of training data affects extrapolation. The condition in Theorem 2 formalizes the intuition that the training distribution must be “diverse” for successful extrapolation, e.g., D includes all directions. Empirically, the extrapolation error is indeed small when the condition of Theorem 2 is satisfied (“all” in Figure 4b). In contrast, the extrapolation error is much larger when the training examples are restricted to only some directions (Figure 4b and Figure 3).
Relating to previous works, Theorem 2 suggests why spurious correlations may hurt extrapolation, complementing the causality arguments (Arjovsky et al., 2019; Peters et al., 2016; Rojas-Carulla et al., 2018). When the training data has spurious correlations, some combinations of features are missing; e.g., camels might only appear in deserts in an image collection. Therefore, the condition for Theorem 2 no longer holds, and the model may extrapolate incorrectly. Theorem 2 is also analogous to an identifiability condition for linear models, but stricter. We can uniquely identify a linear function if the training data has full (feature) rank. MLPs are more expressive, so identifying the linear target function requires additional constraints.
To summarize, we analyze how ReLU MLPs extrapolate and provide two insights: (1) MLPs cannot extrapolate most nonlinear tasks due to their linear extrapolation (Theorem 1); and (2) MLPs extrapolate well when the target function is linear, if the training distribution is “diverse” (Theorem 2). In the next section, these results help us understand how more complex networks extrapolate.
3.3 MLPS WITH OTHER ACTIVATION FUNCTIONS
Before moving on to GNNs, we complete the picture of MLPs with experiments on other activation functions: tanh σ(x) = tanh(x), cosine σ(x) = cos(x) (Lapedes & Farber, 1987; McCaughan, 1997; Sopena & Alquezar, 1994), and quadratic σ(x) = x2 (Du & Lee, 2018; Livni et al., 2014). Details are in Appendix C.4. MLPs extrapolate well when the activation and target function are similar; e.g., tanh activation extrapolates well when learning tanh, but not other functions (Figure 5). Moreover, each activation function has different limitations. To extrapolate the tanh function with tanh activation, the training data range has to be sufficiently wide. When learning a quadratic function with quadratic activation, only two-layer networks extrapolate well as more layers lead to higher-order polynomials. Cosine activations are hard to optimize for high-dimensional data, so we only consider one/two dimensional cosine target functions.
4 HOW GRAPH NEURAL NETWORKS EXTRAPOLATE
Above, we saw that extrapolation in nonlinear tasks is hard for MLPs. Despite this limitation, GNNs have been shown to extrapolate well in some nonlinear algorithmic tasks, such as intuitive physics (Battaglia et al., 2016; Janner et al., 2019), graph algorithms (Battaglia et al., 2018; Velickovic et al., 2020), and symbolic mathematics (Lample & Charton, 2020). To address this discrepancy, we build on our MLP results and study how GNNs trained by GD extrapolate.
4.1 HYPOTHESIS: LINEAR ALGORITHMIC ALIGNMENT HELPS EXTRAPOLATION
We start with an example: training GNNs to solve the shortest path problem. For this task, prior works observe that a modified GNN architecture with min-aggregation can generalize to graphs larger than those in the training set (Battaglia et al., 2018; Velickovic et al., 2020):
h_u^(k) = min_{v ∈ N(u)} MLP^(k)( h_u^(k−1), h_v^(k−1), w_(v,u) ).   (2)
We first provide an intuitive explanation (Figure 2a). Shortest path can be solved by the Bellman-Ford (BF) algorithm (Bellman, 1958) with the following update:
d[k][u] = min_{v ∈ N(u)} d[k − 1][v] + w(v, u),   (3)
where w(v, u) is the weight of edge (v, u), and d[k][u] is the shortest distance to node u within k steps. The two equations can be easily aligned: GNNs simulate the BF algorithm if its MLP modules learn a linear function d[k−1][v]+w(v, u). Since MLPs can extrapolate linear tasks, this “alignment” may explain why min-aggregation GNNs can extrapolate well in this task.
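For reference, the Bellman-Ford update of Eqn. 3 takes only a few lines; the sketch below (ours) makes the correspondence explicit: the inner min over neighbors plays the role of min-aggregation, and the term being minimized, d[k−1][v] + w(v, u), is exactly the linear function the MLP modules would need to learn.

import numpy as np

def bellman_ford(n, edges, source, num_steps):
    # n: number of nodes; edges: dict mapping (v, u) -> weight w(v, u);
    # source: source node; num_steps: number of iterations k.
    dist = np.full(n, np.inf)
    dist[source] = 0.0
    for _ in range(num_steps):
        new_dist = dist.copy()                             # keep d[k-1][u] as a candidate
        for (v, u), w in edges.items():
            new_dist[u] = min(new_dist[u], dist[v] + w)    # the "min-aggregation" step
        dist = new_dist
    return dist

edges = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 5.0}
print(bellman_ford(3, edges, source=0, num_steps=2))       # [0. 1. 3.]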
For comparison, we can reason why we would not expect GNNs with the more commonly used sum-aggregation (Eqn. 1) to extrapolate well in this task. With sum-aggregation, the MLP modules need to learn a nonlinear function to simulate the BF algorithm, but Theorem 1 suggests that they will not extrapolate most nonlinear functions outside the training support.
We can generalize the above intuition to other algorithmic tasks. Many tasks where GNNs extrapolate well can be solved by dynamic programming (DP) (Bellman, 1966), an algorithmic paradigm with a recursive structure similar to GNNs’ (Eqn. 1) (Xu et al., 2020).
Definition 3. Dynamic programming (DP) is a recursive procedure with updates
Answer[k][s] = DP-Update({Answer[k − 1][s′]} , s′ = 1...n), (4)
where Answer[k][s] is the solution to a sub-problem indexed by iteration k and state s, and DP-Update is a task-specific update function that solves the sub-problem based on the previous iteration.
From a broader standpoint, we hypothesize that: if we encode appropriate non-linearities into the model architecture and input representations so that the MLP modules only need to learn nearly linear steps, then the resulting neural network can extrapolate well.
Hypothesis 1. (Linear algorithmic alignment). Let f : X → R be the underlying function and N a neural network with m MLP modules. Suppose there exist m linear functions {gi}mi=1 so that by replacing N ’s MLP modules with gi’s, N simulates f . Given > 0, there exists {(xi, f(xi))}ni=1 ⊂ D ( X so that N trained on {(xi, f(xi))}ni=1 by GD with squared loss learns f̂ with ‖f̂ − f‖ < .
Our hypothesis builds on the algorithmic alignment framework of (Xu et al., 2020), which states that a neural network interpolates well if the modules are “aligned” to easy-to-learn (possibly nonlinear) functions. Successful extrapolation is harder: the modules need to align with linear functions.
Applications of linear algorithmic alignment. In general, linear algorithmic alignment is not restricted to GNNs and applies broadly to neural networks. To satisfy the condition, we can encode appropriate nonlinear operations in the architecture or input representation (Figure 2). Learning DP algorithms with GNNs is one example of encoding non-linearity in the architecture (Battaglia et al., 2018; Corso et al., 2020). Another example is to encode log-and-exp transforms in the architecture to help extrapolate multiplication in arithmetic tasks (Trask et al., 2018; Madsen & Johansen, 2020). Neural symbolic programs take a step further and encode a library of symbolic operations to help extrapolation (Johnson et al., 2017; Mao et al., 2019; Yi et al., 2018).
For some tasks, it may be easier to change the input representation (Figure 2b). Sometimes, we can decompose the target function f as f = g ◦ h into a feature embedding h and a “simpler” target function g that our model can extrapolate well. We can obtain h via specialized features or feature transforms using domain knowledge (Lample & Charton, 2020; Webb et al., 2020), or via representation learning (e.g., BERT) with unlabeled out-of-distribution data in X \ D (Chen et al., 2020; Devlin et al., 2019; Hu et al., 2020; Mikolov et al., 2013b; Peters et al., 2018). This brings a new perspective of how representations help extrapolation in various application areas. For example, in natural language processing, pretrained representations (Mikolov et al., 2013a; Wu & Dredze, 2019) and feature transformation using domain knowledge (Yuan et al., 2020; Zhang et al., 2019) help models generalize across languages, a special type of extrapolation. In quantitative finance, identifying the right “factors” or features is crucial for deep learning models as the financial markets may frequently be in extrapolation regimes (Banz, 1981; Fama & French, 1993; Ross, 1976).
Linear algorithmic alignment explains successful extrapolation in the literature and suggests that extrapolation is harder in general: encoding appropriate non-linearity often requires domain expertise or model search. Next, we provide theoretical and empirical support for our hypothesis.
4.2 THEORETICAL AND EMPIRICAL SUPPORT
We validate our hypothesis on three DP tasks: max degree, shortest path, and n-body problem, and prove the hypothesis for max degree. We highlight the role of graph structures in extrapolation.
Theoretical analysis. We start with a simple yet fundamental task: learning the max degree of a graph, a special case of DP with one iteration. As a corollary of Theorem 1, the commonly used sum-based GNN (Eqn. 1) cannot extrapolate well (proof in Appendix B.4).
Corollary 1. GNNs with sum-aggregation and sum-readout do not extrapolate well in Max Degree.
To achieve linear algorithmic alignment, we can encode the only non-linearity, the max function, in the readout. Theorem 3 confirms that a GNN with max-readout can extrapolate well in this task.

Theorem 3. (Extrapolation with GNNs). Assume all nodes have the same feature. Let g and g′ be the max and min degree function, respectively. Let {(Gi, g(Gi))}_{i=1}^n be the training set. If {(g(Gi), g′(Gi), g(Gi) · Ni^max, g′(Gi) · Ni^min)}_{i=1}^n spans R^4, where Ni^max and Ni^min are the number of nodes that have max/min degree on Gi, then one-layer max-readout GNNs trained on {(Gi, g(Gi))}_{i=1}^n with squared loss in the NTK regime learn g.
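As a toy illustration of why encoding the max in the readout helps (our sketch, not the paper's experimental code): with identical node features and a module that is merely linear, a one-layer GNN with max-readout already represents the max degree exactly, whereas sum-readout computes a different quantity.

import numpy as np

def one_layer_gnn(A, node_feat, mlp, readout):
    # One aggregation step in the style of Eqn. 1, followed by a graph readout.
    # A: (n, n) adjacency matrix; node_feat: (n, d) node features;
    # mlp: callable applied to [h_u, h_v] for every edge; readout: np.max or np.sum.
    n = A.shape[0]
    h = np.array([sum(mlp(np.concatenate([node_feat[u], node_feat[v]]))
                      for v in range(n) if A[u, v]) for u in range(n)])
    return readout(h, axis=0)

A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)          # a star graph: max degree 3
ones = np.ones((4, 1))                             # identical node features
linear_mlp = lambda z: z[1:]                       # a linear module: return h_v
print(one_layer_gnn(A, ones, linear_mlp, np.max))  # [3.]  -> max degree
print(one_layer_gnn(A, ones, linear_mlp, np.sum))  # [6.]  -> sum of degrees instead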
Theorem 3 does not follow immediately from Theorem 2, because MLP modules in GNNs only receive indirect supervision. We analyze the Graph NTK (Du et al., 2019b) to prove Theorem 3 in Appendix B.5. While Theorem 3 assumes identical node features, we empirically observe similar results for both identical and non-identical features (Figure 16 in Appendix).
Interpretation of conditions. The condition in Theorem 3 is analogous to that in Theorem 2. Both theorems require diverse training data, measured by graph structure in Theorem 3 or directions in Theorem 2. In Theorem 3, the condition is violated if all training graphs have the same max or min node degrees, e.g., when training data are from one of the following families: path, C-regular graphs (regular graphs with degree C), cycle, and ladder.
Experiments: architectures that help extrapolation. We validate our theoretical analysis with two DP tasks: max degree and shortest path (details in Appendix C.5 and C.6). While previous works only test on graphs with different sizes (Battaglia et al., 2018; Velickovic et al., 2020), we also test on graphs with unseen structure, edge weights and node features. The results support our theory. For max degree, GNNs with max-readout are better than GNNs with sum-readout (Figure 6a), confirming Corollary 1 and Theorem 3. For shortest path, GNNs with min-readout and min-aggregation are better than GNNs with sum-readout (Figure 6a).
Experiments confirm the importance of training graphs structure (Figure 7). Interestingly, the two tasks favor different graph structure. For max degree, as Theorem 3 predicts, GNNs extrapolate well when trained on trees, complete graphs, expanders, and general graphs, and extrapolation errors are
higher when trained on 4-regular, cycles, or ladder graphs. For shortest path, extrapolation errors follow a U-shaped curve as we change the sparsity of training graphs (Figure 7b and Figure 18 in Appendix). Intuitively, models trained on sparse or dense graphs likely learn degenerative solutions.
Experiments: representations that help extrapolation. Finally, we show a good input representation helps extrapolation. We study the n-body problem (Battaglia et al., 2016; Watters et al., 2017) (Appendix C.7), that is, predicting the time evolution of n objects in a gravitational system. Following previous work, the input is a complete graph where the nodes are the objects (Battaglia et al., 2016). The node feature for u is the concatenation of the object's mass mu, position x_u^(t), and velocity v_u^(t) at time t. The edge features are set to zero. We train GNNs to predict the velocity of each object u at time t + 1. The true velocity f(G; u) for object u is approximately

f(G; u) ≈ v_u^t + a_u^t · dt,    a_u^t = C · Σ_{v ≠ u} ( mv / ‖x_u^t − x_v^t‖2^3 ) · ( x_v^t − x_u^t ),   (5)

where C is a constant. To learn f, the MLP modules need to learn a nonlinear function. Therefore, GNNs do not extrapolate well to unseen masses or distances ("original features" in Figure 6b). We instead use an improved representation h(G) to encode non-linearity. At time t, we transform the edge features of (u, v) from zero to w_(u,v)^(t) = mv · ( x_v^(t) − x_u^(t) ) / ‖x_u^(t) − x_v^(t)‖2^3. The new edge features do not add information, but the MLP modules now only need to learn linear functions, which helps extrapolation ("improved features" in Figure 6b).
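The improved edge features are a simple deterministic transform of the raw node features; a minimal numpy sketch (ours) is below. With these features, the acceleration in Eqn. 5 becomes C times a plain sum of edge features, i.e., a linear function for the MLP modules.

import numpy as np

def improved_edge_features(mass, pos):
    # Edge features w_(u,v) = m_v * (x_v - x_u) / ||x_u - x_v||^3 at one time step.
    # mass: (n,) object masses; pos: (n, 3) positions.
    # Returns an (n, n, 3) array; entry [u, v] is the feature of edge (u, v).
    diff = pos[None, :, :] - pos[:, None, :]               # diff[u, v] = x_v - x_u
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                         # no self-edges
    return mass[None, :, None] * diff / dist[:, :, None] ** 3

mass = np.array([1.0, 2.0, 0.5])
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
w = improved_edge_features(mass, pos)
print(w.shape, w[0, 1])    # (3, 3, 3), and w[0, 1] = m_1 * (x_1 - x_0) / ||x_0 - x_1||^3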
5 CONNECTIONS TO OTHER OUT-OF-DISTRIBUTION SETTINGS
We discuss several related settings. Intuitively, from the viewpoint of our results above, methods in related settings may improve extrapolation by 1) learning useful non-linearities beyond the training data range and 2) mapping relevant test data to the training data range.
Domain adaptation studies generalization to a specific target domain (Ben-David et al., 2010; Blitzer et al., 2008; Mansour et al., 2009). Typical strategies adjust the training process: for instance, use unlabeled samples from the target domain to align the target and source distributions (Ganin et al., 2016; Zhao et al., 2018). Using target domain data during training may induce useful non-linearities and may mitigate extrapolation by matching the target and source distributions, though the correctness of the learned mapping depends on the label distribution (Zhao et al., 2019).
Self-supervised learning on a large amount of unlabeled data can learn useful non-linearities beyond the labeled training data range (Chen et al., 2020; Devlin et al., 2019; He et al., 2020; Peters et al., 2018). Hence, our results suggest an explanation why pre-trained representations such as BERT improve out-of-distribution robustness (Hendrycks et al., 2020). In addition, self-supervised learning could map semantically similar data to similar representations, so some out-of-domain examples might fall inside the training distribution after the mapping.
Invariant models aim to learn features that respect specific invariances across multiple training distributions (Arjovsky et al., 2019; Rojas-Carulla et al., 2018; Zhou et al., 2021). If the model indeed learns these invariances, which can happen in the linear case and when there are confounders or anti-causal variables (Ahuja et al., 2021; Rosenfeld et al., 2021), this may essentially increase the training data range, since variations in the invariant features may be ignored by the model.
Distributional robustness considers small adversarial perturbations of the data distribution, and ensures that the model performs well under these (Goh & Sim, 2010; Sagawa et al., 2020; Sinha et al., 2018; Staib & Jegelka, 2019). We instead look at more global perturbations. Still, one would expect that modifications that help extrapolation in general also improve robustness to local perturbations.
6 CONCLUSION
This paper is an initial step towards formally understanding how neural networks trained by gradient descent extrapolate. We identify conditions under which MLPs and GNNs extrapolate as desired. We also suggest an explanation how GNNs have been able to extrapolate well in complex algorithmic tasks: encoding appropriate non-linearity in architecture and features can help extrapolation. Our results and hypothesis agree with empirical results, in this paper and in the literature.
ACKNOWLEDGMENTS
We thank Ruosong Wang, Tianle Cai, Han Zhao, Yuichi Yoshida, Takuya Konishi, Toru Lin, Weihua Hu, Matt J. Staib, Yichao Zhou, Denny Wu, Tianyi Yang, and Dingli (Leo) Yu for insightful discussions. This research was supported by NSF CAREER award 1553284, NSF III 1900933, and a Chevron-MIT Energy Fellowship. This research was also supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. MZ was supported by ODNI, IARPA, via the BETTER Program contract 2019-19051600005. The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Department of Defense, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
A THEORETICAL BACKGROUND
In this section, we introduce theoretical background on neural tangent kernel (NTK), which draws an equivalence between the training dynamics of infinitely-wide (or ultra-wide) neural networks and that of kernel regression with respect to the neural tangent kernel.
Consider a general neural network f(θ, x) : X → R, where θ ∈ R^m are the parameters of the network and x ∈ X is the input. Suppose we train the neural network by minimizing the squared loss over the training data, ℓ(θ) = (1/2) Σ_{i=1}^n (f(θ, xi) − yi)², by gradient descent with infinitesimally small learning rate, i.e., dθ(t)/dt = −∇ℓ(θ(t)). Let u(t) = (f(θ(t), xi))_{i=1}^n be the network outputs. u(t) follows the dynamics

du(t)/dt = −H(t)(u(t) − y),   (6)
where H(t) is an n × n matrix whose (i, j)-th entry is

H(t)_{ij} = ⟨ ∂f(θ(t), xi)/∂θ , ∂f(θ(t), xj)/∂θ ⟩.   (7)
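For a two-layer ReLU network f(θ, x) = a>ReLU(Wx)/sqrt(m), the gradients in Eqn. 7 can be written out by hand, so the empirical kernel H(0) at initialization is easy to compute. The sketch below (ours, with our own scaling and naming conventions) does this in numpy.

import numpy as np

def empirical_ntk(X, W, a):
    # Empirical H(0) of Eqn. 7 for f(theta, x) = a> relu(W x) / sqrt(m).
    # X: (n, d) inputs; W: (m, d) first-layer weights; a: (m,) second-layer weights.
    m = W.shape[0]
    pre = X @ W.T                                  # (n, m) pre-activations
    act = (pre >= 0).astype(float)
    grad_a = np.maximum(pre, 0) / np.sqrt(m)       # df/da_j = relu(w_j> x) / sqrt(m)
    # <df/dW(x_i), df/dW(x_j)> = sum_k a_k^2 I(w_k>x_i>=0) I(w_k>x_j>=0) x_i>x_j / m
    H_w = ((act * a ** 2) @ act.T) * (X @ X.T) / m
    H_a = grad_a @ grad_a.T                        # <df/da(x_i), df/da(x_j)>
    return H_w + H_a

rng = np.random.default_rng(0)
n, d, m = 4, 3, 10000
X = rng.standard_normal((n, d))
W, a = rng.standard_normal((m, d)), rng.standard_normal(m)
print(empirical_ntk(X, W, a))                      # approaches the NTK of Eqn. 8 as m grows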
A line of works shows that for sufficiently wide networks, H(t) stays almost constant during training, i.e., H(t) = H(0) in the limit (Arora et al., 2019a;b; Allen-Zhu et al., 2019a; Du et al., 2019c;a; Li & Liang, 2018; Jacot et al., 2018). Suppose network parameters are randomly initialized with certain scaling; then as the network width goes to infinity, H(0) converges to a fixed matrix, the neural tangent kernel (NTK) (Jacot et al., 2018):

NTK(x, x′) = E_{θ∼W} ⟨ ∂f(θ, x)/∂θ , ∂f(θ, x′)/∂θ ⟩,   (8)

where W is Gaussian. Therefore, the learning dynamics of sufficiently wide neural networks in this regime are equivalent to those of kernel gradient descent with respect to the NTK. This implies that the function learned by a neural network at convergence on any specific training set, denoted by fNTK(x), can be precisely characterized, and is equivalent to the following kernel regression solution
fNTK(x) = (NTK(x, x1), ..., NTK(x, xn)) · NTKtrain^{-1} Y,   (9)

where NTKtrain is the n × n kernel for the training data, NTK(x, xi) is the kernel value between test data x and training data xi, and Y is the training labels.
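Eqn. 9 is ordinary kernel regression and can be written generically, with the NTK plugged in as the kernel. The sketch below (ours) uses a placeholder dot-product kernel so it runs as-is; the exact two-layer ReLU NTK formula from Jacot et al. (2018), or the GNTK of Du et al. (2019b), can be substituted for it.

import numpy as np

def ntk_predict(kernel, X_train, y_train, X_test, ridge=1e-8):
    # Kernel regression prediction of Eqn. 9: f(x) = k(x, X_train) K_train^{-1} y.
    # kernel: callable taking two (n, d) arrays and returning their kernel matrix.
    # A tiny ridge term is added only for numerical stability.
    K_train = kernel(X_train, X_train)
    K_test = kernel(X_test, X_train)
    alpha = np.linalg.solve(K_train + ridge * np.eye(len(X_train)), y_train)
    return K_test @ alpha

dot_kernel = lambda A, B: A @ B.T                 # placeholder kernel, not the NTK
rng = np.random.default_rng(0)
X_tr, X_te = rng.standard_normal((3, 3)), rng.standard_normal((2, 3))
y_tr = X_tr @ np.array([1.0, -2.0, 0.5])          # a linear target
print(ntk_predict(dot_kernel, X_tr, y_tr, X_te))  # matches X_te @ [1, -2, 0.5]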
We can in fact exactly calculate the neural tangent kernel matrix for certain architectures and activation functions. The exact formula of NTK with ReLU activation has been derived for feedforward neural networks (Jacot et al., 2018), convolutional neural networks (Arora et al., 2019b), and Graph Neural Networks (Du et al., 2019b).
Our theory builds upon this equivalence of network learning and kernel regression to more precisely characterize the function learned by a sufficiently-wide neural network given any specific training set. In particular, the difference between the learned function and true function over the domain of X determines the extrapolation error.
However, in general it is non-trivial to compute or analyze the functional form of what a neural network learns using Eqn. 9, because the kernel regression solution using neural tangent kernel only gives point-wise evaluation. Thus, we instead analyze the function learned by a network in the NTK’s induced feature space, because representations in the feature space would give a functional form.
Lemma 2 makes this connection more precise: the solution to the kernel regression using neural tangent kernel, which also equals over-parameterized network learning, is equivalent to a min-norm solution among functions in the NTK’s induced feature space that fits all training data. Here the min-norm refers to the RKHS norm. Lemma 2. Let φ(x) be a feature map induced by a neural tangent kernel, for any x ∈ Rd. The solution to kernel regression Eqn. 9 is equivalent to fNTK(x) = φ(x)>βNTK, where βNTK is
min β ‖β‖2
s.t. φ(xi)>β = yi, for i = 1, ..., n.
We prove Lemma 2 in Appendix B.6. To analyze the learned functions as the min-norm solution in feature space, we also need the explicit formula of an induced feature map of the corresponding neural tangent kernel. The following lemma gives a NTK feature space for two-layer MLPs with ReLU activation. It follows easily from the kernel formula described in Jacot et al. (2018); Arora et al. (2019b); Bietti & Mairal (2019). Lemma 3. An infinite-dimensional feature map φ(x) induced by the neural tangent kernel of a two-layer multi-layer perceptron with ReLU activation function is
φ(x) = c ( x · I(w(k)>x ≥ 0), w(k)>x · I(w(k)>x ≥ 0), ... ),   (10)
where w(k) ∼ N (0, I), with k going to infinity. c is a constant, and I is the indicator function.
We prove Lemma 3 in Appendix B.7. The feature maps for other architectures, e.g., Graph Neural Networks (GNNs) can be derived similarly. We analyze the Graph Neural Tangent Kernel (GNTK) for a simple GNN architecture in Theorem 3.
We then use Lemma 2 and 3 to characterize the properties of functions learned by an overparameterized neural network. We precisely characterize the neural networks’ learned functions in the NTK regime via solving the constrained optimization problem corresponding to the min-norm function in NTK feature space with the constraint of fitting the training data.
However, there still remain many technical challenges. For example, provable extrapolation (exact or asymptotic) is often not achieved under most training data distributions. Understanding the desirable conditions requires significant insight into the geometric properties of the training data distribution and how they interact with the solution learned by neural networks. Our insights and refined analysis show that in R^d we need to consider the directions of the training data. In graphs, we need to consider, in addition, the graph structure of the training data. We refer readers to the detailed proofs for the intuition behind the data conditions. Moreover, since the NTK corresponds to infinitely wide neural networks, the feature space is infinite-dimensional. The analysis of infinite-dimensional spaces poses non-trivial technical challenges too.
Since different theorems have their respective challenges and insights/techniques, we refer the interested readers to the respective proofs for details. In Lemma 1 (proof in Appendix B.2), Theorem 2 (proof in Appendix B.3), and Theorem 1 (proof in Appendix B.1) we analyze over-parameterized MLPs. The proof of Corollary 1 is in Appendix B.4. In Theorem 3 we analyze Graph Neural Networks (proof in Appendix B.5).
B PROOFS
B.1 PROOF OF THEOREM 1
To show neural network outputs f(x) converge to a linear function along all directions v, we will analyze the function learned by a neural network on the training set {(xi, yi)}ni=1, by studying the functional representation in the network’s neural tangent kernel RKHS space.
Recall from Section A that in the NTK regime, i.e., networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of the neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel.
For any x ∈ R^d, the network output is given by

f(x) = ( ⟨φ(x), φ(x1)⟩, ..., ⟨φ(x), φ(xn)⟩ ) · NTKtrain^{-1} Y,
where NTKtrain is the n× n kernel for training data, 〈 φ(x), φ(xi) 〉 is the kernel value between test data x and training data xi, and Y is training labels. By Lemma 2, the kernel regression solution is also equivalent to the min-norm solution in the NTK RKHS space that fits all training data
f(x) = φ(x)>βNTK, (11) where the representation coefficient βNTK is
min β ‖β‖2
s.t. φ(xi)>β = yi, for i = 1, ..., n.
The feature map φ(x) for a two-layer MLP with ReLU activation is given by Lemma 3:

φ(x) = c′ ( x · I(w(k)>x ≥ 0), w(k)>x · I(w(k)>x ≥ 0), ... ),   (12)
where w(k) ∼ N(0, I), with k going to infinity. c′ is a constant, and I is the indicator function. Without loss of generality, we assume the bias term to be 1. For simplicity of notation, we denote each data point x with the bias term appended by x̂ = [x|1] (Bietti & Mairal, 2019), and assume the constant term is 1. Given any direction v on the unit sphere, the network outputs for out-of-distribution data x0 = tv and x = x0 + hv = (1 + λ)x0, where we introduce the notation x and λ for convenience, are given by Eqn. 11 and Eqn. 12:
f(x̂0) = βNTK> ( x̂0 · I(w(k)>x̂0 ≥ 0), w(k)>x̂0 · I(w(k)>x̂0 ≥ 0), ... ),

f(x̂) = βNTK> ( x̂ · I(w(k)>x̂ ≥ 0), w(k)>x̂ · I(w(k)>x̂ ≥ 0), ... ),
where we have x̂0 = [x0|1] and x̂ = [(1 + λ)x0|1]. It follows that f(x̂)− f(x̂0) = β>NTK ( x̂ · I ( w(k) > x̂ ≥ 0 ) − x̂0 · I ( w(k) > x̂0 ≥ 0 ) , (13)
w(k) > x̂ · I ( w(k) > x̂ ≥ 0 ) −w(k) > x̂0 · I ( w(k) > x̂0 ≥ 0 ) , ... ) (14)
By re-arranging the terms, we get the following equivalent form of the entries:
x̂ · I(w^⊤ x̂ ≥ 0) − x̂_0 · I(w^⊤ x̂_0 ≥ 0)   (15)
= x̂ · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) + I(w^⊤ x̂_0 ≥ 0) ) − x̂_0 · I(w^⊤ x̂_0 ≥ 0)   (16)
= x̂ · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) ) + (x̂ − x̂_0) · I(w^⊤ x̂_0 ≥ 0)   (17)
= [x | 1] · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) ) + [hv | 0] · I(w^⊤ x̂_0 ≥ 0)   (18)
Similarly, we have
w^⊤ x̂ · I(w^⊤ x̂ ≥ 0) − w^⊤ x̂_0 · I(w^⊤ x̂_0 ≥ 0)   (19)
= w^⊤ x̂ · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) + I(w^⊤ x̂_0 ≥ 0) ) − w^⊤ x̂_0 · I(w^⊤ x̂_0 ≥ 0)   (20)
= w^⊤ x̂ · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) ) + w^⊤ (x̂ − x̂_0) · I(w^⊤ x̂_0 ≥ 0)   (21)
= w^⊤ [x | 1] · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) ) + w^⊤ [hv | 0] · I(w^⊤ x̂_0 ≥ 0)   (22)
Again, let us denote the part of β_NTK corresponding to each w by β_w. Moreover, let us denote the part corresponding to Eqn. 18 by β_w^1 and the part corresponding to Eqn. 22 by β_w^2. Then we have
( f(x̂) − f(x̂_0) ) / h   (23)
= ∫ (β_w^1)^⊤ [x/h | 1/h] · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) ) dP(w)   (24)
+ ∫ (β_w^1)^⊤ [v | 0] · I(w^⊤ x̂_0 ≥ 0) dP(w)   (25)
+ ∫ β_w^2 · w^⊤ [x/h | 1/h] · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) ) dP(w)   (26)
+ ∫ β_w^2 · w^⊤ [v | 0] · I(w^⊤ x̂_0 ≥ 0) dP(w)   (27)
Note that all β_w are finite constants that depend on the training data. Next, we show that as t → ∞, each of the terms above converges, with t = O(1/ε) sufficing for error ε, to a constant coefficient β_v that depends on the training data and the direction v. Let us first consider Eqn. 25. We have
∫ I(w^⊤ x̂_0 ≥ 0) dP(w) = ∫ I(w^⊤ [x_0 | 1] ≥ 0) dP(w)   (28)
= ∫ I(w^⊤ [x_0/t | 1/t] ≥ 0) dP(w)   (29)
→ ∫ I(w^⊤ [v | 0] ≥ 0) dP(w)  as t → ∞   (30)
Because the β_w^1 are finite constants, it follows that
∫ (β_w^1)^⊤ [v | 0] · I(w^⊤ x̂_0 ≥ 0) dP(w) → ∫ (β_w^1)^⊤ [v | 0] · I(w^⊤ [v | 0] ≥ 0) dP(w),   (31)
where the right hand side is a constant that depends on the training data and direction v. Next, we show the convergence rate for Eqn. 31. Given error ε > 0, because the (β_w^1)^⊤ [v | 0] are finite constants, we need to bound the following by C · ε for some constant C:
| ∫ I(w^⊤ x̂_0 ≥ 0) − I(w^⊤ [v | 0] ≥ 0) dP(w) |   (32)
= | ∫ I(w^⊤ [x_0 | 1] ≥ 0) − I(w^⊤ [x_0 | 0] ≥ 0) dP(w) |   (33)
Observe that the two terms in Eqn. 33 represent the volumes of half-balls that are orthogonal to the vectors [x_0 | 1] and [x_0 | 0]. Hence, Eqn. 33 is the volume of the non-overlapping part of the two half-balls, which is created by rotating an angle θ along the last coordinate. By symmetry, Eqn. 33 is linear in θ. Moreover, the angle θ = arctan(C/t) for some constant C. Hence, it follows that
| ∫ I(w^⊤ [x_0 | 1] ≥ 0) − I(w^⊤ [x_0 | 0] ≥ 0) dP(w) | = C_1 · arctan(C_2/t)   (34)
≤ C_1 · C_2/t   (35)
= O(1/t)   (36)
In the last inequality, we used the fact that arctan x < x for x > 0. Hence, O(1/t) < ε implies t = O(1/ε) as desired. Next, we consider Eqn. 24:
∫ (β_w^1)^⊤ [x/h | 1/h] · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) ) dP(w)   (37)
Let us first analyze the convergence of the following:
| ∫ I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) dP(w) |   (38)
= | ∫ I(w^⊤ [(1 + λ)x_0 | 1] ≥ 0) − I(w^⊤ [x_0 | 1] ≥ 0) dP(w) |   (39)
= | ∫ I(w^⊤ [x_0 | 1/(1 + λ)] ≥ 0) − I(w^⊤ [x_0 | 1] ≥ 0) dP(w) | → 0   (40)
The convergence to 0 follows from Eqn. 34. Now we consider the convergence rate. The angle θ here is at most (1 − 1/(1 + λ)) times that in Eqn. 34. Hence, the rate is as follows:
( 1 − 1/(1 + λ) ) · O(1/t) = ( λ/(1 + λ) ) · O(1/t) = ( (h/t)/(1 + h/t) ) · O(1/t) = O( h / ((h + t) t) )   (41)
Now we get back to Eqn. 24, which simplifies as follows:
∫ (β_w^1)^⊤ [ v + tv/h | 1/h ] · ( I(w^⊤ x̂ ≥ 0) − I(w^⊤ x̂_0 ≥ 0) ) dP(w)   (42)
We compare the rate of growth of the left hand side and the rate of decrease of the right hand side (the indicators):
(t/h) · h/((h + t)t) = 1/(h + t) → 0  as t → ∞   (43)
(1/h) · h/((h + t)t) = 1/((h + t)t) → 0  as t → ∞   (44)
Hence, the indicators decrease faster, and it follows that Eqn. 24 converges to 0 at rate O(1/t). Moreover, we can bound w with standard concentration techniques. The proofs for Eqn. 26 and Eqn. 27 then follow similarly. This completes the proof.
B.2 PROOF OF LEMMA 1
Overview of proof. To prove exact extrapolation given the conditions on training data, we analyze the function learned by the neural network in a functional form. The network’s learned function can be precisely characterized by a solution in the network’s neural tangent kernel feature space which has a minimum RKHS norm among functions that can fit all training data, i.e., it corresponds to the optimum of a constrained optimization problem. We show that the global optimum of this constrained optimization problem, given the conditions on training data, is precisely the same function as the underlying true function.
Setup and preparation. Let X = {x_1, ..., x_n} and Y = {y_1, ..., y_n} denote the training set input features and their labels. Let β_g ∈ R^d denote the true parameters/weights of the underlying linear function g, i.e.,
g(x) = β_g^⊤ x  for all x ∈ R^d.
Recall from Section A that in the NTK regime, where networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of a neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel. Moreover, Lemma 2 tells us that this kernel regression solution can be expressed in functional form in the neural tangent kernel’s feature space. That is, the function learned by the neural network (in the NTK regime) can be precisely characterized as
f(x) = φ(x)^⊤ β_NTK,
where the representation coefficient β_NTK is
min_β ‖β‖_2   (45)
s.t.  φ(x_i)^⊤ β = y_i,  for i = 1, ..., n.   (46)
An infinite-dimensional feature map φ(x) for a two-layer ReLU network is described in Lemma 3:
φ(x) = c′ ( x · I(w^(k)⊤ x ≥ 0),  w^(k)⊤ x · I(w^(k)⊤ x ≥ 0),  ... ),
where w^(k) ∼ N(0, I) with k going to infinity, c′ is a constant, and I is the indicator function. That is, there are infinitely many directions w with Gaussian density, and each direction comes with two features. Without loss of generality, we can assume the scaling constant to be 1.
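To make this feature map concrete, the following minimal numpy sketch (illustrative only, not the paper's code) builds a finite-k approximation of it by sampling k Gaussian directions; the number of directions, the test point, and the unit scaling constant are assumptions for the example.

```python
import numpy as np

def ntk_features(x, W):
    """Finite approximation of the Lemma 3 feature map with k sampled directions.

    W has shape (k, d), rows w ~ N(0, I). For each direction w the map emits
    the feature groups  x * I(w^T x >= 0)  and  (w^T x) * I(w^T x >= 0).
    """
    act = (W @ x >= 0).astype(float)               # indicators I(w^T x >= 0), shape (k,)
    feat_x = (act[:, None] * x[None, :]).ravel()   # x * I(...), shape (k*d,)
    feat_wx = (W @ x) * act                        # (w^T x) * I(...), shape (k,)
    return np.concatenate([feat_x, feat_wx])

d, k = 3, 1000
W = np.random.randn(k, d)                          # sampled directions w^(k)
phi = ntk_features(np.array([0.5, -1.0, 2.0]), W)
print(phi.shape)                                   # (k*d + k,)
```

As k grows, inner products of these finite feature vectors (suitably scaled) approximate the corresponding infinite-width kernel values.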
Constrained optimization in NTK feature space. The representation, or weight, of the neural network’s learned function in the neural tangent kernel feature space, β_NTK, consists of a weight vector for each feature x · I(w^(k)⊤ x ≥ 0) ∈ R^d and a weight for each feature w^(k)⊤ x · I(w^(k)⊤ x ≥ 0) ∈ R. For simplicity of notation, we will use w to refer to a particular w, without considering the index (k), which does not matter for our purposes. For any w ∈ R^d, we denote by β̂_w = (β̂_w^(1), ..., β̂_w^(d)) ∈ R^d the weight vector corresponding to x · I(w^⊤ x ≥ 0), and denote by β̂′_w ∈ R the weight for w^⊤ x · I(w^⊤ x ≥ 0).
Observe that for any w ∼ N (0, I) ∈ Rd, any other vectors in the same direction will activate the same set of xi ∈ Rd. That is, if w>xi ≥ 0 for any w ∈ Rd, then (k ·w)>xi ≥ 0 for any k > 0. Hence, we can reload our notation to combine the effect of weights for w’s in the same direction. This enables simpler notations and allows us to change the distribution of w in NTK features from Gaussian distribution to uniform distribution on the unit sphere.
More precisely, we reload our notation by using β_w and β′_w to denote the combined effect of all weights (β̂_kw^(1), ..., β̂_kw^(d)) ∈ R^d and β̂′_kw ∈ R for all kw with k > 0 in the same direction as w. That is, for each w ∼ Uni(unit sphere) ∈ R^d, we define β_w^(j) as the total effect of weights in the same direction
β_w^(j) = ∫ β̂_u^(j) · I( w^⊤ u / (‖w‖ · ‖u‖) = 1 ) dP(u),  for j ∈ [d],   (47)
where u ∼ N(0, I). Note that to ensure β_w is a well-defined number, we can work with the polar representation and integrate with respect to an angle; then β_w is well-defined. For simplicity of exposition, we use the plain integral notation. Similarly, reloading notation, we define
β′_w = ∫ β̂′_u · I( w^⊤ u / (‖w‖ · ‖u‖) = 1 ) · ( ‖u‖ / ‖w‖ ) dP(u)   (48)
Here, Eqn. 48 has an extra term ‖u‖/‖w‖ compared to Eqn. 47 because the NTK feature that Eqn. 48 corresponds to, w^⊤ x · I(w^⊤ x ≥ 0), has an extra w^⊤ term, so we need to take the scaling into account. This abstraction enables us to make claims on the high-level parameters β_w and β′_w only, which we will show to be sufficient to determine the learned function.
Then we can formulate the constrained optimization problem whose solution gives a functional form of the neural network’s learned function. We rewrite the min-norm solution in Eqn. 45 as
min_β ∫ (β_w^(1))^2 + (β_w^(2))^2 + ... + (β_w^(d))^2 + (β′_w)^2 dP(w)   (49)
s.t.  ∫_{w^⊤ x_i ≥ 0} β_w^⊤ x_i + β′_w · w^⊤ x_i dP(w) = β_g^⊤ x_i  ∀i ∈ [n],   (50)
where the density of w is now uniform on the unit sphere of R^d. Observe that since w is from a uniform distribution, the probability density function P(w) is a constant. This means every x_i is activated by half of the w on the unit sphere, which implies we can write the right hand side of Eqn. 50 in the form of the left hand side, i.e., in integral form. This allows us to further simplify Eqn. 50 as
∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0  ∀i ∈ [n],   (51)
where Eqn. 51 follows from the following steps of simplification:
∫_{w^⊤ x_i ≥ 0} β_w^(1) x_i^(1) + ... + β_w^(d) x_i^(d) + β′_w · w^⊤ x_i dP(w) = β_g^(1) x_i^(1) + ... + β_g^(d) x_i^(d)  ∀i ∈ [n],
⟺ ∫_{w^⊤ x_i ≥ 0} β_w^(1) x_i^(1) + ... + β_w^(d) x_i^(d) + β′_w · w^⊤ x_i dP(w)
  = ( 1 / ∫_{w^⊤ x_i ≥ 0} dP(w) ) · ∫_{w^⊤ x_i ≥ 0} dP(w) · ( β_g^(1) x_i^(1) + ... + β_g^(d) x_i^(d) )  ∀i ∈ [n],
⟺ ∫_{w^⊤ x_i ≥ 0} β_w^(1) x_i^(1) + ... + β_w^(d) x_i^(d) + β′_w · w^⊤ x_i dP(w)
  = 2 · ∫_{w^⊤ x_i ≥ 0} β_g^(1) x_i^(1) + ... + β_g^(d) x_i^(d) dP(w)  ∀i ∈ [n],
⟺ ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0  ∀i ∈ [n].
Claim 1. Without loss of generality, assume the scaling factor c in the NTK feature map φ(x) is 1. Then the global optimum of the constrained optimization problem Eqn. 49 subject to Eqn. 51, i.e.,
min_β ∫ (β_w^(1))^2 + (β_w^(2))^2 + ... + (β_w^(d))^2 + (β′_w)^2 dP(w)   (52)
s.t.  ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0  ∀i ∈ [n],   (53)
satisfies β_w + β′_w · w = 2β_g for all w.
This claim implies the exact extrapolation we want to prove, i.e., f_NTK(x) = g(x). This is because, if our claim holds, then for any x ∈ R^d
f_NTK(x) = ∫_{w^⊤ x ≥ 0} β_w^⊤ x + β′_w · w^⊤ x dP(w)
= ∫_{w^⊤ x ≥ 0} 2 · β_g^⊤ x dP(w)
= ∫_{w^⊤ x ≥ 0} dP(w) · 2 β_g^⊤ x
= (1/2) · 2 β_g^⊤ x = g(x)
Thus, it remains to prove Claim 1. To compute the optimum of the constrained optimization problem Eqn. 52, we consider the Lagrange multipliers. It is clear that the objective Eqn. 52 is convex. Moreover, the constraint Eqn. 53 is affine. Hence, by KKT, a solution that satisfies the Lagrange condition will be the global optimum. We form the Lagrangian as
L(β, λ) = ∫ (β_w^(1))^2 + (β_w^(2))^2 + ... + (β_w^(d))^2 + (β′_w)^2 dP(w)   (54)
− ∑_{i=1}^n λ_i · ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w)   (55)
Setting the partial derivative of L(β, λ) with respect to each variable to zero gives
∂L/∂β_w^(k) = 2 β_w^(k) P(w) + ∑_{i=1}^n λ_i · x_i^(k) · I(w^⊤ x_i ≥ 0) = 0   (56)
∂L/∂β′_w = 2 β′_w P(w) + ∑_{i=1}^n λ_i · w^⊤ x_i · I(w^⊤ x_i ≥ 0) = 0   (57)
∂L/∂λ_i = ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0   (58)
It is clear that the solution in Claim 1 immediately satisfies Eqn. 58. Hence, it remains to show that there exists a set of λ_i for i ∈ [n] that satisfies Eqn. 56 and Eqn. 57. We can simplify Eqn. 56 as
β_w^(k) = c · ∑_{i=1}^n λ_i · x_i^(k) · I(w^⊤ x_i ≥ 0),   (59)
where c is a constant. Similarly, we can simplify Eqn. 57 as
β′_w = c · ∑_{i=1}^n λ_i · w^⊤ x_i · I(w^⊤ x_i ≥ 0)   (60)
Observe that combining Eqn. 59 and Eqn. 60 implies that the constraint Eqn. 60 can be further simplified as
β′_w = w^⊤ β_w   (61)
It remains to show that, given the condition on the training data, there exists a set of λ_i such that Eqn. 59 and Eqn. 61 are satisfied.
Global optimum via the geometry of training data. Recall that we assume our training data {(x_i, y_i)}_{i=1}^n satisfy: for any w ∈ R^d, there exist d linearly independent {x_i^w}_{i=1}^d ⊂ X, where X = {x_i}_{i=1}^n, so that w^⊤ x_i^w ≥ 0 and −x_i^w ∈ X for i = 1..d, e.g., an orthogonal basis of R^d and their opposite vectors. We will show that under this data regime, we have
(a) For any particular w, there indeed exists a set of λ_i that can satisfy the constraints Eqn. 59 and Eqn. 61 for this particular w.
(b) For any w_1 and w_2 that activate the exact same set of {x_i}, the same set of λ_i can satisfy the constraints Eqn. 59 and Eqn. 61 for both w_1 and w_2.
(c) Whenever we rotate a w_1 to a w_2 so that the set of x_i being activated changes, we can still find λ_i that satisfy the constraints for both w_1 and w_2.
Combining (a), (b) and (c) implies there exists a set of λ that satisfy the constraints for all w. Hence, it remains to show these three claims.
We first prove Claim (a). For each w, we must find a set of λ_i so that the following hold:
β_w^(k) = c · ∑_{i=1}^n λ_i · x_i^(k) · I(w^⊤ x_i ≥ 0),   β′_w = w^⊤ β_w,   β_w + β′_w · w = 2β_g
Here, β_g and w are fixed, and w is a vector on the unit sphere. It is easy to see that β_w is then determined by β_g and w, and there indeed exists a solution (solving a consistent linear system). Hence we are left with a linear system of d linear equations
β_w^(k) = c · ∑_{i=1}^n λ_i · x_i^(k) · I(w^⊤ x_i ≥ 0)  ∀k ∈ [d]
to solve, with the free variables being the λ_i for which w activates x_i, i.e., w^⊤ x_i ≥ 0. Because the training data {(x_i, y_i)}_{i=1}^n satisfy that for any w there exist at least d linearly independent x_i that activate w, for any w we must have at least d free variables. It follows that there must exist solutions λ_i to the linear system. This proves Claim (a).
Next, we show (b): for any w_1 and w_2 that activate the exact same set of {x_i}, the same set of λ_i can satisfy the constraints Eqn. 59 and Eqn. 61 for both w_1 and w_2. Because w_1 and w_2 are activated by the same set of x_i, this implies
β_{w_1} = c · ∑_{i=1}^n λ_i · x_i · I(w_1^⊤ x_i ≥ 0) = c · ∑_{i=1}^n λ_i · x_i · I(w_2^⊤ x_i ≥ 0) = β_{w_2}
Since the λ_i already satisfy constraint Eqn. 59 for w_1, they also satisfy it for w_2. Thus, it remains to show that β_{w_1} + β′_{w_1} · w_1 = β_{w_2} + β′_{w_2} · w_2 assuming β_{w_1} = β_{w_2}, β′_{w_1} = w_1^⊤ β_{w_1}, and β′_{w_2} = w_2^⊤ β_{w_2}. This indeed holds because
β_{w_1} + β′_{w_1} · w_1 = β_{w_2} + β′_{w_2} · w_2
⟺ β′_{w_1} · w_1^⊤ = β′_{w_2} · w_2^⊤
⟺ w_1^⊤ β_{w_1} w_1^⊤ = w_2^⊤ β_{w_2} w_2^⊤
⟺ w_1^⊤ w_1 β_{w_1}^⊤ = w_2^⊤ w_2 β_{w_2}^⊤
⟺ 1 · β_{w_1}^⊤ = 1 · β_{w_2}^⊤
⟺ β_{w_1} = β_{w_2}
Here, we used the fact that w_1 and w_2 are vectors on the unit sphere. This proves Claim (b).
Finally, we show (c): whenever we rotate a w_1 to a w_2 so that the set of x_i being activated changes, we can still find λ_i that satisfy the constraints for both w_1 and w_2. Suppose we rotate w_1 to w_2 so that w_2 loses activation with x_1, x_2, ..., x_p, which are in the set of linearly independent x_i’s activated by w_1 and whose opposite vectors −x_i are also in the training set (without loss of generality). Then w_2 must now also get activated by −x_1, −x_2, ..., −x_p. This is because if w_2^⊤ x_i < 0, we must have w_2^⊤ (−x_i) > 0. Recall that in the proof of Claim (a), we only needed, as the free variables for the linear system of d equations, the λ_i of the linearly independent x_i (and of their opposites). Hence, we can set λ to 0 for the other x_i while still satisfying the linear system. Now, suppose there exist λ_i that satisfy
β_{w_1}^(k) = c · ∑_{i=1}^d λ_i · x_i^(k)
where the x_i are the linearly independent vectors that activate w_1 and whose opposite vectors are in the training set, which we have proved in (a). Then we can satisfy the constraint for β_{w_2} below
β_{w_2}^(k) = c · ( ∑_{i=1}^p λ̂_i · (−x_i)^(k) + ∑_{i=p+1}^d λ_i · x_i^(k) )
by setting λ̂_i = −λ_i for i = 1...p. Indeed, this gives
β_{w_2}^(k) = c · ( ∑_{i=1}^p (−λ_i) · (−x_i)^(k) + ∑_{i=p+1}^d λ_i · x_i^(k) )
= c · ∑_{i=1}^d λ_i · x_i^(k)
Thus, we can also find λ_i that satisfy the constraint for β_{w_2}. Here, we do not consider the case where w_2 is parallel to some x_i, because such w_2 have measure zero. Note that we can apply this argument iteratively, because flipping the sign always works and will not create any inconsistency.
Moreover, we can show that the constraint for β′_{w_2} is satisfied by a similar argument as in the proof of Claim (b). This follows from the fact that our construction makes β_{w_1} = β_{w_2}. Then we can follow the same argument as in (b) to show that β_{w_1} + β′_{w_1} · w_1 = β_{w_2} + β′_{w_2} · w_2. This completes the proof of Claim (c).
In summary, combining Claim (a), (b) and (c) gives that Claim 1 holds. That is, given our training data, the global optimum to the constrained optimization problem of finding the min-norm solution among functions that fit the training data satisfies βw+β′w ·w = 2βg . We also showed that this claim implies exact extrapolation, i.e., the network’s learned function f(x) is equal to the true underlying function g(x) for all x ∈ Rd. This completes the proof.
B.3 PROOF OF THEOREM 2
The proof of asymptotic convergence to extrapolation builds upon our proof of exact extrapolation, i.e., Lemma 1. The proof idea is that if the training data distribution has support in all directions, then as the number of samples n → ∞ the training set asymptotically converges to an (imaginary) training set that satisfies the condition for exact extrapolation. Since the neural tangent kernels of nearby training sets are close, the predictions, i.e., the learned function, converge to a function that achieves perfect extrapolation, that is, the true underlying function.
Asymptotic convergence of data sets. We first show the training data converge to a data set that satisfies the exact extrapolation condition in Lemma 1. Suppose the training data {x_i}_{i=1}^n are sampled from a distribution whose support contains a connected set S* that intersects all directions, i.e., for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S*. Let us denote by S the set of datasets that satisfy the condition in Lemma 1. In fact, we will use the relaxed condition from the proof of Lemma 1 (Lemma 1 in the main text uses a stricter condition for simplicity of exposition). Given a general dataset X and a dataset S ∈ S of the same size n, let σ(X, S) denote a matching of their data points, i.e., σ outputs a sequence of pairs
σ(X, S)_i = (x_i, s_i) for i ∈ [n]  s.t.  X = {x_i}_{i=1}^n and S = {s_i}_{i=1}^n.
Let ℓ : R^d × R^d → R be the l2 distance between a pair of points. We then define the distance between the datasets, d(X, S), as the minimum sum of l2 distances of their data points over all possible matchings:
d(X, S) = min_σ ∑_{i=1}^n ℓ( σ(X, S)_i )  if |X| = |S| = n,  and  d(X, S) = ∞  if |X| ≠ |S|.
We can then define a “closest distance to perfect dataset” function D* : X → R, which maps a dataset X to the minimum distance of X to any dataset in S:
D*(X) = min_{S ∈ S} d(X, S)
It is easy to see that for any dataset X = {x_i}_{i=1}^n, D*(X) can be bounded by the minimum of the closest distances to a perfect dataset over sub-datasets of X of size 2d:
D*( {x_i}_{i=1}^n ) ≤ min_{k=1}^{⌊n/2d⌋} D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} )   (62)
This is because for any S ∈ S and any S ⊆ S′, we must have S′ ∈ S, since a dataset satisfies the exact extrapolation condition as long as it contains certain key points. Thus, adding more data does not hurt: for any X_1 ⊆ X_2, we always have
D*(X_2) ≤ D*(X_1)
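The matching-based distance d(X, S) above is a standard minimum-cost assignment, so it can be evaluated directly; the following sketch (illustrative only, not the paper's code, and assuming scipy is available) computes d(X, S) with a linear assignment solver and evaluates it against one hand-built "perfect" set, an orthogonal basis plus its negation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def dataset_distance(X, S):
    """d(X, S): minimum sum of pairwise l2 distances over all matchings."""
    if len(X) != len(S):
        return np.inf
    cost = np.linalg.norm(X[:, None, :] - S[None, :, :], axis=-1)  # pairwise l2 distances
    rows, cols = linear_sum_assignment(cost)                       # optimal matching sigma
    return cost[rows, cols].sum()

d = 2
S0 = np.concatenate([np.eye(d), -np.eye(d)])      # one element of S: basis and its negation
X = np.random.randn(2 * d, d)                     # a random size-2d dataset
print(dataset_distance(X, S0))                    # an upper bound on D*(X)
```

Computing the true D*(X) would require minimizing over all perfect datasets; the single comparison above only illustrates the quantity being bounded.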
Now let us denote by X_n a random dataset of size n where each x_i ∈ X_n is sampled from the training distribution. Recall that our training data {x_i}_{i=1}^n are sampled from a distribution whose support contains a connected set S* that intersects all directions, i.e., for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S*. It follows that for a random dataset X_2d of size 2d, the probability that D*(X_2d) > ε is less than 1 for any ε > 0. First, there must exist S_0 = {s_i}_{i=1}^{2d} ∈ S of size 2d, e.g., an orthogonal basis and their opposite vectors. Observe that if we scale any s_i by k > 0, the resulting dataset is still in S by the definition of S. We denote the set of datasets obtained by scaling elements of S_0 by S_0 as well. It follows that
P( D*(X_2d) > ε ) = P( min_{S ∈ S} d(X_2d, S) > ε )
≤ P( min_{S ∈ S_0} d(X_2d, S) > ε )
= P( min_{S ∈ S_0} min_σ ∑_{i=1}^{2d} ℓ( σ(X_2d, S)_i ) > ε )
= 1 − P( min_{S ∈ S_0} min_σ ∑_{i=1}^{2d} ℓ( σ(X_2d, S)_i ) ≤ ε )
≤ 1 − P( min_{S ∈ S_0} min_σ max_{i ∈ [2d]} ℓ( σ(X_2d, S)_i ) ≤ ε/(2d) )
≤ δ < 1,
where we denote the bound on P(D*(X_2d) > ε) by δ < 1, and the last step follows from
P( min_{S ∈ S_0} min_σ max_{i ∈ [2d]} ℓ( σ(X_2d, S)_i ) ≤ ε/(2d) ) > 0,
which further follows from the fact that for any s_i ∈ S_0, by the assumption on the training distribution, we can always find k > 0 so that k s_i ∈ S*, a connected set in the support of the training distribution. By the connectivity of the support S*, k s_i cannot be an isolated point in S*, so for any ε > 0, we must have
∫_{‖x − k s_i‖ ≤ ε, x ∈ S*} f_X(x) dx > 0
Hence, we can now apply Eqn. 62 to bound D*(X_n). Given any ε > 0, we have
P( D*(X_n) > ε ) = 1 − P( D*(X_n) ≤ ε )
≤ 1 − P( min_{k=1}^{⌊n/2d⌋} D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) ≤ ε )
≤ 1 − ( 1 − ∏_{k=1}^{⌊n/2d⌋} P( D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) > ε ) )
= ∏_{k=1}^{⌊n/2d⌋} P( D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) > ε ) ≤ δ^{⌊n/2d⌋}
Here δ < 1. This implies D*(X_n) →p 0, i.e.,
lim_{n→∞} P( D*(X_n) > ε ) = 0  ∀ε > 0   (63)
Eqn. 63 says as the number of training samples n→∞, our training set will converge in probability to a dataset that satisfies the requirement for exact extrapolation.
Asymptotic convergence of predictions. Let NTK(x, x′) : R^d × R^d → R denote the neural tangent kernel for a two-layer ReLU MLP. It is easy to see that if x → x*, then NTK(x, ·) → NTK(x*, ·) (Arora et al. (2019b)). Let NTK_train denote the n × n kernel matrix for the training data. We have shown that our training set converges to a perfect data set that satisfies the conditions for exact extrapolation. Moreover, note that our training set will only have a finite number (not increasing with n) of x_i that are not precisely the same as those in a perfect dataset. This is because a perfect data set only contains a finite number of key points, and the other points can be replaced by any other points while still forming a perfect data set. Thus, we have NTK_train → N*, where N* is the n × n NTK matrix for some perfect data set.
Because the neural tangent kernel is positive definite, we have NTK_train^{-1} → N*^{-1}. Recall that for any x ∈ R^d, the prediction of NTK is
fNTK(x) = (NTK(x, x_1), ..., NTK(x, x_n)) · NTK_train^{-1} Y | 1. What is the focus of the paper regarding MLPs and GNNs?
2. What are the strengths of the paper, particularly in its presentation and contribution to theoretical analysis?
3. What are some interesting results shown in the paper regarding MLPs and GNNs?
4. How does the reviewer suggest improving extrapolation in GNNs?
5. What are some limitations of the study scope that the reviewer mentions? | Review | Review
This paper analyzes the extrapolation ability of MLPs and GNNs. In contrast to existing theoretical works that focus on the generalizability and capacity of these models, this paper emphasizes the behavior of the training algorithm, gradient descent. It uses the analogy to kernel regression via the neural tangent kernel to study the bias induced by the gradient descent algorithm. The presentation of this paper is clear and well-organized, with the most significant result shown in the first section, raising the readers' interest rather than leaving them behind a massive amount of proofs. The contribution of this paper is significant as well, since it draws researchers' attention to theoretical analysis of the bias induced by the implementation of the algorithms, as compared to theoretical analysis of the model structure itself. Model extrapolation is also closely connected to topics such as meta-learning, multi-task learning, domain adaptation, and semi-supervised learning, since a model's extrapolation ability will limit its performance when applied to other tasks.
Pros:
This paper shows some interesting results: for instance, an MLP with ReLU trained by GD will converge to linear functions along any direction from the origin outside the support of the training data. This coincides with the idea that MLPs are piecewise linear in different regions. The proof is complicated, though, and requires the analogy to kernel regression as its basis. This result seems to suggest that what an MLP learns on the data manifold supported by the training data is also locally linear, and that without the support of training data, the prediction simply follows the inertia of linearity. It is curious to see whether this is due to the piecewise linearity of the ReLU function; perhaps we would see better nonlinear extrapolation for MLPs using tanh and other sigmoid functions.
The comparison between GNNs and dynamic programming algorithms is very intuitive and inspiring. It suggests that max/min aggregation, as opposed to the more commonly used sum aggregation in GNNs, is more suitable for extrapolation, and the similarity between max/min-aggregation GNNs and DP is also very convincing. In general, this paper builds up good intuition before diving into the proofs, which is well appreciated.
The suggestion to improve extrapolation by putting the nonlinearity into the architecture of the GNN or into the input representation is useful. For instance, replacing sum aggregation with min/max aggregation helps achieve good extrapolation. It also explains why pre-trained embeddings such as BERT can be used in other tasks and still extrapolate well.
Suggestions:
Limitations of the study scope. This paper only discusses results for neural networks using ReLU and GD. Although GD is widely used, ReLU as the activation function plays a critical role in the study of extrapolation. It is necessary to provide analysis on the use of other common activation functions to understand whether the extrapolation ability is expanded.
It would also be interesting to see more connections with domain adaptation and semi-supervised learning.
ICLR | Title
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Abstract
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), do not extrapolate well in certain simple tasks, Graph Neural Networks (GNNs) – structured networks with MLP modules – have shown some success in more complex tasks. Working towards a theoretical explanation, we identify conditions under which MLPs and GNNs extrapolate well. First, we quantify the observation that ReLU MLPs quickly converge to linear functions along any direction from the origin, which implies that ReLU MLPs do not extrapolate most nonlinear functions. But, they can provably learn a linear target function when the training distribution is sufficiently “diverse”. Second, in connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features. Our theoretical analysis builds on a connection of over-parameterized networks to the neural tangent kernel. Empirically, our theory holds across different training settings.
1 INTRODUCTION
Humans extrapolate well in many tasks. For example, we can apply arithmetics to arbitrarily large numbers. One may wonder whether a neural network can do the same and generalize to examples arbitrarily far from the training data (Lake et al., 2017). Curiously, previous works report mixed extrapolation results with neural networks. Early works demonstrate feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), fail to extrapolate well when learning simple polynomial functions (Barnard & Wessels, 1992; Haley & Soloway, 1992). However, recent works show Graph Neural Networks (GNNs) (Scarselli et al., 2009), a class of structured networks with MLP building blocks, can generalize to graphs much larger than training graphs in challenging algorithmic tasks, such as predicting the time evolution of physical systems (Battaglia et al., 2016), learning graph algorithms (Velickovic et al., 2020), and solving mathematical equations (Lample & Charton, 2020).
To explain this puzzle, we formally study how neural networks trained by gradient descent (GD) extrapolate, i.e., what they learn outside the support of training distribution. We say a neural network extrapolates well if it learns a task outside the training distribution. At first glance, it may seem that neural networks can behave arbitrarily outside the training distribution since they have high capacity (Zhang et al., 2017) and are universal approximators (Cybenko, 1989; Funahashi, 1989; Hornik et al., 1989; Kurkova, 1992). However, neural networks are constrained by gradient descent training (Hardt et al., 2016; Soudry et al., 2018). In our analysis, we explicitly consider such implicit bias through the analogy of the training dynamics of over-parameterized neural networks and kernel regression via the neural tangent kernel (NTK) (Jacot et al., 2018).
Starting with feedforward networks, the simplest neural networks and building blocks of more complex architectures such as GNNs, we establish that the predictions of over-parameterized MLPs with ReLU activation trained by GD converge to linear functions along any direction from the origin. We prove a convergence rate for two-layer networks and empirically observe that convergence often occurs close to the training data (Figure 1), which suggests ReLU MLPs cannot extrapolate well for most nonlinear tasks. We emphasize that our results do not follow from the fact that ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019; Hein et al., 2019). While having finitely many linear regions implies ReLU MLPs eventually become linear, it does not say whether MLPs will learn the correct target function close to the training distribution. In contrast, our results are non-asymptotic and quantify what kind of functions MLPs will learn close to the training distribution. Second, we identify a condition when MLPs extrapolate well: the task is linear and the geometry of the training distribution is sufficiently “diverse”. To our knowledge, our results are the first extrapolation results of this kind for feedforward neural networks.
We then relate our insights into feedforward neural networks to GNNs, to explain why GNNs extrapolate well in some algorithmic tasks. Prior works report successful extrapolation for tasks that can be solved by dynamic programming (DP) (Bellman, 1966), which has a computation structure aligned with GNNs (Xu et al., 2020). DP updates can often be decomposed into nonlinear and linear steps. Hence, we hypothesize that GNNs trained by GD can extrapolate well in a DP task, if we encode appropriate non-linearities in the architecture and input representation (Figure 2). Importantly, encoding non-linearities may be unnecessary for GNNs to interpolate, because the MLP modules can easily learn many nonlinear functions inside the training distribution (Cybenko, 1989; Hornik et al., 1989; Xu et al., 2020), but it is crucial for GNNs to extrapolate correctly. We prove this hypothesis for a simplified case using Graph NTK (Du et al., 2019b). Empirically, we validate the hypothesis on three DP tasks: max degree, shortest paths, and n-body problem. We show GNNs with appropriate architecture, input representation, and training distribution can predict well on graphs with unseen sizes, structures, edge weights, and node features. Our theory explains the empirical success in previous works and suggests their limitations: successful extrapolation relies on encoding task-specific non-linearities, which requires domain knowledge or extensive model search. From a broader standpoint, our insights go beyond GNNs and apply broadly to other neural networks.
To summarize, we study how neural networks extrapolate. First, ReLU MLPs trained by GD converge to linear functions along directions from the origin with a rate of O(1/t). Second, to explain why GNNs extrapolate well in some algorithmic tasks, we prove that ReLU MLPs can extrapolate well in linear tasks, leading to a hypothesis: a neural network can extrapolate well when appropriate nonlinearities are encoded into the architecture and features. We prove this hypothesis for a simplified case and provide empirical support for more general settings.
1.1 RELATED WORK
Early works show example tasks where MLPs do not extrapolate well, e.g. learning simple polynomials (Barnard & Wessels, 1992; Haley & Soloway, 1992). We instead show a general pattern of how ReLU MLPs extrapolate and identify conditions for MLPs to extrapolate well. More recent works study the implicit biases induced on MLPs by gradient descent, for both the NTK and mean field regimes (Bietti & Mairal, 2019; Chizat & Bach, 2018; Song et al., 2018). Related to our results, some works show MLP predictions converge to “simple” piecewise linear functions, e.g., with few linear regions (Hanin & Rolnick, 2019; Maennel et al., 2018; Savarese et al., 2019; Williams et al., 2019). Our work differs in that none of these works explicitly studies extrapolation, and some focus only on one-dimensional inputs. Recent works also show that in high-dimensional settings of the NTK regime, MLP is asymptotically at most a linear predictor in certain scaling limits (Ba et al., 2020; Ghorbani et al., 2019). We study a different setting (extrapolation), and our analysis is non-asymptotic in nature and does not rely on random matrix theory.
Prior works explore GNN extrapolation by testing on larger graphs (Battaglia et al., 2018; Santoro et al., 2018; Saxton et al., 2019; Velickovic et al., 2020). We are the first to theoretically study GNN extrapolation, and we complete the notion of extrapolation to include unseen features and structures.
2 PRELIMINARIES
We begin by introducing our setting. Let X be the domain of interest, e.g., vectors or graphs. The task is to learn an underlying function g : X → R with a training set {(xi, yi)}ni=1 ⊂ D, where yi = g(xi) and D is the support of training distribution. Previous works have extensively studied in-distribution generalization where the training and the test distributions are identical (Valiant, 1984; Vapnik, 2013); i.e., D = X . In contrast, extrapolation addresses predictions on a domain X that is larger than the support of the training distribution D. We will say a model extrapolates well if it has a small extrapolation error.
Definition 1. (Extrapolation error). Let f : X → R be a model trained on {(x_i, y_i)}_{i=1}^n ⊂ D with underlying function g : X → R. Let P be a distribution over X \ D and let ℓ : R × R → R be a loss function. We define the extrapolation error of f as E_{x∼P}[ ℓ(f(x), g(x)) ].
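As a concrete reading of Definition 1, the extrapolation error can be estimated by Monte Carlo sampling from P; the sketch below is illustrative only, with a toy model f, target g, sampling routine, and squared loss standing in for the actual setup.

```python
import numpy as np

def extrapolation_error(f, g, sample_P, n_samples=10000):
    """Monte Carlo estimate of E_{x ~ P}[ l(f(x), g(x)) ] with squared loss."""
    xs = sample_P(n_samples)                        # draws from P over X \ D
    return float(np.mean((f(xs) - g(xs)) ** 2))

# Toy example: target is linear, model matches it only inside [-1, 1]^2.
g = lambda x: x @ np.array([1.0, -1.0])
f = lambda x: np.clip(x, -1, 1) @ np.array([1.0, -1.0])
sample_P = lambda n: np.random.uniform(2, 5, size=(n, 2))   # out-of-distribution region
print(extrapolation_error(f, g, sample_P))
```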
We focus on neural networks trained by gradient descent (GD) or its variants with squared loss. We study two network architectures: feedforward and graph neural networks.
Graph Neural Networks. GNNs are structured networks operating on graphs with MLP modules (Battaglia et al., 2018; Xu et al., 2019). Let G = (V, E) be a graph. Each node u ∈ V has a feature vector x_u, and each edge (u, v) ∈ E has a feature vector w_(u,v). GNNs recursively compute node representations h_u^(k) at iteration k (Gilmer et al., 2017; Xu et al., 2018). Initially, h_u^(0) = x_u. For k = 1..K, GNNs update h_u^(k) by aggregating the neighbor representations. We can optionally compute a graph representation h_G by aggregating the final node representations. That is,
h_u^(k) = ∑_{v ∈ N(u)} MLP^(k)( h_u^(k−1), h_v^(k−1), w_(v,u) ),   h_G = MLP^(K+1)( ∑_{u ∈ G} h_u^(K) ).   (1)
The final output is the graph representation hG or final node representations h (K) u depending on the task. We refer to the neighbor aggregation step for h(k)u as aggregation and the pooling step in hG as readout. Previous works typically use sum-aggregation and sum-readout (Battaglia et al., 2018). Our results indicate why replacing them may help extrapolation (Section 4).
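To make Eqn. 1 concrete, the following minimal numpy sketch (illustrative only, not the experimental code) runs one sum-aggregation update and a sum readout on a made-up toy graph; the random two-layer `mlp` stands in for the MLP^(k) modules.

```python
import numpy as np

def mlp(z, W1, W2):
    """A small two-layer ReLU network standing in for an MLP^(k) module."""
    return np.maximum(z @ W1, 0) @ W2

def gnn_layer(h, edges, edge_feat, W1, W2):
    """One sum-aggregation step: h_u <- sum_{v in N(u)} MLP([h_u, h_v, w_(v,u)])."""
    new_h = np.zeros_like(h)
    for (v, u), w in zip(edges, edge_feat):
        msg = mlp(np.concatenate([h[u], h[v], w]), W1, W2)
        new_h[u] += msg                               # sum over neighbors v of u
    return new_h

# Toy graph: 3 nodes, directed edges (v, u) with scalar edge features.
h = np.random.randn(3, 4)
edges = [(0, 1), (1, 2), (2, 0)]
edge_feat = [np.array([1.0]), np.array([0.5]), np.array([2.0])]
W1 = np.random.randn(4 + 4 + 1, 8)
W2 = np.random.randn(8, 4)
h = gnn_layer(h, edges, edge_feat, W1, W2)
h_G = mlp(h.sum(axis=0), np.random.randn(4, 8), np.random.randn(8, 1))   # sum readout
print(h_G)
```

Replacing the two `sum` operations with `min` or `max` gives the alternative aggregation/readout choices discussed in Section 4.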
3 HOW FEEDFORWARD NEURAL NETWORKS EXTRAPOLATE
Feedforward networks are the simplest neural networks and building blocks of more complex architectures such as GNNs, so we first study how they extrapolate when trained by GD. Throughout the paper, we assume ReLU activation. Section 3.3 contains preliminary results for other activations.
3.1 LINEAR EXTRAPOLATION BEHAVIOR OF RELU MLPS
By architecture, ReLU networks learn piecewise linear functions, but what do these regions precisely look like outside the support of the training data? Figure 1 illustrates examples of how ReLU MLPs extrapolate when trained by GD on various nonlinear functions. These examples suggest that outside the training support, the predictions quickly become linear along directions from the origin. We systematically verify this pattern by linear regression on MLPs’ predictions: the coefficient of determination (R2) is always greater than 0.99 (Appendix C.2). That is, ReLU MLPs “linearize” almost immediately outside the training data range.
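The R² check described above can be scripted directly; in the sketch below (illustrative only), `predict` is a stand-in for a trained MLP, and the toy example uses a function that is already piecewise linear along the chosen direction.

```python
import numpy as np

def linearity_r2(predict, v, t_min=2.0, t_max=10.0, num=200):
    """Fit predict(t*v) ~ a*t + b for t beyond the training range and return R^2."""
    ts = np.linspace(t_min, t_max, num)
    ys = np.array([predict(t * v) for t in ts])      # model predictions along direction v
    a, b = np.polyfit(ts, ys, 1)                     # least-squares line
    ss_res = np.sum((ys - (a * ts + b)) ** 2)
    ss_tot = np.sum((ys - ys.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy stand-in for a trained ReLU MLP; a real check would pass the trained model.
v = np.array([1.0, 1.0]) / np.sqrt(2)
toy_predict = lambda x: np.maximum(x, 0).sum()       # linear along v for t > 0
print(linearity_r2(toy_predict, v))                  # close to 1.0
```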
We formalize this observation using the implicit biases of neural networks trained by GD via the neural tangent kernel (NTK): optimization trajectories of over-parameterized networks trained by GD are equivalent to those of kernel regression with a specific neural tangent kernel, under a set of assumptions called the “NTK regime” (Jacot et al., 2018). We provide an informal definition here; for further details, we refer the readers to Jacot et al. (2018) and Appendix A.
Definition 2. (Informal) A neural network trained in the NTK regime is infinitely wide, randomly initialized with certain scaling, and trained by GD with infinitesimal steps.
Prior works analyze optimization and in-distribution generalization of over-parameterized neural networks via NTK (Allen-Zhu et al., 2019a;b; Arora et al., 2019a;b; Cao & Gu, 2019; Du et al., 2019c;a; Li & Liang, 2018; Nitanda & Suzuki, 2021). We instead analyze extrapolation.
Theorem 1 formalizes our observation from Figure 1: outside the training data range, along any direction tv from the origin, the prediction of a two-layer ReLU MLP quickly converges to a linear function with rate O(1/t). The linear coefficients β_v and the constant terms in the convergence rate depend on the training data and direction v. The proof is in Appendix B.1.
Theorem 1. (Linear extrapolation). Suppose we train a two-layer ReLU MLP f : R^d → R with squared loss in the NTK regime. For any direction v ∈ R^d, let x_0 = tv. As t → ∞, f(x_0 + hv) − f(x_0) → β_v · h for any h > 0, where β_v is a constant linear coefficient. Moreover, given ε > 0, for t = O(1/ε), we have | (f(x_0 + hv) − f(x_0))/h − β_v | < ε.
ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019), hence their predictions eventually become linear. In contrast, Theorem 1 is a more fine-grained analysis of how MLPs extrapolate and provides a convergence rate. While Theorem 1 assumes two-layer networks in the NTK regime, experiments confirm that the linear extrapolation behavior happens across networks with different depths, widths, learning rates, and batch sizes (Appendix C.1 and C.2). Our proof technique potentially also extends to deeper networks.
Theorem 1 implies which target functions a ReLU MLP may be able to match outside the training data: only functions that are almost linear along the directions away from the origin. Indeed, Figure 4a shows ReLU MLPs do not extrapolate target functions such as x^⊤ A x (quadratic), ∑_{i=1}^d cos(2π · x^(i)) (cos), and ∑_{i=1}^d √(x^(i)) (sqrt), where x^(i) is the i-th dimension of x. With suitable hyperparameters, MLPs extrapolate the L1 norm correctly, which satisfies the directional linearity condition.
Figure 4a provides one more positive result: MLPs extrapolate linear target functions well, across many different hyperparameters. While learning linear functions may seem very limited at first, in Section 4 this insight will help explain extrapolation properties of GNNs in non-linear practical tasks. Before that, we first theoretically analyze when MLPs extrapolate well.
3.2 WHEN RELU MLPS PROVABLY EXTRAPOLATE WELL
Figure 4a shows that MLPs can extrapolate well when the target function is linear. However, this is not always true. In this section, we show that successful extrapolation depends on the geometry of training data. Intuitively, the training distribution must be “diverse” enough for correct extrapolation.
We provide two conditions that relate the geometry of the training data to extrapolation. Lemma 1 states that over-parameterized MLPs can learn a linear target function with only 2d examples.
Lemma 1. Let g(x) = β^⊤ x be the target function for β ∈ R^d. Suppose {x_i}_{i=1}^n contains an orthogonal basis {x̂_i}_{i=1}^d and {−x̂_i}_{i=1}^d. If we train a two-layer ReLU MLP f on {(x_i, y_i)}_{i=1}^n with squared loss in the NTK regime, then f(x) = β^⊤ x for all x ∈ R^d.
Lemma 1 is mainly of theoretical interest, as the 2d examples need to be carefully chosen. Theorem 2 builds on Lemma 1 and identifies a more practical condition for successful extrapolation: if the support of the training distribution covers all directions (e.g., a hypercube that covers the origin), the MLP converges to a linear target function with sufficient training data.
Theorem 2. (Conditions for extrapolation). Let g(x) = β^⊤ x be the target function for β ∈ R^d. Suppose {x_i}_{i=1}^n is sampled from a distribution whose support D contains a connected subset S, where for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S. If we train a two-layer ReLU MLP f : R^d → R on {(x_i, y_i)}_{i=1}^n with squared loss in the NTK regime, then f(x) →p β^⊤ x as n → ∞.
Experiments: geometry of training data affects extrapolation. The condition in Theorem 2 formalizes the intuition that the training distribution must be “diverse” for successful extrapolation, e.g., D includes all directions. Empirically, the extrapolation error is indeed small when the condition of Theorem 2 is satisfied (“all” in Figure 4b). In contrast, the extrapolation error is much larger when the training examples are restricted to only some directions (Figure 4b and Figure 3).
Relating to previous works, Theorem 2 suggests why spurious correlations may hurt extrapolation, complementing the causality arguments (Arjovsky et al., 2019; Peters et al., 2016; Rojas-Carulla et al., 2018). When the training data has spurious correlations, some combinations of features are missing; e.g., camels might only appear in deserts in an image collection. Therefore, the condition for Theorem 2 no longer holds, and the model may extrapolate incorrectly. Theorem 2 is also analogous to an identifiability condition for linear models, but stricter. We can uniquely identify a linear function if the training data has full (feature) rank. MLPs are more expressive, so identifying the linear target function requires additional constraints.
To summarize, we analyze how ReLU MLPs extrapolate and provide two insights: (1) MLPs cannot extrapolate most nonlinear tasks due to their linear extrapolation (Theorem 1); and (2) MLPs extrapolate well when the target function is linear, if the training distribution is “diverse” (Theorem 2). In the next section, these results help us understand how more complex networks extrapolate.
3.3 MLPS WITH OTHER ACTIVATION FUNCTIONS
Before moving on to GNNs, we complete the picture of MLPs with experiments on other activation functions: tanh σ(x) = tanh(x), cosine σ(x) = cos(x) (Lapedes & Farber, 1987; McCaughan, 1997; Sopena & Alquezar, 1994), and quadratic σ(x) = x2 (Du & Lee, 2018; Livni et al., 2014). Details are in Appendix C.4. MLPs extrapolate well when the activation and target function are similar; e.g., tanh activation extrapolates well when learning tanh, but not other functions (Figure 5). Moreover, each activation function has different limitations. To extrapolate the tanh function with tanh activation, the training data range has to be sufficiently wide. When learning a quadratic function with quadratic activation, only two-layer networks extrapolate well as more layers lead to higher-order polynomials. Cosine activations are hard to optimize for high-dimensional data, so we only consider one/two dimensional cosine target functions.
4 HOW GRAPH NEURAL NETWORKS EXTRAPOLATE
Above, we saw that extrapolation in nonlinear tasks is hard for MLPs. Despite this limitation, GNNs have been shown to extrapolate well in some nonlinear algorithmic tasks, such as intuitive physics (Battaglia et al., 2016; Janner et al., 2019), graph algorithms (Battaglia et al., 2018; Velickovic et al., 2020), and symbolic mathematics (Lample & Charton, 2020). To address this discrepancy, we build on our MLP results and study how GNNs trained by GD extrapolate.
4.1 HYPOTHESIS: LINEAR ALGORITHMIC ALIGNMENT HELPS EXTRAPOLATION
We start with an example: training GNNs to solve the shortest path problem. For this task, prior works observe that a modified GNN architecture with min-aggregation can generalize to graphs larger than those in the training set (Battaglia et al., 2018; Velickovic et al., 2020):
h_u^(k) = min_{v ∈ N(u)} MLP^(k)( h_u^(k−1), h_v^(k−1), w_(v,u) ).   (2)
We first provide an intuitive explanation (Figure 2a). Shortest path can be solved by the Bellman-Ford (BF) algorithm (Bellman, 1958) with the following update:
d[k][u] = min_{v ∈ N(u)} d[k−1][v] + w(v, u),   (3)
where w(v, u) is the weight of edge (v, u), and d[k][u] is the shortest distance to node u within k steps. The two equations can be easily aligned: GNNs simulate the BF algorithm if its MLP modules learn a linear function d[k−1][v]+w(v, u). Since MLPs can extrapolate linear tasks, this “alignment” may explain why min-aggregation GNNs can extrapolate well in this task.
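As a concrete reference point (illustrative code, not from the paper), the K-step Bellman-Ford update of Eqn. 3 is only a few lines; keeping each node's previous value in the update corresponds to including u in its own neighborhood with zero edge weight.

```python
import math

def bellman_ford(n, weighted_edges, source, K):
    """K-step Bellman-Ford: d[k][u] = min over incoming edges (v, u) of d[k-1][v] + w."""
    d = [math.inf] * n
    d[source] = 0.0
    for _ in range(K):
        new_d = list(d)                              # keep previous value of d[u]
        for v, u, w in weighted_edges:               # directed edge (v, u) with weight w
            new_d[u] = min(new_d[u], d[v] + w)
        d = new_d
    return d

edges = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, 5.0)]
print(bellman_ford(3, edges, source=0, K=2))         # [0.0, 2.0, 3.5]
```

A min-aggregation GNN (Eqn. 2) mirrors this loop step for step, with its MLP module only needing to realize the linear map d[k−1][v] + w(v, u).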
For comparison, we can reason why we would not expect GNNs with the more commonly used sum-aggregation (Eqn. 1) to extrapolate well in this task. With sum-aggregation, the MLP modules need to learn a nonlinear function to simulate the BF algorithm, but Theorem 1 suggests that they will not extrapolate most nonlinear functions outside the training support.
We can generalize the above intuition to other algorithmic tasks. Many tasks where GNNs extrapolate well can be solved by dynamic programming (DP) (Bellman, 1966), an algorithmic paradigm with a recursive structure similar to GNNs’ (Eqn. 1) (Xu et al., 2020).
Definition 3. Dynamic programming (DP) is a recursive procedure with updates
Answer[k][s] = DP-Update({Answer[k − 1][s′]} , s′ = 1...n), (4)
where Answer[k][s] is the solution to a sub-problem indexed by iteration k and state s, and DP-Update is a task-specific update function that solves the sub-problem based on the previous iteration.
From a broader standpoint, we hypothesize that: if we encode appropriate non-linearities into the model architecture and input representations so that the MLP modules only need to learn nearly linear steps, then the resulting neural network can extrapolate well.
Hypothesis 1. (Linear algorithmic alignment). Let f : X → R be the underlying function and N a neural network with m MLP modules. Suppose there exist m linear functions {g_i}_{i=1}^m so that by replacing N’s MLP modules with the g_i, N simulates f. Given ε > 0, there exists {(x_i, f(x_i))}_{i=1}^n ⊂ D ⊊ X so that N trained on {(x_i, f(x_i))}_{i=1}^n by GD with squared loss learns f̂ with ‖f̂ − f‖ < ε.
Our hypothesis builds on the algorithmic alignment framework of (Xu et al., 2020), which states that a neural network interpolates well if the modules are “aligned” to easy-to-learn (possibly nonlinear) functions. Successful extrapolation is harder: the modules need to align with linear functions.
Applications of linear algorithmic alignment. In general, linear algorithmic alignment is not restricted to GNNs and applies broadly to neural networks. To satisfy the condition, we can encode appropriate nonlinear operations in the architecture or input representation (Figure 2). Learning DP algorithms with GNNs is one example of encoding non-linearity in the architecture (Battaglia et al., 2018; Corso et al., 2020). Another example is to encode log-and-exp transforms in the architecture to help extrapolate multiplication in arithmetic tasks (Trask et al., 2018; Madsen & Johansen, 2020). Neural symbolic programs take a step further and encode a library of symbolic operations to help extrapolation (Johnson et al., 2017; Mao et al., 2019; Yi et al., 2018).
For some tasks, it may be easier to change the input representation (Figure 2b). Sometimes, we can decompose the target function f as f = g ◦ h into a feature embedding h and a “simpler” target function g that our model can extrapolate well. We can obtain h via specialized features or feature transforms using domain knowledge (Lample & Charton, 2020; Webb et al., 2020), or via representation learning (e.g., BERT) with unlabeled out-of-distribution data in X \ D (Chen et al., 2020; Devlin et al., 2019; Hu et al., 2020; Mikolov et al., 2013b; Peters et al., 2018). This brings a new perspective of how representations help extrapolation in various application areas. For example, in natural language processing, pretrained representations (Mikolov et al., 2013a; Wu & Dredze, 2019) and feature transformation using domain knowledge (Yuan et al., 2020; Zhang et al., 2019) help models generalize across languages, a special type of extrapolation. In quantitative finance, identifying the right “factors” or features is crucial for deep learning models as the financial markets may frequently be in extrapolation regimes (Banz, 1981; Fama & French, 1993; Ross, 1976).
Linear algorithmic alignment explains successful extrapolation in the literature and suggests that extrapolation is harder in general: encoding appropriate non-linearity often requires domain expertise or model search. Next, we provide theoretical and empirical support for our hypothesis.
4.2 THEORETICAL AND EMPIRICAL SUPPORT
We validate our hypothesis on three DP tasks: max degree, shortest path, and n-body problem, and prove the hypothesis for max degree. We highlight the role of graph structures in extrapolation.
Theoretical analysis. We start with a simple yet fundamental task: learning the max degree of a graph, a special case of DP with one iteration. As a corollary of Theorem 1, the commonly used sum-based GNN (Eqn. 1) cannot extrapolate well (proof in Appendix B.4).
Corollary 1. GNNs with sum-aggregation and sum-readout do not extrapolate well in Max Degree.
To achieve linear algorithmic alignment, we can encode the only non-linearity, the max function, in the readout. Theorem 3 confirms that a GNN with max-readout can extrapolate well in this task.
Theorem 3. (Extrapolation with GNNs). Assume all nodes have the same feature. Let g and g′ be the max/min degree function, respectively. Let {(G_i, g(G_i))}_{i=1}^n be the training set. If {(g(G_i), g′(G_i), g(G_i) · N_i^max, g′(G_i) · N_i^min)}_{i=1}^n spans R^4, where N_i^max and N_i^min are the number of nodes that have max/min degree in G_i, then one-layer max-readout GNNs trained on {(G_i, g(G_i))}_{i=1}^n with squared loss in the NTK regime learn g.
Theorem 3 does not follow immediately from Theorem 2, because MLP modules in GNNs only receive indirect supervision. We analyze the Graph NTK (Du et al., 2019b) to prove Theorem 3 in Appendix B.5. While Theorem 3 assumes identical node features, we empirically observe similar results for both identical and non-identical features (Figure 16 in Appendix).
Interpretation of conditions. The condition in Theorem 3 is analogous to that in Theorem 2. Both theorems require diverse training data, measured by graph structure in Theorem 3 or directions in Theorem 2. In Theorem 3, the condition is violated if all training graphs have the same max or min node degrees, e.g., when training data are from one of the following families: path, C-regular graphs (regular graphs with degree C), cycle, and ladder.
Experiments: architectures that help extrapolation. We validate our theoretical analysis with two DP tasks: max degree and shortest path (details in Appendix C.5 and C.6). While previous works only test on graphs with different sizes (Battaglia et al., 2018; Velickovic et al., 2020), we also test on graphs with unseen structure, edge weights and node features. The results support our theory. For max degree, GNNs with max-readout are better than GNNs with sum-readout (Figure 6a), confirming Corollary 1 and Theorem 3. For shortest path, GNNs with min-readout and min-aggregation are better than GNNs with sum-readout (Figure 6a).
Experiments confirm the importance of training graphs structure (Figure 7). Interestingly, the two tasks favor different graph structure. For max degree, as Theorem 3 predicts, GNNs extrapolate well when trained on trees, complete graphs, expanders, and general graphs, and extrapolation errors are
higher when trained on 4-regular, cycles, or ladder graphs. For shortest path, extrapolation errors follow a U-shaped curve as we change the sparsity of training graphs (Figure 7b and Figure 18 in Appendix). Intuitively, models trained on sparse or dense graphs likely learn degenerative solutions.
Experiments: representations that help extrapolation. Finally, we show a good input representation helps extrapolation. We study the n-body problem (Battaglia et al., 2016; Watters et al., 2017) (Appendix C.7), that is, predicting the time evolution of n objects in a gravitational system. Following previous work, the input is a complete graph where the nodes are the objects (Battaglia et al., 2016). The node feature for u is the concatenation of the object’s mass m_u, position x_u^(t), and velocity v_u^(t) at time t. The edge features are set to zero. We train GNNs to predict the velocity of each object u at time t + 1. The true velocity f(G; u) for object u is approximately
f(G; u) ≈ v_u^t + a_u^t · dt,   a_u^t = C · ∑_{v ≠ u} ( m_v / ‖x_u^t − x_v^t‖_2^3 ) · ( x_v^t − x_u^t ),   (5)
where C is a constant. To learn f, the MLP modules need to learn a nonlinear function. Therefore, GNNs do not extrapolate well to unseen masses or distances (“original features” in Figure 6b). We instead use an improved representation h(G) to encode non-linearity. At time t, we transform the edge features of (u, v) from zero to w_(u,v)^(t) = m_v · ( x_v^(t) − x_u^(t) ) / ‖x_u^(t) − x_v^(t)‖_2^3. The new edge features do not add information, but the MLP modules now only need to learn linear functions, which helps extrapolation (“improved features” in Figure 6b).
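The feature transform can be sketched as below (illustrative only; the masses, positions, and dimensionality are toy values, not the experimental setup).

```python
import numpy as np

def improved_edge_features(masses, positions):
    """Edge feature w_(u,v) = m_v * (x_v - x_u) / ||x_u - x_v||^3 for all u != v."""
    n = len(masses)
    feats = {}
    for u in range(n):
        for v in range(n):
            if u == v:
                continue
            diff = positions[v] - positions[u]
            feats[(u, v)] = masses[v] * diff / np.linalg.norm(diff) ** 3
    return feats

masses = np.array([1.0, 2.0, 0.5])
positions = np.random.randn(3, 3)                 # 3 objects in 3-D space
feats = improved_edge_features(masses, positions)
# With these features, the acceleration a_u = C * sum_{v != u} w_(u,v) of Eqn. 5
# is a linear function of the edge features, which the MLP modules can extrapolate.
print(sum(feats[(0, v)] for v in [1, 2]))
```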
5 CONNECTIONS TO OTHER OUT-OF-DISTRIBUTION SETTINGS
We discuss several related settings. Intuitively, from the viewpoint of our results above, methods in related settings may improve extrapolation by 1) learning useful non-linearities beyond the training data range and 2) mapping relevant test data to the training data range.
Domain adaptation studies generalization to a specific target domain (Ben-David et al., 2010; Blitzer et al., 2008; Mansour et al., 2009). Typical strategies adjust the training process: for instance, use unlabeled samples from the target domain to align the target and source distributions (Ganin et al., 2016; Zhao et al., 2018). Using target domain data during training may induce useful non-linearities and may mitigate extrapolation by matching the target and source distributions, though the correctness of the learned mapping depends on the label distribution (Zhao et al., 2019).
Self-supervised learning on a large amount of unlabeled data can learn useful non-linearities beyond the labeled training data range (Chen et al., 2020; Devlin et al., 2019; He et al., 2020; Peters et al., 2018). Hence, our results suggest an explanation why pre-trained representations such as BERT improve out-of-distribution robustness (Hendrycks et al., 2020). In addition, self-supervised learning could map semantically similar data to similar representations, so some out-of-domain examples might fall inside the training distribution after the mapping.
Invariant models aim to learn features that respect specific invariances across multiple training distributions (Arjovsky et al., 2019; Rojas-Carulla et al., 2018; Zhou et al., 2021). If the model indeed learns these invariances, which can happen in the linear case and when there are confounders or anti-causal variables (Ahuja et al., 2021; Rosenfeld et al., 2021), this may essentially increase the training data range, since variations in the invariant features may be ignored by the model.
Distributional robustness considers small adversarial perturbations of the data distribution, and ensures that the model performs well under these (Goh & Sim, 2010; Sagawa et al., 2020; Sinha et al., 2018; Staib & Jegelka, 2019). We instead look at more global perturbations. Still, one would expect that modifications that help extrapolation in general also improve robustness to local perturbations.
6 CONCLUSION
This paper is an initial step towards formally understanding how neural networks trained by gradient descent extrapolate. We identify conditions under which MLPs and GNNs extrapolate as desired. We also suggest an explanation how GNNs have been able to extrapolate well in complex algorithmic tasks: encoding appropriate non-linearity in architecture and features can help extrapolation. Our results and hypothesis agree with empirical results, in this paper and in the literature.
ACKNOWLEDGMENTS
We thank Ruosong Wang, Tianle Cai, Han Zhao, Yuichi Yoshida, Takuya Konishi, Toru Lin, Weihua Hu, Matt J. Staib, Yichao Zhou, Denny Wu, Tianyi Yang, and Dingli (Leo) Yu for insightful discussions. This research was supported by NSF CAREER award 1553284, NSF III 1900933, and a Chevron-MIT Energy Fellowship. This research was also supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. MZ was supported by ODNI, IARPA, via the BETTER Program contract 2019-19051600005. The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Department of Defense, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
A THEORETICAL BACKGROUND
In this section, we introduce theoretical background on neural tangent kernel (NTK), which draws an equivalence between the training dynamics of infinitely-wide (or ultra-wide) neural networks and that of kernel regression with respect to the neural tangent kernel.
Consider a general neural network f(θ, x): X → R, where θ ∈ R^m denotes the network parameters and x ∈ X is the input. Suppose we train the neural network by minimizing the squared loss over the training data, ℓ(θ) = (1/2) ∑_{i=1}^n (f(θ, x_i) − y_i)^2, by gradient descent with infinitesimally small learning rate, i.e., dθ(t)/dt = −∇ℓ(θ(t)). Let u(t) = (f(θ(t), x_i))_{i=1}^n be the network outputs. Then u(t) follows the dynamics

du(t)/dt = −H(t) (u(t) − y),   (6)

where H(t) is an n × n matrix whose (i, j)-th entry is

H(t)_{ij} = ⟨ ∂f(θ(t), x_i)/∂θ , ∂f(θ(t), x_j)/∂θ ⟩.   (7)
A line of work shows that for sufficiently wide networks, H(t) stays almost constant during training, i.e., H(t) = H(0) in the limit (Arora et al., 2019a;b; Allen-Zhu et al., 2019a; Du et al., 2019c;a; Li & Liang, 2018; Jacot et al., 2018). Suppose the network parameters are randomly initialized with a certain scaling; as the network width goes to infinity, H(0) converges to a fixed matrix, the neural tangent kernel (NTK) (Jacot et al., 2018):
NTK(x, x′) = E_{θ∼W} ⟨ ∂f(θ, x)/∂θ , ∂f(θ, x′)/∂θ ⟩,   (8)

where W is Gaussian. Therefore, the learning dynamics of sufficiently wide neural networks in this regime is equivalent to that of kernel gradient descent with respect to the NTK. This implies that the function learned by a neural network at convergence on any specific training set, denoted by f_NTK(x), can be precisely characterized and is equivalent to the following kernel regression solution

f_NTK(x) = ( NTK(x, x_1), ..., NTK(x, x_n) ) · NTK_train^{−1} Y,   (9)

where NTK_train is the n × n kernel for the training data, NTK(x, x_i) is the kernel value between test data x and training data x_i, and Y is the training labels.
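As an illustration of Eqns. 6-9, the sketch below computes the empirical (finite-width) NTK Gram matrix of a small PyTorch MLP from parameter gradients and then forms the kernel-regression prediction of Eqn. 9. This is only a finite-width approximation: the exact equivalence to network training holds in the infinite-width NTK regime, and the architecture and data here are illustrative assumptions.

```python
# Minimal sketch (PyTorch): empirical NTK Gram matrix (Eqn. 7 at t = 0) and the Eqn. 9 prediction.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1))
params = [p for p in net.parameters() if p.requires_grad]

def grad_vector(x):
    """Flattened gradient of the scalar output f(theta, x) with respect to all parameters."""
    out = net(x.unsqueeze(0)).squeeze()
    grads = torch.autograd.grad(out, params)
    return torch.cat([g.reshape(-1) for g in grads])

X_train = torch.randn(20, 2)
y_train = X_train.sum(dim=1, keepdim=True)             # a linear target, as in Lemma 1 / Theorem 2

J = torch.stack([grad_vector(x) for x in X_train])     # n x m Jacobian
H = J @ J.T                                            # empirical NTK Gram matrix

x_test = torch.tensor([3.0, -1.5])
k_test = J @ grad_vector(x_test)                       # (NTK(x, x_1), ..., NTK(x, x_n))
f_ntk = k_test @ torch.linalg.solve(H + 1e-6 * torch.eye(len(H)), y_train)  # Eqn. 9 via a solve
print(float(f_ntk))
```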
We can in fact exactly calculate the neural tangent kernel matrix for certain architectures and activation functions. The exact formula of NTK with ReLU activation has been derived for feedforward neural networks (Jacot et al., 2018), convolutional neural networks (Arora et al., 2019b), and Graph Neural Networks (Du et al., 2019b).
Our theory builds upon this equivalence of network learning and kernel regression to more precisely characterize the function learned by a sufficiently-wide neural network given any specific training set. In particular, the difference between the learned function and true function over the domain of X determines the extrapolation error.
However, in general it is non-trivial to compute or analyze the functional form of what a neural network learns using Eqn. 9, because the kernel regression solution using neural tangent kernel only gives point-wise evaluation. Thus, we instead analyze the function learned by a network in the NTK’s induced feature space, because representations in the feature space would give a functional form.
Lemma 2 makes this connection more precise: the solution to the kernel regression using the neural tangent kernel, which also equals over-parameterized network learning, is equivalent to a min-norm solution among functions in the NTK's induced feature space that fit all training data. Here the min-norm refers to the RKHS norm.

Lemma 2. Let φ(x) be a feature map induced by a neural tangent kernel, for any x ∈ R^d. The solution to the kernel regression in Eqn. 9 is equivalent to f_NTK(x) = φ(x)^⊤ β_NTK, where β_NTK is

min_β ‖β‖_2   s.t.   φ(x_i)^⊤ β = y_i, for i = 1, ..., n.
We prove Lemma 2 in Appendix B.6. To analyze the learned functions as the min-norm solution in feature space, we also need the explicit formula of an induced feature map of the corresponding neural tangent kernel. The following lemma gives an NTK feature space for two-layer MLPs with ReLU activation. It follows easily from the kernel formula described in Jacot et al. (2018); Arora et al. (2019b); Bietti & Mairal (2019).

Lemma 3. An infinite-dimensional feature map φ(x) induced by the neural tangent kernel of a two-layer multi-layer perceptron with ReLU activation function is

φ(x) = c ( x · I( w^{(k)⊤} x ≥ 0 ), w^{(k)⊤} x · I( w^{(k)⊤} x ≥ 0 ), ... ),   (10)

where w^{(k)} ∼ N(0, I), with k going to infinity. c is a constant, and I is the indicator function.
We prove Lemma 3 in Appendix B.7. The feature maps for other architectures, e.g., Graph Neural Networks (GNNs) can be derived similarly. We analyze the Graph Neural Tangent Kernel (GNTK) for a simple GNN architecture in Theorem 3.
We then use Lemma 2 and 3 to characterize the properties of functions learned by an overparameterized neural network. We precisely characterize the neural networks’ learned functions in the NTK regime via solving the constrained optimization problem corresponding to the min-norm function in NTK feature space with the constraint of fitting the training data.
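The sketch below approximates the infinite-dimensional feature map of Lemma 3 with K sampled directions and solves the min-norm problem of Lemma 2 with a pseudoinverse. The sample size, data, and target are illustrative assumptions we introduce here; the constant c is dropped since it does not change the interpolating solution.

```python
# Minimal sketch (NumPy): finite-sample approximation of Lemma 3's feature map and Lemma 2's min-norm fit.
import numpy as np

rng = np.random.default_rng(0)
d, K = 2, 2000
W = rng.standard_normal((K, d))                        # w^(1), ..., w^(K) ~ N(0, I)

def phi(x):
    act = (W @ x >= 0).astype(float)                   # I(w^(k)T x >= 0)
    return np.concatenate([(act[:, None] * x).ravel(), act * (W @ x)]) / np.sqrt(K)

X_train = rng.uniform(-1, 1, size=(50, d))             # support covering all directions
beta_true = np.array([2.0, -1.0])
y_train = X_train @ beta_true                          # linear target g(x) = beta^T x

Phi = np.stack([phi(x) for x in X_train])              # n x (K*(d+1)) feature matrix
beta_min_norm = np.linalg.pinv(Phi) @ y_train          # min ||beta||_2 s.t. Phi beta = y

x_far = np.array([10.0, 7.0])                          # well outside the training range
print(phi(x_far) @ beta_min_norm, beta_true @ x_far)   # approximately equal if extrapolation succeeds
```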
However, there still remain many technical challenges. For example, provable extrapolation (exact or asymptotic) is often not achieved under most training data distributions. Understanding the desirable conditions requires significant insight into the geometric properties of the training data distribution and how they interact with the solution learned by the neural network. Our refined analysis shows that in R^d we need to consider the directions of the training data, while for graphs we additionally need to consider the graph structure of the training data. We refer readers to the detailed proofs for the intuition behind these data conditions. Moreover, since the NTK corresponds to infinitely wide neural networks, the feature space is infinite-dimensional; the analysis of infinite-dimensional spaces poses non-trivial technical challenges as well.
Since different theorems have their respective challenges and insights/techniques, we refer the interested readers to the respective proofs for details. In Lemma 1 (proof in Appendix B.2), Theorem 2 (proof in Appendix B.3), and Theorem 1 (proof in Appendix B.1) we analyze over-parameterized MLPs. The proof of Corollary 1 is in Appendix B.4. In Theorem 3 we analyze Graph Neural Networks (proof in Appendix B.5).
B PROOFS
B.1 PROOF OF THEOREM 1
To show neural network outputs f(x) converge to a linear function along all directions v, we will analyze the function learned by a neural network on the training set {(xi, yi)}ni=1, by studying the functional representation in the network’s neural tangent kernel RKHS space.
Recall from Section A that in the NTK regime, i.e., networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of the neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel.
For any x ∈ R^d, the network output is given by

f(x) = ( ⟨φ(x), φ(x_1)⟩, ..., ⟨φ(x), φ(x_n)⟩ ) · NTK_train^{−1} Y,

where NTK_train is the n × n kernel for the training data, ⟨φ(x), φ(x_i)⟩ is the kernel value between test data x and training data x_i, and Y is the training labels. By Lemma 2, the kernel regression solution is also equivalent to the min-norm solution in the NTK RKHS space that fits all training data

f(x) = φ(x)^⊤ β_NTK,   (11)

where the representation coefficient β_NTK is

min_β ‖β‖_2   s.t.   φ(x_i)^⊤ β = y_i, for i = 1, ..., n.

The feature map φ(x) for a two-layer MLP with ReLU activation is given by Lemma 3

φ(x) = c′ ( x · I( w^{(k)⊤} x ≥ 0 ), w^{(k)⊤} x · I( w^{(k)⊤} x ≥ 0 ), ... ),   (12)

where w^{(k)} ∼ N(0, I), with k going to infinity. c′ is a constant, and I is the indicator function. Without loss of generality, we assume the bias term to be 1. For simplicity of notation, we denote each data point x plus the bias term by x̂ = [x | 1] (Bietti & Mairal, 2019), and assume the constant term is 1. Given any direction v on the unit sphere, the network outputs for out-of-distribution data x_0 = tv and x = x_0 + hv = (1 + λ)x_0, where we introduce the notation x and λ for convenience, are given by Eqn. 11 and Eqn. 12:
f(x̂_0) = β_NTK^⊤ ( x̂_0 · I( w^{(k)⊤} x̂_0 ≥ 0 ), w^{(k)⊤} x̂_0 · I( w^{(k)⊤} x̂_0 ≥ 0 ), ... ),
f(x̂) = β_NTK^⊤ ( x̂ · I( w^{(k)⊤} x̂ ≥ 0 ), w^{(k)⊤} x̂ · I( w^{(k)⊤} x̂ ≥ 0 ), ... ),

where x̂_0 = [x_0 | 1] and x̂ = [(1 + λ)x_0 | 1]. It follows that

f(x̂) − f(x̂_0) = β_NTK^⊤ ( x̂ · I( w^{(k)⊤} x̂ ≥ 0 ) − x̂_0 · I( w^{(k)⊤} x̂_0 ≥ 0 ),   (13)
  w^{(k)⊤} x̂ · I( w^{(k)⊤} x̂ ≥ 0 ) − w^{(k)⊤} x̂_0 · I( w^{(k)⊤} x̂_0 ≥ 0 ), ... )   (14)

By re-arranging the terms, we get the following equivalent form of the entries:

x̂ · I( w^⊤ x̂ ≥ 0 ) − x̂_0 · I( w^⊤ x̂_0 ≥ 0 )   (15)
= x̂ · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) + I( w^⊤ x̂_0 ≥ 0 ) ) − x̂_0 · I( w^⊤ x̂_0 ≥ 0 )   (16)
= x̂ · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) + ( x̂ − x̂_0 ) · I( w^⊤ x̂_0 ≥ 0 )   (17)
= [x | 1] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) + [hv | 0] · I( w^⊤ x̂_0 ≥ 0 )   (18)

Similarly, we have

w^⊤ x̂ · I( w^⊤ x̂ ≥ 0 ) − w^⊤ x̂_0 · I( w^⊤ x̂_0 ≥ 0 )   (19)
= w^⊤ x̂ · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) + I( w^⊤ x̂_0 ≥ 0 ) ) − w^⊤ x̂_0 · I( w^⊤ x̂_0 ≥ 0 )   (20)
= w^⊤ x̂ · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) + w^⊤ ( x̂ − x̂_0 ) · I( w^⊤ x̂_0 ≥ 0 )   (21)
= w^⊤ [x | 1] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) + w^⊤ [hv | 0] · I( w^⊤ x̂_0 ≥ 0 )   (22)
Again, let us denote the part of β_NTK corresponding to each w by β_w. Moreover, let us denote the part corresponding to Eqn. 18 by β_w^1 and the part corresponding to Eqn. 22 by β_w^2. Then we have

( f(x̂) − f(x̂_0) ) / h   (23)
= ∫ β_w^{1⊤} [x/h | 1/h] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) dP(w)   (24)
+ ∫ β_w^{1⊤} [v | 0] · I( w^⊤ x̂_0 ≥ 0 ) dP(w)   (25)
+ ∫ β_w^2 · w^⊤ [x/h | 1/h] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) dP(w)   (26)
+ ∫ β_w^2 · w^⊤ [v | 0] · I( w^⊤ x̂_0 ≥ 0 ) dP(w)   (27)
Note that all β_w are finite constants that depend on the training data. Next, we show that as t → ∞, each of the terms above converges in O(1/ε) to some constant coefficient β_v that depends on the training data and the direction v. Let us first consider Eqn. 25. We have

∫ I( w^⊤ x̂_0 ≥ 0 ) dP(w) = ∫ I( w^⊤ [x_0 | 1] ≥ 0 ) dP(w)   (28)
= ∫ I( w^⊤ [x_0/t | 1/t] ≥ 0 ) dP(w)   (29)
→ ∫ I( w^⊤ [v | 0] ≥ 0 ) dP(w)   as t → ∞   (30)

Because the β_w^1 are finite constants, it follows that

∫ β_w^{1⊤} [v | 0] · I( w^⊤ x̂_0 ≥ 0 ) dP(w) → ∫ β_w^{1⊤} [v | 0] · I( w^⊤ [v | 0] ≥ 0 ) dP(w),   (31)

where the right hand side is a constant that depends on the training data and the direction v. Next, we show the convergence rate for Eqn. 31. Given error ε > 0, because the β_w^{1⊤} [v | 0] are finite constants, we need to bound the following by C · ε for some constant C:

| ∫ I( w^⊤ x̂_0 ≥ 0 ) − I( w^⊤ [v | 0] ≥ 0 ) dP(w) |   (32)
= | ∫ I( w^⊤ [x_0 | 1] ≥ 0 ) − I( w^⊤ [x_0 | 0] ≥ 0 ) dP(w) |   (33)

Observe that the two terms in Eqn. 33 represent the volumes of half-balls that are orthogonal to the vectors [x_0 | 1] and [x_0 | 0]. Hence, Eqn. 33 is the volume of the non-overlapping part of the two half-balls, which is created by rotating by an angle θ along the last coordinate. By symmetry, Eqn. 33 is linear in θ. Moreover, the angle satisfies θ = arctan(C/t) for some constant C. Hence, it follows that

| ∫ I( w^⊤ [x_0 | 1] ≥ 0 ) − I( w^⊤ [x_0 | 0] ≥ 0 ) dP(w) | = C_1 · arctan(C_2/t)   (34)
≤ C_1 · C_2 / t   (35)
= O(1/t)   (36)
In the last inequality, we used the fact that arctan x < x for x > 0. Hence, O(1/t) < ε implies t = O(1/ε) as desired. Next, we consider Eqn. 24:

∫ β_w^{1⊤} [x/h | 1/h] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) dP(w)   (37)

Let us first analyze the convergence of the following:

| ∫ I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) dP(w) |   (38)
= | ∫ I( w^⊤ [(1 + λ)x_0 | 1] ≥ 0 ) − I( w^⊤ [x_0 | 1] ≥ 0 ) dP(w) |   (39)
= | ∫ I( w^⊤ [x_0 | 1/(1 + λ)] ≥ 0 ) − I( w^⊤ [x_0 | 1] ≥ 0 ) dP(w) | → 0   (40)

The convergence to 0 follows from Eqn. 34. Now we consider the convergence rate. The angle θ is at most 1 − 1/(1 + λ) times that in Eqn. 34. Hence, the rate is

( 1 − 1/(1 + λ) ) · O(1/t) = ( λ/(1 + λ) ) · O(1/t) = ( (h/t)/(1 + h/t) ) · O(1/t) = O( h / ((h + t) t) )   (41)
Now we get back to Eqn. 24, which simplifies as follows:

∫ β_w^{1⊤} [ v + tv/h | 1/h ] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) dP(w)   (42)

We compare the rate of growth of the left-hand factor with the rate of decrease of the indicators:

( t/h ) · h / ((h + t) t) = 1/(h + t) → 0   as t → ∞   (43)
( 1/h ) · h / ((h + t) t) = 1 / ((h + t) t) → 0   as t → ∞   (44)

Hence, the indicators decrease faster, and it follows that Eqn. 24 converges to 0 with rate O(1/t). Moreover, we can bound w with standard concentration techniques. The proofs for Eqn. 26 and Eqn. 27 then follow similarly. This completes the proof.
B.2 PROOF OF LEMMA 1
Overview of proof. To prove exact extrapolation given the conditions on training data, we analyze the function learned by the neural network in a functional form. The network’s learned function can be precisely characterized by a solution in the network’s neural tangent kernel feature space which has a minimum RKHS norm among functions that can fit all training data, i.e., it corresponds to the optimum of a constrained optimization problem. We show that the global optimum of this constrained optimization problem, given the conditions on training data, is precisely the same function as the underlying true function.
Setup and preparation. Let X = {x_1, ..., x_n} and Y = {y_1, ..., y_n} denote the training set input features and their labels. Let β_g ∈ R^d denote the true parameters/weights of the underlying linear function g, i.e.,

g(x) = β_g^⊤ x   for all x ∈ R^d.
Recall from Section A that in the NTK regime, where networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of a neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel. Moreover, Lemma 2 tells us that this kernel regression solution can be expressed in functional form in the neural tangent kernel's feature space. That is, the function learned by the neural network (in the NTK regime) can be precisely characterized as

f(x) = φ(x)^⊤ β_NTK,

where the representation coefficient β_NTK is

min_β ‖β‖_2   (45)
s.t.   φ(x_i)^⊤ β = y_i, for i = 1, ..., n.   (46)

An infinite-dimensional feature map φ(x) for a two-layer ReLU network is described in Lemma 3:

φ(x) = c′ ( x · I( w^{(k)⊤} x ≥ 0 ), w^{(k)⊤} x · I( w^{(k)⊤} x ≥ 0 ), ... ),

where w^{(k)} ∼ N(0, I), with k going to infinity. c′ is a constant, and I is the indicator function. That is, there are infinitely many directions w with Gaussian density, and each direction comes with two features. Without loss of generality, we can assume the scaling constant to be 1.
Constrained optimization in NTK feature space. The representation or weight of the neural network's learned function in the neural tangent kernel feature space, β_NTK, consists of weights for each feature x · I( w^{(k)⊤} x ≥ 0 ) ∈ R^d and w^{(k)⊤} x · I( w^{(k)⊤} x ≥ 0 ) ∈ R. For simplicity of notation, we will use w to refer to a particular w, without considering the index (k), which does not matter for our purposes. For any w ∈ R^d, we denote by β̂_w = ( β̂_w^{(1)}, ..., β̂_w^{(d)} ) ∈ R^d the weight vector corresponding to x · I( w^⊤ x ≥ 0 ), and by β̂′_w ∈ R the weight for w^⊤ x · I( w^⊤ x ≥ 0 ).
Observe that for any w ∼ N(0, I) ∈ R^d, any other vector in the same direction activates the same set of x_i ∈ R^d. That is, if w^⊤ x_i ≥ 0 for some w ∈ R^d, then (k·w)^⊤ x_i ≥ 0 for any k > 0. Hence, we can reload our notation to combine the effect of the weights for all w in the same direction. This enables simpler notation and allows us to change the distribution of w in the NTK features from a Gaussian distribution to the uniform distribution on the unit sphere.
More precisely, we reload our notation by using β_w and β′_w to denote the combined effect of all weights ( β̂_{kw}^{(1)}, ..., β̂_{kw}^{(d)} ) ∈ R^d and β̂′_{kw} ∈ R over all kw with k > 0 in the same direction as w. That is, for each w ∼ Uni(unit sphere) ∈ R^d, we define β_w^{(j)} as the total effect of weights in the same direction

β_w^{(j)} = ∫ β̂_u^{(j)} I( w^⊤ u / (‖w‖ · ‖u‖) = 1 ) dP(u),   for j ∈ [d],   (47)

where u ∼ N(0, I). Note that to ensure β_w is a well-defined number, we can work with the polar representation and integrate with respect to an angle; then β_w is well-defined. For simplicity of exposition, we use the plain integral notation. Similarly, we define β′_w as

β′_w = ∫ β̂′_u I( w^⊤ u / (‖w‖ · ‖u‖) = 1 ) · ( ‖u‖ / ‖w‖ ) dP(u).   (48)

Here, Eqn. 48 has an extra term ‖u‖/‖w‖ compared to Eqn. 47 because the NTK feature that Eqn. 48 corresponds to, w^⊤ x · I( w^⊤ x ≥ 0 ), has an extra w^⊤ term, so we need to take the scaling into account. This abstraction enables us to make claims on the high-level parameters β_w and β′_w only, which we will show to be sufficient to determine the learned function.
Then we can formulate the constrained optimization problem whose solution gives a functional form of the neural network's learned function. We rewrite the min-norm solution in Eqn. 45 as

min_β ∫ ( β_w^{(1)} )^2 + ( β_w^{(2)} )^2 + ... + ( β_w^{(d)} )^2 + ( β′_w )^2 dP(w)   (49)
s.t.   ∫_{w^⊤ x_i ≥ 0} β_w^⊤ x_i + β′_w · w^⊤ x_i dP(w) = β_g^⊤ x_i   ∀i ∈ [n],   (50)
where the density of w is now uniform on the unit sphere of R^d. Observe that since w is from a uniform distribution, the probability density function P(w) is a constant. This means every x_i is activated by half of the w on the unit sphere, which implies we can write the right hand side of Eqn. 50 in the form of the left hand side, i.e., in integral form. This allows us to further simplify Eqn. 50 as

∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0   ∀i ∈ [n],   (51)

where Eqn. 51 follows from the following steps of simplification:

∫_{w^⊤ x_i ≥ 0} β_w^{(1)} x_i^{(1)} + ... + β_w^{(d)} x_i^{(d)} + β′_w · w^⊤ x_i dP(w) = β_g^{(1)} x_i^{(1)} + ... + β_g^{(d)} x_i^{(d)}   ∀i ∈ [n]
⇐⇒ ∫_{w^⊤ x_i ≥ 0} β_w^{(1)} x_i^{(1)} + ... + β_w^{(d)} x_i^{(d)} + β′_w · w^⊤ x_i dP(w)
  = ( 1 / ∫_{w^⊤ x_i ≥ 0} dP(w) ) · ∫_{w^⊤ x_i ≥ 0} dP(w) · ( β_g^{(1)} x_i^{(1)} + ... + β_g^{(d)} x_i^{(d)} )   ∀i ∈ [n]
⇐⇒ ∫_{w^⊤ x_i ≥ 0} β_w^{(1)} x_i^{(1)} + ... + β_w^{(d)} x_i^{(d)} + β′_w · w^⊤ x_i dP(w)
  = 2 · ∫_{w^⊤ x_i ≥ 0} β_g^{(1)} x_i^{(1)} + ... + β_g^{(d)} x_i^{(d)} dP(w)   ∀i ∈ [n]
⇐⇒ ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0   ∀i ∈ [n].
Claim 1. Without loss of generality, assume the scaling factor c in the NTK feature map φ(x) is 1. Then the global optimum of the constrained optimization problem Eqn. 49 subject to Eqn. 51, i.e.,

min_β ∫ ( β_w^{(1)} )^2 + ( β_w^{(2)} )^2 + ... + ( β_w^{(d)} )^2 + ( β′_w )^2 dP(w)   (52)
s.t.   ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0   ∀i ∈ [n],   (53)

satisfies β_w + β′_w · w = 2β_g for all w.
This claim implies the exact extrapolation we want to prove, i.e., f_NTK(x) = g(x). This is because, if our claim holds, then for any x ∈ R^d,

f_NTK(x) = ∫_{w^⊤ x ≥ 0} β_w^⊤ x + β′_w · w^⊤ x dP(w)
= ∫_{w^⊤ x ≥ 0} 2 · β_g^⊤ x dP(w)
= ∫_{w^⊤ x ≥ 0} dP(w) · 2 β_g^⊤ x
= (1/2) · 2 β_g^⊤ x = g(x).
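The last step only uses the fact that the half-sphere {w : w^⊤ x ≥ 0} has probability mass 1/2 under the uniform distribution. A quick Monte Carlo check of this calculation, with an arbitrary β_g and x of our own choosing:

```python
# Minimal sketch (NumPy): if beta_w + beta'_w * w = 2*beta_g for all w, then f_NTK(x) integrates
# 2*beta_g^T x over the half-sphere {w : w^T x >= 0}, whose mass is 1/2, recovering g(x) = beta_g^T x.
import numpy as np

rng = np.random.default_rng(0)
d = 3
beta_g = np.array([1.0, -2.0, 0.5])
x = np.array([4.0, 3.0, -7.0])

W = rng.standard_normal((200000, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)          # w ~ Uniform(unit sphere)
active = (W @ x >= 0)

f_ntk_mc = np.mean(active * (2.0 * (beta_g @ x)))      # Monte Carlo estimate of the integral
print(f_ntk_mc, beta_g @ x)                            # the two values agree up to sampling error
```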
Thus, it remains to prove Claim 1. To compute the optimum of the constrained optimization problem Eqn. 52, we consider the Lagrange multipliers. It is clear that the objective Eqn. 52 is convex. Moreover, the constraint Eqn. 53 is affine. Hence, by KKT, a solution that satisfies the Lagrange condition is the global optimum. We compute the Lagrangian as

L(β, λ) = ∫ ( β_w^{(1)} )^2 + ( β_w^{(2)} )^2 + ... + ( β_w^{(d)} )^2 + ( β′_w )^2 dP(w)   (54)
− ∑_{i=1}^n λ_i · ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w).   (55)

Setting the partial derivative of L(β, λ) with respect to each variable to zero gives

∂L/∂β_w^{(k)} = 2 β_w^{(k)} P(w) + ∑_{i=1}^n λ_i · x_i^{(k)} · I( w^⊤ x_i ≥ 0 ) = 0   (56)
∂L/∂β′_w = 2 β′_w P(w) + ∑_{i=1}^n λ_i · w^⊤ x_i · I( w^⊤ x_i ≥ 0 ) = 0   (57)
∂L/∂λ_i = ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0   (58)
It is clear that the solution in Claim 1 immediately satisfies Eqn. 58. Hence, it remains to show there exists a set of λ_i for i ∈ [n] that satisfies Eqn. 56 and Eqn. 57. We can simplify Eqn. 56 as

β_w^{(k)} = c · ∑_{i=1}^n λ_i · x_i^{(k)} · I( w^⊤ x_i ≥ 0 ),   (59)

where c is a constant. Similarly, we can simplify Eqn. 57 as

β′_w = c · ∑_{i=1}^n λ_i · w^⊤ x_i · I( w^⊤ x_i ≥ 0 ).   (60)

Observe that combining Eqn. 59 and Eqn. 60 implies that the constraint Eqn. 60 can be further simplified as

β′_w = w^⊤ β_w.   (61)

It remains to show that, given the condition on the training data, there exists a set of λ_i so that Eqn. 59 and Eqn. 61 are satisfied.
Global optimum via the geometry of training data. Recall that we assume our training data {(x_i, y_i)}_{i=1}^n satisfies: for any w ∈ R^d, there exist d linearly independent {x_i^w}_{i=1}^d ⊂ X, where X = {x_i}_{i=1}^n, such that w^⊤ x_i^w ≥ 0 and −x_i^w ∈ X for i = 1..d, e.g., an orthogonal basis of R^d and their opposite vectors. We will show that under this data regime, we have
(a) For any particular w, there indeed exists a set of λ_i that satisfies the constraints Eqn. 59 and Eqn. 61 for this particular w.

(b) For any w_1 and w_2 that activate exactly the same set of {x_i}, the same set of λ_i can satisfy the constraints Eqn. 59 and Eqn. 61 for both w_1 and w_2.

(c) Whenever we rotate a w_1 to a w_2 so that the set of x_i being activated changes, we can still find λ_i that satisfy the constraints for both w_1 and w_2.

Combining (a), (b) and (c) implies there exists a set of λ that satisfies the constraints for all w. Hence, it remains to show these three claims.
We first prove Claim (a). For each w, we must find a set of λ_i so that the following hold:

β_w^{(k)} = c · ∑_{i=1}^n λ_i · x_i^{(k)} · I( w^⊤ x_i ≥ 0 ),   β′_w = w^⊤ β_w,   β_w + β′_w · w = 2β_g.

Here, β_g and w are fixed, and w is a vector on the unit sphere. It is easy to see that β_w is then determined by β_g and w, and there indeed exists a solution (solving a consistent linear system). Hence we are left with a linear system of d equations

β_w^{(k)} = c · ∑_{i=1}^n λ_i · x_i^{(k)} · I( w^⊤ x_i ≥ 0 )   ∀k ∈ [d]

to solve, with the free variables being the λ_i for which w activates x_i, i.e., w^⊤ x_i ≥ 0. Because the training data {(x_i, y_i)}_{i=1}^n satisfies that for any w there exist at least d linearly independent x_i that activate w, for any w we have at least d free variables. It follows that there must exist solutions λ_i to the linear system. This proves Claim (a).
Next, we show (b): for any w_1 and w_2 that activate exactly the same set of {x_i}, the same set of λ_i can satisfy the constraints Eqn. 59 and Eqn. 61 for both w_1 and w_2. Because w_1 and w_2 are activated by the same set of x_i, this implies

β_{w_1} = c · ∑_{i=1}^n λ_i · x_i · I( w_1^⊤ x_i ≥ 0 ) = c · ∑_{i=1}^n λ_i · x_i · I( w_2^⊤ x_i ≥ 0 ) = β_{w_2}.

Since the λ_i already satisfy constraint Eqn. 59 for w_1, they also satisfy it for w_2. Thus, it remains to show that β_{w_1} + β′_{w_1} · w_1 = β_{w_2} + β′_{w_2} · w_2, assuming β_{w_1} = β_{w_2}, β′_{w_1} = w_1^⊤ β_{w_1}, and β′_{w_2} = w_2^⊤ β_{w_2}. This indeed holds because

β_{w_1} + β′_{w_1} · w_1 = β_{w_2} + β′_{w_2} · w_2
⇐⇒ β′_{w_1} · w_1^⊤ = β′_{w_2} · w_2^⊤
⇐⇒ w_1^⊤ β_{w_1} w_1^⊤ = w_2^⊤ β_{w_2} w_2^⊤
⇐⇒ w_1^⊤ w_1 β_{w_1}^⊤ = w_2^⊤ w_2 β_{w_2}^⊤
⇐⇒ 1 · β_{w_1}^⊤ = 1 · β_{w_2}^⊤
⇐⇒ β_{w_1} = β_{w_2}.

Here, we used the fact that w_1 and w_2 are vectors on the unit sphere. This proves Claim (b).
Finally, we show (c): whenever we rotate a w_1 to a w_2 so that the set of x_i being activated changes, we can still find λ_i that satisfy the constraints for both w_1 and w_2. Suppose we rotate w_1 to w_2 so that w_2 loses activation with x_1, x_2, ..., x_p, which are among the linearly independent x_i's activated by w_1 and whose opposite vectors −x_i are also in the training set (without loss of generality). Then w_2 must now be activated by −x_1, −x_2, ..., −x_p, because if w_2^⊤ x_i < 0, we must have w_2^⊤ (−x_i) > 0.

Recall that in the proof of Claim (a), we only needed, as free variables for the linear system of d equations, the λ_i of the linearly independent x_i used to solve it and of their opposites; we can set λ to 0 for the other x_i while still satisfying the linear system. Then, suppose there exist λ_i that satisfy

β_{w_1}^{(k)} = c · ∑_{i=1}^d λ_i · x_i^{(k)},

where the x_i are the linearly independent vectors that activate w_1 and have opposite vectors in the training set, which we have proved in (a). Then we can satisfy the constraint for β_{w_2} below

β_{w_2}^{(k)} = c · ( ∑_{i=1}^p λ̂_i · (−x_i)^{(k)} + ∑_{i=p+1}^d λ_i · x_i^{(k)} )

by setting λ̂_i = −λ_i for i = 1...p. Indeed, this gives

β_{w_2}^{(k)} = c · ( ∑_{i=1}^p (−λ_i) · (−x_i)^{(k)} + ∑_{i=p+1}^d λ_i · x_i^{(k)} ) = c · ∑_{i=1}^d λ_i · x_i^{(k)}.

Thus, we can also find λ_i that satisfy the constraint for β_{w_2}. Here, we do not consider the case where w_2 is parallel to some x_i, because such w_2 have measure zero. Note that we can apply this argument iteratively because flipping the sign always works and does not create any inconsistency.
Moreover, we can show that the constraint for β′_{w_2} is satisfied by a similar argument as in the proof of Claim (b). This follows from the fact that our construction makes β_{w_1} = β_{w_2}; we can then follow the same argument as in (b) to show that β_{w_1} + β′_{w_1} · w_1 = β_{w_2} + β′_{w_2} · w_2. This completes the proof of Claim (c).

In summary, combining Claims (a), (b) and (c) gives that Claim 1 holds. That is, given our training data, the global optimum of the constrained optimization problem of finding the min-norm solution among functions that fit the training data satisfies β_w + β′_w · w = 2β_g. We also showed that this claim implies exact extrapolation, i.e., the network's learned function f(x) is equal to the true underlying function g(x) for all x ∈ R^d. This completes the proof.
B.3 PROOF OF THEOREM 2
The proof of asymptotic convergence to extrapolation builds upon our proof of exact extrapolation, i.e., Lemma 1. The proof idea is that if the training data distribution has support in all directions, then as the number of samples n → ∞, the training set asymptotically converges to an imaginary training set that satisfies the condition for exact extrapolation. Since neural tangent kernels are close when the training data are close, the predictions, i.e., the learned function, converge to a function that achieves perfect extrapolation, that is, the true underlying function.
Asymptotic convergence of data sets. We first show that the training data converge to a data set that satisfies the exact extrapolation condition in Lemma 1. Suppose the training data {x_i}_{i=1}^n are sampled from a distribution whose support contains a connected set S* that intersects all directions, i.e., for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S*. Let us denote by S the set of datasets that satisfy the condition in Lemma 1. In fact, we will use the relaxed condition from the proof of Lemma 1 (Lemma 1 in the main text uses a stricter condition for simplicity of exposition). Given a general dataset X and a dataset S ∈ S of the same size n, let σ(X, S) denote a matching of their data points, i.e., σ outputs a sequence of pairs

σ(X, S)_i = (x_i, s_i)   for i ∈ [n],   s.t.   X = {x_i}_{i=1}^n,   S = {s_i}_{i=1}^n.

Let ℓ: R^d × R^d → R be the l2 distance on pairs of points. We then define the distance between datasets, d(X, S), as the minimum sum of l2 distances of their data points over all possible matchings:

d(X, S) = min_σ ∑_{i=1}^n ℓ( σ(X, S)_i )   if |X| = |S| = n,   and   d(X, S) = ∞   if |X| ≠ |S|.

We can then define a "closest distance to a perfect dataset" function D*: X → R, which maps a dataset X to the minimum distance of X to any dataset in S:

D*(X) = min_{S ∈ S} d(X, S).
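Since d(X, S) is a minimum-cost perfect matching of data points under the l2 cost, it can be computed with the Hungarian algorithm. The sketch below is an illustrative implementation of the definition, not code from the paper.

```python
# Minimal sketch (NumPy/SciPy): the dataset distance d(X, S) as a minimum-cost perfect matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def dataset_distance(X, S):
    """d(X, S): min over matchings of the summed l2 distances; infinity if the sizes differ."""
    if len(X) != len(S):
        return np.inf
    cost = np.linalg.norm(X[:, None, :] - S[None, :, :], axis=-1)   # pairwise l2 distances
    rows, cols = linear_sum_assignment(cost)                        # optimal matching sigma
    return cost[rows, cols].sum()

X = np.array([[1.0, 0.1], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.1]])
S = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])    # e.g. an orthogonal basis and its negations
print(dataset_distance(X, S))
```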
It is easy to see that for any dataset X = {x_i}_{i=1}^n, D*(X) can be bounded by the minimum, over sub-datasets of X of size 2d, of their closest distance to a perfect dataset:

D*( {x_i}_{i=1}^n ) ≤ min_{k=1}^{⌊n/2d⌋} D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ).   (62)

This is because for any S ∈ S and any S ⊆ S′, we must have S′ ∈ S: a dataset satisfies the exact extrapolation condition as long as it contains certain key points, so adding more data does not hurt. That is, for any X_1 ⊆ X_2, we always have

D*(X_2) ≤ D*(X_1).
Now let us denote by X_n a random dataset of size n, where each x_i ∈ X_n is sampled from the training distribution. Recall that our training data {x_i}_{i=1}^n are sampled from a distribution whose support contains a connected set S* that intersects all directions, i.e., for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S*. It follows that for a random dataset X_{2d} of size 2d, the probability that D*(X_{2d}) > ε is strictly less than 1 for any ε > 0. First, there must exist S_0 = {s_i}_{i=1}^{2d} ∈ S of size 2d, e.g., an orthogonal basis and their opposite vectors. Observe that if we scale any s_i by k > 0, the resulting dataset is still in S by the definition of S. We denote the set of datasets obtained by scaling elements of S_0 by S_0. It follows that

P( D*(X_{2d}) > ε ) = P( min_{S ∈ S} d(X_{2d}, S) > ε ) ≤ P( min_{S ∈ S_0} d(X_{2d}, S) > ε )
= P( min_{S ∈ S_0} min_σ ∑_{i=1}^{2d} ℓ( σ(X_{2d}, S)_i ) > ε )
= 1 − P( min_{S ∈ S_0} min_σ ∑_{i=1}^{2d} ℓ( σ(X_{2d}, S)_i ) ≤ ε )
≤ 1 − P( min_{S ∈ S_0} min_σ max_{i=1}^{2d} ℓ( σ(X_{2d}, S)_i ) ≤ ε/(2d) )
≤ δ < 1,

where we denote the bound on P( D*(X_{2d}) > ε ) by δ < 1, and the last step follows from

P( min_{S ∈ S_0} min_σ max_{i=1}^{2d} ℓ( σ(X_{2d}, S)_i ) ≤ ε/(2d) ) > 0,
which further follows from the fact that for any s_i ∈ S_0, by the assumption on the training distribution, we can always find k > 0 so that k·s_i ∈ S*, a connected set in the support of the training distribution. By the connectivity of the support S*, k·s_i cannot be an isolated point in S*, so for any ε > 0 we must have

∫_{‖x − k·s_i‖ ≤ ε, x ∈ S*} f_X(x) dx > 0.
Hence, we can now apply Eqn. 62 to bound D*(X_n). Given any ε > 0, we have

P( D*(X_n) > ε ) = 1 − P( D*(X_n) ≤ ε )
≤ 1 − P( min_{k=1}^{⌊n/2d⌋} D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) ≤ ε )
≤ 1 − ( 1 − ∏_{k=1}^{⌊n/2d⌋} P( D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) > ε ) )
= ∏_{k=1}^{⌊n/2d⌋} P( D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) > ε )
≤ δ^{⌊n/2d⌋},

where δ < 1. This implies that D*(X_n) converges to 0 in probability, i.e.,

lim_{n→∞} P( D*(X_n) > ε ) = 0   ∀ε > 0.   (63)
Eqn. 63 says as the number of training samples n→∞, our training set will converge in probability to a dataset that satisfies the requirement for exact extrapolation.
Asymptotic convergence of predictions. Let NTK(x, x′): R^d × R^d → R denote the neural tangent kernel for a two-layer ReLU MLP. It is easy to see that if x → x*, then NTK(x, ·) → NTK(x*, ·) (Arora et al., 2019b). Let NTK_train denote the n × n kernel matrix for the training data. We have shown that our training set converges to a perfect data set that satisfies the conditions for exact extrapolation. Moreover, note that our training set will only have a finite number (not increasing with n) of x_i that are not precisely the same as those in a perfect dataset. This is because a perfect dataset only requires a finite number of key points, and the other points can be replaced by any other points while the dataset remains perfect. Thus, we have NTK_train → N*, where N* is the n × n NTK matrix for some perfect data set.
Because the neural tangent kernel is positive definite, we have NTK_train^{−1} → N*^{−1}. Recall that for any x ∈ R^d, the NTK prediction is

f_NTK(x) = ( NTK(x, x_1), ..., NTK(x, x_n) ) · NTK_train^{−1} Y

| 1. What are the main contributions and key findings of the paper regarding MLPs and GNNs?
2. How do the authors study the extrapolation power of neural networks, and what conditions do they identify for good extrapolation?
3. What are some concerns or suggestions for improvements in the paper, particularly regarding the definition of "diversity" and the title? | Review | Review
This paper investigates the extrapolation power of MLPs and GNNs (trained by gradient descent with mean squared loss) from a theoretical perspective. The authors show results of extensive experiments that back up their theoretical findings.
In particular, the authors study the question of what these neural networks learn outside the training distribution, and identify conditions under which they extrapolate well. Their findings suggest that ReLU MLPs extrapolate well in linear tasks, with a fast convergence rate (O(1/\epsilon)). GNNs (having MLP modules) extrapolate well when the non-linear operations are encoded in either the network architecture or the data representation, so that the inner MLP modules are aligned with only linear functions.
The paper is well written, ideas and definitions clearly explained and experiments laid out in detail. The theoretical contributions of the work are important as they enhance our understanding of how these networks learn and how well they generalize. These findings help us design GNNs based on the data and problem at hand. As such, this work addresses a fundamental question in GNN understanding and must be published.
Some comments/questions to the authors:
In Section 3.2, “diversity” of a distribution is informally defined in terms of training support and direction. A more thorough definition would be helpful.
The title of the paper is somewhat misleading: “from feedforward to GNN” insinuates that there are other network types that are discussed in the paper. |
ICLR | Title
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Abstract
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), do not extrapolate well in certain simple tasks, Graph Neural Networks (GNNs) – structured networks with MLP modules – have shown some success in more complex tasks. Working towards a theoretical explanation, we identify conditions under which MLPs and GNNs extrapolate well. First, we quantify the observation that ReLU MLPs quickly converge to linear functions along any direction from the origin, which implies that ReLU MLPs do not extrapolate most nonlinear functions. But, they can provably learn a linear target function when the training distribution is sufficiently “diverse”. Second, in connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features. Our theoretical analysis builds on a connection of over-parameterized networks to the neural tangent kernel. Empirically, our theory holds across different training settings.
1 INTRODUCTION
Humans extrapolate well in many tasks. For example, we can apply arithmetics to arbitrarily large numbers. One may wonder whether a neural network can do the same and generalize to examples arbitrarily far from the training data (Lake et al., 2017). Curiously, previous works report mixed extrapolation results with neural networks. Early works demonstrate feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), fail to extrapolate well when learning simple polynomial functions (Barnard & Wessels, 1992; Haley & Soloway, 1992). However, recent works show Graph Neural Networks (GNNs) (Scarselli et al., 2009), a class of structured networks with MLP building blocks, can generalize to graphs much larger than training graphs in challenging algorithmic tasks, such as predicting the time evolution of physical systems (Battaglia et al., 2016), learning graph algorithms (Velickovic et al., 2020), and solving mathematical equations (Lample & Charton, 2020).
To explain this puzzle, we formally study how neural networks trained by gradient descent (GD) extrapolate, i.e., what they learn outside the support of training distribution. We say a neural network extrapolates well if it learns a task outside the training distribution. At first glance, it may seem that neural networks can behave arbitrarily outside the training distribution since they have high capacity (Zhang et al., 2017) and are universal approximators (Cybenko, 1989; Funahashi, 1989; Hornik et al., 1989; Kurkova, 1992). However, neural networks are constrained by gradient descent training (Hardt et al., 2016; Soudry et al., 2018). In our analysis, we explicitly consider such implicit bias through the analogy of the training dynamics of over-parameterized neural networks and kernel regression via the neural tangent kernel (NTK) (Jacot et al., 2018).
Starting with feedforward networks, the simplest neural networks and building blocks of more complex architectures such as GNNs, we establish that the predictions of over-parameterized MLPs with ReLU activation trained by GD converge to linear functions along any direction from the origin. We prove a convergence rate for two-layer networks and empirically observe that convergence often occurs close to the training data (Figure 1), which suggests ReLU MLPs cannot extrapolate well for most nonlinear tasks. We emphasize that our results do not follow from the fact that ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019; Hein et al., 2019). While having finitely many linear regions implies ReLU MLPs eventually become linear, it does not say whether MLPs will learn the correct target function close to the training distribution. In contrast, our results are non-asymptotic and quantify what kind of functions MLPs will learn close to the training distribution. Second, we identify a condition when MLPs extrapolate well: the task is linear and the geometry of the training distribution is sufficiently “diverse”. To our knowledge, our results are the first extrapolation results of this kind for feedforward neural networks.
We then relate our insights into feedforward neural networks to GNNs, to explain why GNNs extrapolate well in some algorithmic tasks. Prior works report successful extrapolation for tasks that can be solved by dynamic programming (DP) (Bellman, 1966), which has a computation structure aligned with GNNs (Xu et al., 2020). DP updates can often be decomposed into nonlinear and linear steps. Hence, we hypothesize that GNNs trained by GD can extrapolate well in a DP task, if we encode appropriate non-linearities in the architecture and input representation (Figure 2). Importantly, encoding non-linearities may be unnecessary for GNNs to interpolate, because the MLP modules can easily learn many nonlinear functions inside the training distribution (Cybenko, 1989; Hornik et al., 1989; Xu et al., 2020), but it is crucial for GNNs to extrapolate correctly. We prove this hypothesis for a simplified case using Graph NTK (Du et al., 2019b). Empirically, we validate the hypothesis on three DP tasks: max degree, shortest paths, and n-body problem. We show GNNs with appropriate architecture, input representation, and training distribution can predict well on graphs with unseen sizes, structures, edge weights, and node features. Our theory explains the empirical success in previous works and suggests their limitations: successful extrapolation relies on encoding task-specific non-linearities, which requires domain knowledge or extensive model search. From a broader standpoint, our insights go beyond GNNs and apply broadly to other neural networks.
To summarize, we study how neural networks extrapolate. First, ReLU MLPs trained by GD converge to linear functions along directions from the origin with a rate of O(1/t). Second, to explain why GNNs extrapolate well in some algorithmic tasks, we prove that ReLU MLPs can extrapolate well in linear tasks, leading to a hypothesis: a neural network can extrapolate well when appropriate nonlinearities are encoded into the architecture and features. We prove this hypothesis for a simplified case and provide empirical support for more general settings.
1.1 RELATED WORK
Early works show example tasks where MLPs do not extrapolate well, e.g. learning simple polynomials (Barnard & Wessels, 1992; Haley & Soloway, 1992). We instead show a general pattern of how ReLU MLPs extrapolate and identify conditions for MLPs to extrapolate well. More recent works study the implicit biases induced on MLPs by gradient descent, for both the NTK and mean field regimes (Bietti & Mairal, 2019; Chizat & Bach, 2018; Song et al., 2018). Related to our results, some works show MLP predictions converge to “simple” piecewise linear functions, e.g., with few linear regions (Hanin & Rolnick, 2019; Maennel et al., 2018; Savarese et al., 2019; Williams et al., 2019). Our work differs in that none of these works explicitly studies extrapolation, and some focus only on one-dimensional inputs. Recent works also show that in high-dimensional settings of the NTK regime, MLP is asymptotically at most a linear predictor in certain scaling limits (Ba et al., 2020; Ghorbani et al., 2019). We study a different setting (extrapolation), and our analysis is non-asymptotic in nature and does not rely on random matrix theory.
Prior works explore GNN extrapolation by testing on larger graphs (Battaglia et al., 2018; Santoro et al., 2018; Saxton et al., 2019; Velickovic et al., 2020). We are the first to theoretically study GNN extrapolation, and we complete the notion of extrapolation to include unseen features and structures.
2 PRELIMINARIES
We begin by introducing our setting. Let X be the domain of interest, e.g., vectors or graphs. The task is to learn an underlying function g: X → R with a training set {(x_i, y_i)}_{i=1}^n ⊂ D, where y_i = g(x_i) and D is the support of the training distribution. Previous works have extensively studied in-distribution generalization where the training and the test distributions are identical (Valiant, 1984; Vapnik, 2013); i.e., D = X. In contrast, extrapolation addresses predictions on a domain X that is larger than the support of the training distribution D. We will say a model extrapolates well if it has a small extrapolation error.

Definition 1. (Extrapolation error). Let f: X → R be a model trained on {(x_i, y_i)}_{i=1}^n ⊂ D with underlying function g: X → R. Let P be a distribution over X \ D and let ℓ: R × R → R be a loss function. We define the extrapolation error of f as E_{x∼P} [ℓ(f(x), g(x))].
We focus on neural networks trained by gradient descent (GD) or its variants with squared loss. We study two network architectures: feedforward and graph neural networks.
Graph Neural Networks. GNNs are structured networks operating on graphs with MLP modules (Battaglia et al., 2018; Xu et al., 2019). Let G = (V, E) be a graph. Each node u ∈ V has a feature vector x_u, and each edge (u, v) ∈ E has a feature vector w_(u,v). GNNs recursively compute node representations h_u^(k) at iteration k (Gilmer et al., 2017; Xu et al., 2018). Initially, h_u^(0) = x_u. For k = 1..K, GNNs update h_u^(k) by aggregating the neighbor representations. We can optionally compute a graph representation h_G by aggregating the final node representations. That is,

h_u^(k) = ∑_{v ∈ N(u)} MLP^(k)( h_u^(k−1), h_v^(k−1), w_(v,u) ),   h_G = MLP^(K+1)( ∑_{u ∈ G} h_u^(K) ).   (1)

The final output is the graph representation h_G or the final node representations h_u^(K), depending on the task. We refer to the neighbor aggregation step for h_u^(k) as aggregation and to the pooling step in h_G as readout. Previous works typically use sum-aggregation and sum-readout (Battaglia et al., 2018). Our results indicate why replacing them may help extrapolation (Section 4).
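For concreteness, here is a minimal PyTorch sketch of one aggregation step of Eqn. 1 with sum-aggregation and a sum-readout. The widths, graph encoding, and helper names are illustrative choices, not the architecture used in the experiments.

```python
# Minimal sketch (PyTorch): one sum-aggregation step and sum-readout of Eqn. 1.
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden_dim),
                                 nn.ReLU(), nn.Linear(hidden_dim, node_dim))

    def forward(self, h, edge_index, edge_attr):
        # h: [num_nodes, node_dim]; edge_index: [2, num_edges] with rows (v, u); edge_attr: [num_edges, edge_dim]
        v, u = edge_index
        messages = self.mlp(torch.cat([h[u], h[v], edge_attr], dim=-1))   # MLP^(k)(h_u, h_v, w_(v,u))
        out = torch.zeros_like(h)
        out.index_add_(0, u, messages)                                    # sum over neighbors v of u
        return out

def sum_readout(h, readout_mlp):
    return readout_mlp(h.sum(dim=0))                                      # h_G = MLP^(K+1)(sum_u h_u^(K))

# Tiny example: a 3-node path graph with both edge directions listed.
h = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
edge_attr = torch.zeros(4, 1)
layer = GNNLayer(node_dim=4, edge_dim=1, hidden_dim=16)
print(layer(h, edge_index, edge_attr).shape)
```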
3 HOW FEEDFORWARD NEURAL NETWORKS EXTRAPOLATE
Feedforward networks are the simplest neural networks and building blocks of more complex architectures such as GNNs, so we first study how they extrapolate when trained by GD. Throughout the paper, we assume ReLU activation. Section 3.3 contains preliminary results for other activations.
3.1 LINEAR EXTRAPOLATION BEHAVIOR OF RELU MLPS
By architecture, ReLU networks learn piecewise linear functions, but what do these regions precisely look like outside the support of the training data? Figure 1 illustrates examples of how ReLU MLPs extrapolate when trained by GD on various nonlinear functions. These examples suggest that outside the training support, the predictions quickly become linear along directions from the origin. We systematically verify this pattern by linear regression on the MLPs' predictions: the coefficient of determination (R^2) is always greater than 0.99 (Appendix C.2). That is, ReLU MLPs "linearize" almost immediately outside the training data range.
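A small sketch of this check: train a ReLU MLP on a nonlinear target inside [-1, 1]^2, evaluate it along a ray outside the training range, and report the R^2 of a linear fit. The architecture and hyperparameters are illustrative and not the exact settings of Appendix C.2.

```python
# Minimal sketch (PyTorch/NumPy): linearity check of MLP predictions outside the training range.
import numpy as np
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1))
X = torch.rand(1024, 2) * 2 - 1                       # training support: [-1, 1]^2
y = (X ** 2).sum(dim=1, keepdim=True)                 # quadratic target
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    ((net(X) - y) ** 2).mean().backward()
    opt.step()

v = np.array([1.0, 1.0]) / np.sqrt(2)                 # a direction from the origin
t = np.linspace(2.0, 10.0, 200)                       # well outside the training range
rays = torch.tensor(t[:, None] * v[None, :], dtype=torch.float32)
preds = net(rays).detach().numpy().ravel()

A = np.stack([t, np.ones_like(t)], axis=1)            # fit preds ~ a * t + b
coef, _, _, _ = np.linalg.lstsq(A, preds, rcond=None)
r2 = 1 - (preds - A @ coef).var() / preds.var()
print("R^2 of the linear fit outside the training range:", r2)
```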
We formalize this observation using the implicit biases of neural networks trained by GD via the neural tangent kernel (NTK): optimization trajectories of over-parameterized networks trained by GD are equivalent to those of kernel regression with a specific neural tangent kernel, under a set of assumptions called the “NTK regime” (Jacot et al., 2018). We provide an informal definition here; for further details, we refer the readers to Jacot et al. (2018) and Appendix A.
Definition 2. (Informal) A neural network trained in the NTK regime is infinitely wide, randomly initialized with certain scaling, and trained by GD with infinitesimal steps.
Prior works analyze optimization and in-distribution generalization of over-parameterized neural networks via NTK (Allen-Zhu et al., 2019a;b; Arora et al., 2019a;b; Cao & Gu, 2019; Du et al., 2019c;a; Li & Liang, 2018; Nitanda & Suzuki, 2021). We instead analyze extrapolation.
Theorem 1 formalizes our observation from Figure 1: outside the training data range, along any direction tv from the origin, the prediction of a two-layer ReLU MLP quickly converges to a linear function with rate O(1/t). The linear coefficients β_v and the constant terms in the convergence rate depend on the training data and the direction v. The proof is in Appendix B.1.

Theorem 1. (Linear extrapolation). Suppose we train a two-layer ReLU MLP f: R^d → R with squared loss in the NTK regime. For any direction v ∈ R^d, let x_0 = tv. As t → ∞, f(x_0 + hv) − f(x_0) → β_v · h for any h > 0, where β_v is a constant linear coefficient. Moreover, given ε > 0, for t = O(1/ε), we have |(f(x_0 + hv) − f(x_0))/h − β_v| < ε.
ReLU networks have finitely many linear regions (Arora et al., 2018; Hanin & Rolnick, 2019), hence their predictions eventually become linear. In contrast, Theorem 1 is a more fine-grained analysis of how MLPs extrapolate and provides a convergence rate. While Theorem 1 assumes two-layer networks in the NTK regime, experiments confirm that the linear extrapolation behavior happens across networks with different depths, widths, learning rates, and batch sizes (Appendix C.1 and C.2). Our proof technique potentially also extends to deeper networks.
Theorem 1 implies which target functions a ReLU MLP may be able to match outside the training data: only functions that are almost-linear along the directions away from the origin. Indeed, Figure 4a shows ReLU MLPs do not extrapolate target functions such as x^⊤Ax (quadratic), ∑_{i=1}^d cos(2π · x^(i)) (cos), and ∑_{i=1}^d √(x^(i)) (sqrt), where x^(i) is the i-th dimension of x. With suitable hyperparameters, MLPs extrapolate the L1 norm correctly, which satisfies the directional linearity condition.
Figure 4a provides one more positive result: MLPs extrapolate linear target functions well, across many different hyperparameters. While learning linear functions may seem very limited at first, in Section 4 this insight will help explain extrapolation properties of GNNs in non-linear practical tasks. Before that, we first theoretically analyze when MLPs extrapolate well.
3.2 WHEN RELU MLPS PROVABLY EXTRAPOLATE WELL
Figure 4a shows that MLPs can extrapolate well when the target function is linear. However, this is not always true. In this section, we show that successful extrapolation depends on the geometry of training data. Intuitively, the training distribution must be “diverse” enough for correct extrapolation.
We provide two conditions that relate the geometry of the training data to extrapolation. Lemma 1 states that over-parameterized MLPs can learn a linear target function with only 2d examples.

Lemma 1. Let g(x) = β^⊤x be the target function for β ∈ R^d. Suppose {x_i}_{i=1}^n contains an orthogonal basis {x̂_i}_{i=1}^d and {−x̂_i}_{i=1}^d. If we train a two-layer ReLU MLP f on {(x_i, y_i)}_{i=1}^n with squared loss in the NTK regime, then f(x) = β^⊤x for all x ∈ R^d.

Lemma 1 is mainly of theoretical interest, as the 2d examples need to be carefully chosen. Theorem 2 builds on Lemma 1 and identifies a more practical condition for successful extrapolation: if the support of the training distribution covers all directions (e.g., a hypercube that covers the origin), the MLP converges to a linear target function with sufficient training data.

Theorem 2. (Conditions for extrapolation). Let g(x) = β^⊤x be the target function for β ∈ R^d. Suppose {x_i}_{i=1}^n is sampled from a distribution whose support D contains a connected subset S, where for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S. If we train a two-layer ReLU MLP f: R^d → R on {(x_i, y_i)}_{i=1}^n with squared loss in the NTK regime, then f(x) converges to β^⊤x in probability as n → ∞.
Experiments: geometry of training data affects extrapolation. The condition in Theorem 2 formalizes the intuition that the training distribution must be “diverse” for successful extrapolation, e.g., D includes all directions. Empirically, the extrapolation error is indeed small when the condition of Theorem 2 is satisfied (“all” in Figure 4b). In contrast, the extrapolation error is much larger when the training examples are restricted to only some directions (Figure 4b and Figure 3).
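A minimal sketch of this comparison: the same ReLU MLP is trained on a linear target with a training support that either covers all directions around the origin or is restricted to the positive orthant, and extrapolation error is then measured well outside both supports. Hyperparameters are illustrative, not the settings of Figure 4b.

```python
# Minimal sketch (PyTorch): training-data geometry and extrapolation of a linear target.
import torch

def train_mlp(X, y, steps=3000, lr=1e-3, seed=0):
    torch.manual_seed(seed)
    net = torch.nn.Sequential(torch.nn.Linear(2, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ((net(X) - y) ** 2).mean().backward()
        opt.step()
    return net

beta = torch.tensor([[1.5], [-2.0]])
target = lambda X: X @ beta                               # linear target g(x) = beta^T x

X_all = torch.rand(2048, 2) * 2 - 1                       # covers all directions around the origin
X_restricted = torch.rand(2048, 2)                        # positive orthant only
X_test = (torch.rand(512, 2) * 8 + 2) * torch.sign(torch.randn(512, 2))  # coordinates in +/-[2, 10]

for name, X_tr in [("all", X_all), ("restricted", X_restricted)]:
    net = train_mlp(X_tr, target(X_tr))
    err = ((net(X_test) - target(X_test)) ** 2).mean().item()
    print(name, "extrapolation MSE:", err)
```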
Relating to previous works, Theorem 2 suggests why spurious correlations may hurt extrapolation, complementing the causality arguments (Arjovsky et al., 2019; Peters et al., 2016; Rojas-Carulla et al., 2018). When the training data has spurious correlations, some combinations of features are missing; e.g., camels might only appear in deserts in an image collection. Therefore, the condition for Theorem 2 no longer holds, and the model may extrapolate incorrectly. Theorem 2 is also analogous to an identifiability condition for linear models, but stricter. We can uniquely identify a linear function if the training data has full (feature) rank. MLPs are more expressive, so identifying the linear target function requires additional constraints.
To summarize, we analyze how ReLU MLPs extrapolate and provide two insights: (1) MLPs cannot extrapolate most nonlinear tasks due to their linear extrapolation (Theorem 1); and (2) MLPs extrapolate well when the target function is linear, if the training distribution is “diverse” (Theorem 2). In the next section, these results help us understand how more complex networks extrapolate.
3.3 MLPS WITH OTHER ACTIVATION FUNCTIONS
Before moving on to GNNs, we complete the picture of MLPs with experiments on other activation functions: tanh σ(x) = tanh(x), cosine σ(x) = cos(x) (Lapedes & Farber, 1987; McCaughan, 1997; Sopena & Alquezar, 1994), and quadratic σ(x) = x^2 (Du & Lee, 2018; Livni et al., 2014). Details are in Appendix C.4. MLPs extrapolate well when the activation and the target function are similar; e.g., tanh activation extrapolates well when learning tanh, but not other functions (Figure 5). Moreover, each activation function has different limitations. To extrapolate the tanh function with tanh activation, the training data range has to be sufficiently wide. When learning a quadratic function with quadratic activation, only two-layer networks extrapolate well, as more layers lead to higher-order polynomials. Cosine activations are hard to optimize for high-dimensional data, so we only consider one/two-dimensional cosine target functions.
4 HOW GRAPH NEURAL NETWORKS EXTRAPOLATE
Above, we saw that extrapolation in nonlinear tasks is hard for MLPs. Despite this limitation, GNNs have been shown to extrapolate well in some nonlinear algorithmic tasks, such as intuitive physics (Battaglia et al., 2016; Janner et al., 2019), graph algorithms (Battaglia et al., 2018; Velickovic et al., 2020), and symbolic mathematics (Lample & Charton, 2020). To address this discrepancy, we build on our MLP results and study how GNNs trained by GD extrapolate.
4.1 HYPOTHESIS: LINEAR ALGORITHMIC ALIGNMENT HELPS EXTRAPOLATION
We start with an example: training GNNs to solve the shortest path problem. For this task, prior works observe that a modified GNN architecture with min-aggregation can generalize to graphs larger than those in the training set (Battaglia et al., 2018; Velickovic et al., 2020):
h_u^(k) = min_{v ∈ N(u)} MLP^(k)( h_u^(k−1), h_v^(k−1), w_(v,u) ).   (2)
We first provide an intuitive explanation (Figure 2a). Shortest path can be solved by the Bellman-Ford (BF) algorithm (Bellman, 1958) with the following update:

d[k][u] = min_{v ∈ N(u)} d[k−1][v] + w(v, u),   (3)

where w(v, u) is the weight of edge (v, u), and d[k][u] is the shortest distance to node u within k steps. The two equations can be easily aligned: GNNs simulate the BF algorithm if their MLP modules learn the linear function d[k−1][v] + w(v, u). Since MLPs can extrapolate linear tasks, this "alignment" may explain why min-aggregation GNNs can extrapolate well in this task.
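For reference, a short Python sketch of the Bellman-Ford update in Eqn. 3 on a small weighted graph; the graph encoding here is an illustrative choice. Inside the min, the update is exactly the linear function d[k−1][v] + w(v, u) that a min-aggregation GNN's MLP modules would need to learn.

```python
# Minimal sketch: the Bellman-Ford update of Eqn. 3 on a weighted graph given as adjacency lists.
def bellman_ford(neighbors, weights, source, num_nodes):
    INF = float("inf")
    d = [INF] * num_nodes
    d[source] = 0.0
    for _ in range(num_nodes - 1):                         # k = 1 .. |V| - 1 iterations
        d_next = list(d)
        for u in range(num_nodes):
            for v in neighbors[u]:
                d_next[u] = min(d_next[u], d[v] + weights[(v, u)])   # d[k-1][v] + w(v, u)
        d = d_next
    return d

# Tiny example: a weighted triangle plus one extra node.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
weights = {(0, 1): 1.0, (1, 0): 1.0, (0, 2): 4.0, (2, 0): 4.0,
           (1, 2): 2.0, (2, 1): 2.0, (2, 3): 1.5, (3, 2): 1.5}
print(bellman_ford(neighbors, weights, source=0, num_nodes=4))
```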
For comparison, we can reason why we would not expect GNNs with the more commonly used sum-aggregation (Eqn. 1) to extrapolate well in this task. With sum-aggregation, the MLP modules need to learn a nonlinear function to simulate the BF algorithm, but Theorem 1 suggests that they will not extrapolate most nonlinear functions outside the training support.
We can generalize the above intuition to other algorithmic tasks. Many tasks where GNNs extrapolate well can be solved by dynamic programming (DP) (Bellman, 1966), an algorithmic paradigm with a recursive structure similar to GNNs’ (Eqn. 1) (Xu et al., 2020).
Definition 3. Dynamic programming (DP) is a recursive procedure with updates
Answer[k][s] = DP-Update({Answer[k − 1][s′]} , s′ = 1...n), (4)
where Answer[k][s] is the solution to a sub-problem indexed by iteration k and state s, and DP-Update is a task-specific update function that solves the sub-problem based on the previous iteration.
From a broader standpoint, we hypothesize that: if we encode appropriate non-linearities into the model architecture and input representations so that the MLP modules only need to learn nearly linear steps, then the resulting neural network can extrapolate well.
Hypothesis 1. (Linear algorithmic alignment). Let f: X → R be the underlying function and N a neural network with m MLP modules. Suppose there exist m linear functions {g_i}_{i=1}^m so that by replacing N's MLP modules with the g_i, N simulates f. Then, given ε > 0, there exists {(x_i, f(x_i))}_{i=1}^n ⊂ D ⊊ X so that N trained on {(x_i, f(x_i))}_{i=1}^n by GD with squared loss learns f̂ with ‖f̂ − f‖ < ε.
Our hypothesis builds on the algorithmic alignment framework of (Xu et al., 2020), which states that a neural network interpolates well if the modules are “aligned” to easy-to-learn (possibly nonlinear) functions. Successful extrapolation is harder: the modules need to align with linear functions.
Applications of linear algorithmic alignment. In general, linear algorithmic alignment is not restricted to GNNs and applies broadly to neural networks. To satisfy the condition, we can encode appropriate nonlinear operations in the architecture or input representation (Figure 2). Learning DP algorithms with GNNs is one example of encoding non-linearity in the architecture (Battaglia et al., 2018; Corso et al., 2020). Another example is to encode log-and-exp transforms in the architecture to help extrapolate multiplication in arithmetic tasks (Trask et al., 2018; Madsen & Johansen, 2020). Neural symbolic programs take a step further and encode a library of symbolic operations to help extrapolation (Johnson et al., 2017; Mao et al., 2019; Yi et al., 2018).
For some tasks, it may be easier to change the input representation (Figure 2b). Sometimes, we can decompose the target function f as f = g ◦ h into a feature embedding h and a “simpler” target function g that our model can extrapolate well. We can obtain h via specialized features or feature transforms using domain knowledge (Lample & Charton, 2020; Webb et al., 2020), or via representation learning (e.g., BERT) with unlabeled out-of-distribution data in X \ D (Chen et al., 2020; Devlin et al., 2019; Hu et al., 2020; Mikolov et al., 2013b; Peters et al., 2018). This brings a new perspective of how representations help extrapolation in various application areas. For example, in natural language processing, pretrained representations (Mikolov et al., 2013a; Wu & Dredze, 2019) and feature transformation using domain knowledge (Yuan et al., 2020; Zhang et al., 2019) help models generalize across languages, a special type of extrapolation. In quantitative finance, identifying the right “factors” or features is crucial for deep learning models as the financial markets may frequently be in extrapolation regimes (Banz, 1981; Fama & French, 1993; Ross, 1976).
Linear algorithmic alignment explains successful extrapolation in the literature and suggests that extrapolation is harder in general: encoding appropriate non-linearity often requires domain expertise or model search. Next, we provide theoretical and empirical support for our hypothesis.
4.2 THEORETICAL AND EMPIRICAL SUPPORT
We validate our hypothesis on three DP tasks: max degree, shortest path, and n-body problem, and prove the hypothesis for max degree. We highlight the role of graph structures in extrapolation.
Theoretical analysis. We start with a simple yet fundamental task: learning the max degree of a graph, a special case of DP with one iteration. As a corollary of Theorem 1, the commonly used sum-based GNN (Eqn. 1) cannot extrapolate well (proof in Appendix B.4).
Corollary 1. GNNs with sum-aggregation and sum-readout do not extrapolate well in Max Degree.
To achieve linear algorithmic alignment, we can encode the only non-linearity, the max function, in the readout. Theorem 3 confirms that a GNN with max-readout can extrapolate well in this task.
Theorem 3. (Extrapolation with GNNs). Assume all nodes have the same feature. Let g and g′ be the max and min degree function, respectively. Let {(G_i, g(G_i))}_{i=1}^n be the training set. If {(g(G_i), g′(G_i), g(G_i) · N_i^max, g′(G_i) · N_i^min)}_{i=1}^n spans R^4, where N_i^max and N_i^min are the number of nodes that have max/min degree on G_i, then one-layer max-readout GNNs trained on {(G_i, g(G_i))}_{i=1}^n with squared loss in the NTK regime learn g.
Theorem 3 does not follow immediately from Theorem 2, because MLP modules in GNNs only receive indirect supervision. We analyze the Graph NTK (Du et al., 2019b) to prove Theorem 3 in Appendix B.5. While Theorem 3 assumes identical node features, we empirically observe similar results for both identical and non-identical features (Figure 16 in Appendix).
Interpretation of conditions. The condition in Theorem 3 is analogous to that in Theorem 2. Both theorems require diverse training data, measured by graph structure in Theorem 3 or directions in Theorem 2. In Theorem 3, the condition is violated if all training graphs have the same max or min node degrees, e.g., when training data are from one of the following families: path, C-regular graphs (regular graphs with degree C), cycle, and ladder.
Experiments: architectures that help extrapolation. We validate our theoretical analysis with two DP tasks: max degree and shortest path (details in Appendix C.5 and C.6). While previous works only test on graphs with different sizes (Battaglia et al., 2018; Velickovic et al., 2020), we also test on graphs with unseen structure, edge weights and node features. The results support our theory. For max degree, GNNs with max-readout are better than GNNs with sum-readout (Figure 6a), confirming Corollary 1 and Theorem 3. For shortest path, GNNs with min-readout and min-aggregation are better than GNNs with sum-readout (Figure 6a).
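The following minimal sketch (not the experimental code; the graph is hypothetical) shows why the readout matters for max degree: with identical node features, sum-aggregation already produces node degrees through a linear step, so a max-readout returns the answer directly, whereas a sum-readout would require the readout MLP to learn a non-linear function of the degrees.

```python
import numpy as np

# Simplified illustration: for max degree, a GNN with sum-aggregation and
# MAX-readout only needs linear MLP modules, because degree(u) is the sum of the
# constant feature 1 over u's neighbors, and the answer is the max over nodes.
def max_degree_gnn(adj, readout):
    node_feat = np.ones(adj.shape[0])   # identical node features
    hidden = adj @ node_feat            # sum-aggregation -> node degrees
    return readout(hidden)              # graph-level readout

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(max_degree_gnn(adj, np.max))   # 3.0, the max degree
print(max_degree_gnn(adj, np.sum))   # 8.0 = 2|E|, not the max degree
```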
Experiments confirm the importance of the training graph structure (Figure 7). Interestingly, the two tasks favor different graph structures. For max degree, as Theorem 3 predicts, GNNs extrapolate well when trained on trees, complete graphs, expanders, and general graphs, and extrapolation errors are higher when trained on 4-regular, cycle, or ladder graphs. For shortest path, extrapolation errors follow a U-shaped curve as we change the sparsity of the training graphs (Figure 7b and Figure 18 in Appendix). Intuitively, models trained on graphs that are too sparse or too dense likely learn degenerate solutions.
Experiments: representations that help extrapolation. Finally, we show a good input representation helps extrapolation. We study the n-body problem (Battaglia et al., 2016; Watters et al., 2017) (Appendix C.7), that is, predicting the time evolution of n objects in a gravitational system. Following previous work, the input is a complete graph where the nodes are the objects (Battaglia et al., 2016). The node feature for u is the concatenation of the object's mass m_u, position x_u^(t), and velocity v_u^(t) at time t. The edge features are set to zero. We train GNNs to predict the velocity of each object u at time t+1. The true velocity f(G; u) for object u is approximately
f(G; u) ≈ v_u^t + a_u^t · dt,   a_u^t = C · Σ_{v ≠ u} ( m_v / ‖x_u^t − x_v^t‖_2^3 ) · ( x_v^t − x_u^t ),   (5)
where C is a constant. To learn f, the MLP modules need to learn a nonlinear function. Therefore, GNNs do not extrapolate well to unseen masses or distances ("original features" in Figure 6b). We instead use an improved representation h(G) to encode non-linearity. At time t, we transform the edge features of (u, v) from zero to w_(u,v)^(t) = m_v · ( x_v^(t) − x_u^(t) ) / ‖x_u^(t) − x_v^(t)‖_2^3. The new edge features do not add information, but the MLP modules now only need to learn linear functions, which helps extrapolation ("improved features" in Figure 6b).
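A sketch of this feature transform (the variable names and the example system are our own) is given below; summing the transformed edge features over neighbors yields, up to the constant C, the acceleration in Eqn. 5, so the MLP modules only need a linear step.

```python
import numpy as np

# Sketch of the improved input representation described above: the edge feature
# for (u, v) is m_v * (x_v - x_u) / ||x_u - x_v||_2^3, so the per-object
# acceleration is (up to the constant C) just the SUM of incoming edge features.
# Masses and positions below are hypothetical.
def improved_edge_features(mass, pos):
    n = pos.shape[0]
    feats = np.zeros((n, n, pos.shape[1]))
    for u in range(n):
        for v in range(n):
            if u == v:
                continue
            diff = pos[v] - pos[u]
            feats[u, v] = mass[v] * diff / np.linalg.norm(diff) ** 3
    return feats

mass = np.array([1.0, 2.0, 0.5])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
edge = improved_edge_features(mass, pos)
print(edge.sum(axis=1))   # proportional to a_u in Eqn. 5 (missing only C)
```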
5 CONNECTIONS TO OTHER OUT-OF-DISTRIBUTION SETTINGS
We discuss several related settings. Intuitively, from the viewpoint of our results above, methods in related settings may improve extrapolation by 1) learning useful non-linearities beyond the training data range and 2) mapping relevant test data to the training data range.
Domain adaptation studies generalization to a specific target domain (Ben-David et al., 2010; Blitzer et al., 2008; Mansour et al., 2009). Typical strategies adjust the training process: for instance, use unlabeled samples from the target domain to align the target and source distributions (Ganin et al., 2016; Zhao et al., 2018). Using target domain data during training may induce useful non-linearities and may reduce the need for extrapolation by matching the target and source distributions, though the correctness of the learned mapping depends on the label distribution (Zhao et al., 2019).
Self-supervised learning on a large amount of unlabeled data can learn useful non-linearities beyond the labeled training data range (Chen et al., 2020; Devlin et al., 2019; He et al., 2020; Peters et al., 2018). Hence, our results suggest an explanation why pre-trained representations such as BERT improve out-of-distribution robustness (Hendrycks et al., 2020). In addition, self-supervised learning could map semantically similar data to similar representations, so some out-of-domain examples might fall inside the training distribution after the mapping.
Invariant models aim to learn features that respect specific invariances across multiple training distributions (Arjovsky et al., 2019; Rojas-Carulla et al., 2018; Zhou et al., 2021). If the model indeed learns these invariances, which can happen in the linear case and when there are confounders or anti-causal variables (Ahuja et al., 2021; Rosenfeld et al., 2021), this may essentially increase the training data range, since variations in the invariant features may be ignored by the model.
Distributional robustness considers small adversarial perturbations of the data distribution, and ensures that the model performs well under these (Goh & Sim, 2010; Sagawa et al., 2020; Sinha et al., 2018; Staib & Jegelka, 2019). We instead look at more global perturbations. Still, one would expect that modifications that help extrapolation in general also improve robustness to local perturbations.
6 CONCLUSION
This paper is an initial step towards formally understanding how neural networks trained by gradient descent extrapolate. We identify conditions under which MLPs and GNNs extrapolate as desired. We also suggest an explanation for how GNNs have been able to extrapolate well in complex algorithmic tasks: encoding appropriate non-linearity in the architecture and features can help extrapolation. Our results and hypothesis agree with the empirical results, both in this paper and in the literature.
ACKNOWLEDGMENTS
We thank Ruosong Wang, Tianle Cai, Han Zhao, Yuichi Yoshida, Takuya Konishi, Toru Lin, Weihua Hu, Matt J. Staib, Yichao Zhou, Denny Wu, Tianyi Yang, and Dingli (Leo) Yu for insightful discussions. This research was supported by NSF CAREER award 1553284, NSF III 1900933, and a Chevron-MIT Energy Fellowship. This research was also supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. MZ was supported by ODNI, IARPA, via the BETTER Program contract 2019-19051600005. The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Department of Defense, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
A THEORETICAL BACKGROUND
In this section, we introduce theoretical background on neural tangent kernel (NTK), which draws an equivalence between the training dynamics of infinitely-wide (or ultra-wide) neural networks and that of kernel regression with respect to the neural tangent kernel.
Consider a general neural network f(θ, x) : X → R where θ ∈ R^m denotes the parameters of the network and x ∈ X is the input. Suppose we train the neural network by minimizing the squared loss over the training data, ℓ(θ) = (1/2) Σ_{i=1}^n ( f(θ, x_i) − y_i )^2, by gradient descent with infinitesimally small learning rate, i.e., dθ(t)/dt = −∇ℓ(θ(t)). Let u(t) = ( f(θ(t), x_i) )_{i=1}^n be the network outputs. u(t) follows the dynamics

du(t)/dt = −H(t) ( u(t) − y ),   (6)

where H(t) is an n × n matrix whose (i, j)-th entry is

H(t)_{ij} = ⟨ ∂f(θ(t), x_i)/∂θ , ∂f(θ(t), x_j)/∂θ ⟩.   (7)
A line of works shows that for sufficiently wide networks, H(t) stays almost constant during training, i.e., H(t) = H(0) in the limit (Arora et al., 2019a;b; Allen-Zhu et al., 2019a; Du et al., 2019c;a; Li & Liang, 2018; Jacot et al., 2018). Suppose network parameters are randomly initialized with a certain scaling; as the network width goes to infinity, H(0) converges to a fixed matrix, the neural tangent kernel (NTK) (Jacot et al., 2018):

NTK(x, x′) = E_{θ ∼ W} ⟨ ∂f(θ, x)/∂θ , ∂f(θ, x′)/∂θ ⟩,   (8)

where W is Gaussian. Therefore, the learning dynamics of sufficiently wide neural networks in this regime is equivalent to that of kernel gradient descent with respect to the NTK. This implies the function learned by a neural network at convergence on any specific training set, denoted by f_NTK(x), can be precisely characterized, and is equivalent to the following kernel regression solution

f_NTK(x) = ( NTK(x, x_1), ..., NTK(x, x_n) ) · NTK_train^{−1} Y,   (9)

where NTK_train is the n × n kernel for the training data, NTK(x, x_i) is the kernel value between test data x and training data x_i, and Y is the training labels.
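For concreteness, the kernel regression solution in Eqn. 9 can be computed as below; here `ntk` stands for any kernel function, and the stand-in kernel is only a placeholder for illustration, not the closed-form two-layer ReLU NTK.

```python
import numpy as np

# Sketch of the kernel-regression solution in Eqn. 9. `ntk` is any kernel
# function k(x, x'); the closed-form ReLU NTK of Jacot et al. (2018) could be
# plugged in, but below we use a simple stand-in kernel purely for illustration.
def ntk_predict(ntk, X_train, Y_train, x_test):
    K_train = np.array([[ntk(xi, xj) for xj in X_train] for xi in X_train])
    k_test = np.array([ntk(x_test, xi) for xi in X_train])
    return k_test @ np.linalg.solve(K_train, Y_train)

def stand_in_kernel(x, xp):   # placeholder kernel, NOT the exact NTK formula
    return (x @ xp + 1.0) ** 2

X_train = np.array([[0.0], [1.0], [2.0]])
Y_train = np.array([0.0, 1.0, 4.0])
print(ntk_predict(stand_in_kernel, X_train, Y_train, np.array([1.5])))
```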
We can in fact exactly calculate the neural tangent kernel matrix for certain architectures and activation functions. The exact formula of NTK with ReLU activation has been derived for feedforward neural networks (Jacot et al., 2018), convolutional neural networks (Arora et al., 2019b), and Graph Neural Networks (Du et al., 2019b).
Our theory builds upon this equivalence of network learning and kernel regression to more precisely characterize the function learned by a sufficiently-wide neural network given any specific training set. In particular, the difference between the learned function and true function over the domain of X determines the extrapolation error.
However, in general it is non-trivial to compute or analyze the functional form of what a neural network learns using Eqn. 9, because the kernel regression solution using neural tangent kernel only gives point-wise evaluation. Thus, we instead analyze the function learned by a network in the NTK’s induced feature space, because representations in the feature space would give a functional form.
Lemma 2 makes this connection more precise: the solution to the kernel regression using the neural tangent kernel, which also equals over-parameterized network learning, is equivalent to a min-norm solution among functions in the NTK's induced feature space that fit all training data. Here the min-norm refers to the RKHS norm.

Lemma 2. Let φ(x) be a feature map induced by a neural tangent kernel, for any x ∈ R^d. The solution to the kernel regression in Eqn. 9 is equivalent to f_NTK(x) = φ(x)^⊤ β_NTK, where β_NTK is

min_β ‖β‖_2   s.t.   φ(x_i)^⊤ β = y_i, for i = 1, ..., n.

We prove Lemma 2 in Appendix B.6. To analyze the learned functions as the min-norm solution in feature space, we also need the explicit formula of an induced feature map of the corresponding neural tangent kernel. The following lemma gives an NTK feature space for two-layer MLPs with ReLU activation. It follows easily from the kernel formula described in Jacot et al. (2018); Arora et al. (2019b); Bietti & Mairal (2019).

Lemma 3. An infinite-dimensional feature map φ(x) induced by the neural tangent kernel of a two-layer multi-layer perceptron with ReLU activation function is

φ(x) = c ( x · I( w^{(k)⊤} x ≥ 0 ), w^{(k)⊤} x · I( w^{(k)⊤} x ≥ 0 ), ... ),   (10)

where w^{(k)} ∼ N(0, I), with k going to infinity, c is a constant, and I is the indicator function.
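A finite-dimensional Monte-Carlo approximation of this feature map, combined with the min-norm fit of Lemma 2, can be sketched as follows (purely illustrative; constants are dropped and the true feature map is infinite-dimensional). With training data containing an orthogonal basis and its opposite vectors, as in Lemma 1, the two printed values should roughly agree, up to Monte-Carlo error.

```python
import numpy as np

# Finite Monte-Carlo approximation of the feature map in Eqn. 10, followed by the
# min-norm fit of Lemma 2 computed with a pseudo-inverse. Purely illustrative.
rng = np.random.default_rng(0)
d, m = 2, 5000                              # input dim, number of sampled directions
W = rng.standard_normal((m, d))             # w^(k) ~ N(0, I)

def phi(x):
    act = (W @ x >= 0).astype(float)        # indicator I(w^T x >= 0)
    return np.concatenate([(x[None, :] * act[:, None]).ravel(),  # x * I(.)
                           (W @ x) * act])                        # w^T x * I(.)

X_train = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
y_train = X_train @ np.array([2.0, -3.0])   # a linear target g(x) = beta_g^T x
Phi = np.stack([phi(x) for x in X_train])
beta = np.linalg.pinv(Phi) @ y_train        # min-norm solution fitting the data
x_test = np.array([5.0, 4.0])               # far outside the training range
print(phi(x_test) @ beta, 2.0 * 5.0 - 3.0 * 4.0)
```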
We prove Lemma 3 in Appendix B.7. The feature maps for other architectures, e.g., Graph Neural Networks (GNNs) can be derived similarly. We analyze the Graph Neural Tangent Kernel (GNTK) for a simple GNN architecture in Theorem 3.
We then use Lemma 2 and 3 to characterize the properties of functions learned by an overparameterized neural network. We precisely characterize the neural networks’ learned functions in the NTK regime via solving the constrained optimization problem corresponding to the min-norm function in NTK feature space with the constraint of fitting the training data.
However, in general there remain many technical challenges. For example, provable extrapolation (exact or asymptotic) is often not achieved with most training data distributions. Understanding the desirable conditions requires significant insight into the geometric properties of the training data distribution, and how they interact with the solution learned by neural networks. Our insights and refined analysis show that in R^d we need to consider the directions of the training data, and that in graphs we need to consider, in addition, the graph structure of the training data. We refer readers to the detailed proofs for the intuition behind these data conditions. Moreover, since the NTK corresponds to infinitely wide neural networks, the feature space is of infinite dimension. The analysis of infinite dimensional spaces poses non-trivial technical challenges too.
Since different theorems have their respective challenges and insights/techniques, we refer the interested readers to the respective proofs for details. In Lemma 1 (proof in Appendix B.2), Theorem 2 (proof in Appendix B.3), and Theorem 1 (proof in Appendix B.1) we analyze over-parameterized MLPs. The proof of Corollary 1 is in Appendix B.4. In Theorem 3 we analyze Graph Neural Networks (proof in Appendix B.5).
B PROOFS
B.1 PROOF OF THEOREM 1
To show neural network outputs f(x) converge to a linear function along all directions v, we will analyze the function learned by a neural network on the training set {(xi, yi)}ni=1, by studying the functional representation in the network’s neural tangent kernel RKHS space.
Recall from Section A that in the NTK regime, i.e., networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of the neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel.
For any x ∈ R^d, the network output is given by

f(x) = ( ⟨φ(x), φ(x_1)⟩, ..., ⟨φ(x), φ(x_n)⟩ ) · NTK_train^{−1} Y,

where NTK_train is the n × n kernel for the training data, ⟨φ(x), φ(x_i)⟩ is the kernel value between test data x and training data x_i, and Y is the training labels. By Lemma 2, the kernel regression solution is also equivalent to the min-norm solution in the NTK RKHS space that fits all training data

f(x) = φ(x)^⊤ β_NTK,   (11)

where the representation coefficient β_NTK is

min_β ‖β‖_2   s.t.   φ(x_i)^⊤ β = y_i, for i = 1, ..., n.
The feature map φ(x) for a two-layer MLP with ReLU activation is given by Lemma 3:

φ(x) = c′ ( x · I( w^{(k)⊤} x ≥ 0 ), w^{(k)⊤} x · I( w^{(k)⊤} x ≥ 0 ), ... ),   (12)

where w^{(k)} ∼ N(0, I), with k going to infinity, c′ is a constant, and I is the indicator function. Without loss of generality, we assume the bias term to be 1. For simplicity of notation, we denote each data point x plus the bias term by x̂ = [x | 1] (Bietti & Mairal, 2019), and assume the constant term is 1. Given any direction v on the unit sphere, the network outputs for out-of-distribution data x_0 = tv and x = x_0 + hv = (1 + λ) x_0, where we introduce the notation x and λ for convenience, are given by Eqn. 11 and Eqn. 12:

f(x̂_0) = β_NTK^⊤ ( x̂_0 · I( w^{(k)⊤} x̂_0 ≥ 0 ), w^{(k)⊤} x̂_0 · I( w^{(k)⊤} x̂_0 ≥ 0 ), ... ),
f(x̂) = β_NTK^⊤ ( x̂ · I( w^{(k)⊤} x̂ ≥ 0 ), w^{(k)⊤} x̂ · I( w^{(k)⊤} x̂ ≥ 0 ), ... ),

where we have x̂_0 = [x_0 | 1] and x̂ = [(1 + λ) x_0 | 1]. It follows that

f(x̂) − f(x̂_0) = β_NTK^⊤ ( x̂ · I( w^{(k)⊤} x̂ ≥ 0 ) − x̂_0 · I( w^{(k)⊤} x̂_0 ≥ 0 ),   (13)
  w^{(k)⊤} x̂ · I( w^{(k)⊤} x̂ ≥ 0 ) − w^{(k)⊤} x̂_0 · I( w^{(k)⊤} x̂_0 ≥ 0 ), ... ).   (14)
By re-arranging the terms, we get the following equivalent form of the entries:

x̂ · I( w^⊤ x̂ ≥ 0 ) − x̂_0 · I( w^⊤ x̂_0 ≥ 0 )   (15)
= x̂ · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) + I( w^⊤ x̂_0 ≥ 0 ) ) − x̂_0 · I( w^⊤ x̂_0 ≥ 0 )   (16)
= x̂ · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) + ( x̂ − x̂_0 ) · I( w^⊤ x̂_0 ≥ 0 )   (17)
= [x | 1] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) + [hv | 0] · I( w^⊤ x̂_0 ≥ 0 ).   (18)

Similarly, we have

w^⊤ x̂ · I( w^⊤ x̂ ≥ 0 ) − w^⊤ x̂_0 · I( w^⊤ x̂_0 ≥ 0 )   (19)
= w^⊤ x̂ · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) + I( w^⊤ x̂_0 ≥ 0 ) ) − w^⊤ x̂_0 · I( w^⊤ x̂_0 ≥ 0 )   (20)
= w^⊤ x̂ · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) + w^⊤ ( x̂ − x̂_0 ) · I( w^⊤ x̂_0 ≥ 0 )   (21)
= w^⊤ [x | 1] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) + w^⊤ [hv | 0] · I( w^⊤ x̂_0 ≥ 0 ).   (22)
Again, let us denote the part of β_NTK corresponding to each w by β_w. Moreover, let us denote the part corresponding to Eqn. 18 by β_w^1 and the part corresponding to Eqn. 22 by β_w^2. Then we have

( f(x̂) − f(x̂_0) ) / h   (23)
= ∫ β_w^{1⊤} [x/h | 1/h] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) dP(w)   (24)
+ ∫ β_w^{1⊤} [v | 0] · I( w^⊤ x̂_0 ≥ 0 ) dP(w)   (25)
+ ∫ β_w^2 · w^⊤ [x/h | 1/h] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) dP(w)   (26)
+ ∫ β_w^2 · w^⊤ [v | 0] · I( w^⊤ x̂_0 ≥ 0 ) dP(w).   (27)
Note that all β_w are finite constants that depend on the training data. Next, we show that as t → ∞, each of the terms above converges in O(1/ε) to some constant coefficient β_v that depends on the training data and the direction v. Let us first consider Eqn. 25. We have

∫ I( w^⊤ x̂_0 ≥ 0 ) dP(w) = ∫ I( w^⊤ [x_0 | 1] ≥ 0 ) dP(w)   (28)
= ∫ I( w^⊤ [x_0/t | 1/t] ≥ 0 ) dP(w)   (29)
→ ∫ I( w^⊤ [v | 0] ≥ 0 ) dP(w)   as t → ∞.   (30)

Because the β_w^1 are finite constants, it follows that

∫ β_w^{1⊤} [v | 0] · I( w^⊤ x̂_0 ≥ 0 ) dP(w) → ∫ β_w^{1⊤} [v | 0] · I( w^⊤ [v | 0] ≥ 0 ) dP(w),   (31)
where the right hand side is a constant that depends on the training data and the direction v. Next, we show the convergence rate for Eqn. 31. Given error ε > 0, because the β_w^{1⊤} [v | 0] are finite constants, we need to bound the following by C · ε for some constant C:

| ∫ I( w^⊤ x̂_0 ≥ 0 ) − I( w^⊤ [v | 0] ≥ 0 ) dP(w) |   (32)
= | ∫ I( w^⊤ [x_0 | 1] ≥ 0 ) − I( w^⊤ [x_0 | 0] ≥ 0 ) dP(w) |.   (33)

Observe that the two terms in Eqn. 33 represent the volumes of half-balls that are orthogonal to the vectors [x_0 | 1] and [x_0 | 0]. Hence, Eqn. 33 is the volume of the non-overlapping part of the two half-balls, which is created by rotating by an angle θ along the last coordinate. By symmetry, Eqn. 33 is linear in θ. Moreover, the angle θ = arctan(C/t) for some constant C. Hence, it follows that

| ∫ I( w^⊤ [x_0 | 1] ≥ 0 ) − I( w^⊤ [x_0 | 0] ≥ 0 ) dP(w) | = C_1 · arctan(C_2/t)   (34)
≤ C_1 · C_2 / t   (35)
= O(1/t).   (36)

In the last inequality, we used the fact that arctan x < x for x > 0. Hence, O(1/t) < ε implies t = O(1/ε) as desired. Next, we consider Eqn. 24:
∫ β_w^{1⊤} [x/h | 1/h] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) dP(w).   (37)

Let us first analyze the convergence of the following:

| ∫ I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) dP(w) |   (38)
= | ∫ I( w^⊤ [(1 + λ) x_0 | 1] ≥ 0 ) − I( w^⊤ [x_0 | 1] ≥ 0 ) dP(w) |   (39)
= | ∫ I( w^⊤ [x_0 | 1/(1 + λ)] ≥ 0 ) − I( w^⊤ [x_0 | 1] ≥ 0 ) dP(w) | → 0.   (40)

The convergence to 0 follows from Eqn. 34. Now we consider the convergence rate. The angle θ is at most 1 − 1/(1 + λ) times that in Eqn. 34. Hence, the rate is as follows:

( 1 − 1/(1 + λ) ) · O(1/t) = λ/(1 + λ) · O(1/t) = ( (h/t)/(1 + h/t) ) · O(1/t) = O( h / ((h + t) t) ).   (41)
Now we get back to Eqn. 24, which simplifies as the following:

∫ β_w^{1⊤} [ v + (t/h) v | 1/h ] · ( I( w^⊤ x̂ ≥ 0 ) − I( w^⊤ x̂_0 ≥ 0 ) ) dP(w).   (42)

We compare the rate of growth of the left hand side and the rate of decrease of the right hand side (the indicators):

( t/h ) · h / ((h + t) t) = 1 / (h + t) → 0   as t → ∞,   (43)
( 1/h ) · h / ((h + t) t) = 1 / ((h + t) t) → 0   as t → ∞.   (44)

Hence, the indicators decrease faster, and it follows that Eqn. 24 converges to 0 with rate O(1/ε). Moreover, we can bound w with standard concentration techniques. Then the proofs for Eqn. 26 and Eqn. 27 follow similarly. This completes the proof.
B.2 PROOF OF LEMMA 1
Overview of proof. To prove exact extrapolation given the conditions on training data, we analyze the function learned by the neural network in a functional form. The network’s learned function can be precisely characterized by a solution in the network’s neural tangent kernel feature space which has a minimum RKHS norm among functions that can fit all training data, i.e., it corresponds to the optimum of a constrained optimization problem. We show that the global optimum of this constrained optimization problem, given the conditions on training data, is precisely the same function as the underlying true function.
Setup and preparation. Let X = {x_1, ..., x_n} and Y = {y_1, ..., y_n} denote the training set input features and their labels. Let β_g ∈ R^d denote the true parameters/weights of the underlying linear function g, i.e.,

g(x) = β_g^⊤ x   for all x ∈ R^d.
Recall from Section A that in the NTK regime, where networks are infinitely wide, randomly initialized, and trained by gradient descent with infinitesimally small learning rate, the learning dynamics of a neural network is equivalent to that of a kernel regression with respect to its neural tangent kernel. Moreover, Lemma 2 tells us that this kernel regression solution can be expressed in the functional form in the neural tangent kernel’s feature space. That is, the function learned by the neural network (in the ntk regime) can be precisely characterized as
f(x) = φ(x)^⊤ β_NTK,

where the representation coefficient β_NTK is

min_β ‖β‖_2   (45)
s.t.   φ(x_i)^⊤ β = y_i, for i = 1, ..., n.   (46)

An infinite-dimensional feature map φ(x) for a two-layer ReLU network is described in Lemma 3:

φ(x) = c′ ( x · I( w^{(k)⊤} x ≥ 0 ), w^{(k)⊤} x · I( w^{(k)⊤} x ≥ 0 ), ... ),

where w^{(k)} ∼ N(0, I), with k going to infinity, c′ is a constant, and I is the indicator function. That is, there are infinitely many directions w with Gaussian density, and each direction comes with two features. Without loss of generality, we can assume the scaling constant to be 1.
Constrained optimization in NTK feature space. The representation or weight of the neural network's learned function in the neural tangent kernel feature space, β_NTK, consists of weight vectors for each x · I( w^{(k)⊤} x ≥ 0 ) ∈ R^d and w^{(k)⊤} x · I( w^{(k)⊤} x ≥ 0 ) ∈ R. For simplicity of notation, we will use w to refer to a particular w, without considering the index (k), which does not matter for our purposes. For any w ∈ R^d, we denote by β̂_w = ( β̂_w^{(1)}, ..., β̂_w^{(d)} ) ∈ R^d the weight vector corresponding to x · I( w^⊤ x ≥ 0 ), and denote by β̂′_w ∈ R the weight for w^⊤ x · I( w^⊤ x ≥ 0 ).
Observe that for any w ∼ N (0, I) ∈ Rd, any other vectors in the same direction will activate the same set of xi ∈ Rd. That is, if w>xi ≥ 0 for any w ∈ Rd, then (k ·w)>xi ≥ 0 for any k > 0. Hence, we can reload our notation to combine the effect of weights for w’s in the same direction. This enables simpler notations and allows us to change the distribution of w in NTK features from Gaussian distribution to uniform distribution on the unit sphere.
More precisely, we reload our notation by using β_w and β′_w to denote the combined effect of all weights ( β̂_{kw}^{(1)}, ..., β̂_{kw}^{(d)} ) ∈ R^d and β̂′_{kw} ∈ R for all kw with k > 0 in the same direction as w. That is, for each w ∼ Uni(unit sphere) ∈ R^d, we define β_w^{(j)} as the total effect of the weights in the same direction

β_w^{(j)} = ∫ β̂_u^{(j)} I( w^⊤ u / (‖w‖ · ‖u‖) = 1 ) dP(u),   for j ∈ [d],   (47)

where u ∼ N(0, I). Note that to ensure that β_w is a well-defined number, here we can work with the polar representation and integrate with respect to an angle. Then β_w is well-defined. But for simplicity of exposition, we use the plain notation of an integral. Similarly, we reload the notation of β′_w as

β′_w = ∫ β̂′_u I( w^⊤ u / (‖w‖ · ‖u‖) = 1 ) · ( ‖u‖ / ‖w‖ ) dP(u).   (48)

Here, in Eqn. 48 we have an extra term of ‖u‖/‖w‖ compared to Eqn. 47 because the NTK feature that Eqn. 48 corresponds to, w^⊤ x · I( w^⊤ x ≥ 0 ), has an extra w^⊤ term, so we need to take the scaling into account. This abstraction enables us to make claims on the high-level parameters β_w and β′_w only, which we will show to be sufficient to determine the learned function.
Then we can formulate the constrained optimization problem whose solution gives a functional form of the neural network’s learned function. We rewrite the min-norm solution in Eqn. 45 as
min_β ∫ ( β_w^{(1)} )^2 + ( β_w^{(2)} )^2 + ... + ( β_w^{(d)} )^2 + ( β′_w )^2 dP(w)   (49)
s.t.   ∫_{w^⊤ x_i ≥ 0} β_w^⊤ x_i + β′_w · w^⊤ x_i dP(w) = β_g^⊤ x_i   ∀ i ∈ [n],   (50)

where the density of w is now uniform on the unit sphere of R^d. Observe that since w is from a uniform distribution, the probability density function P(w) is a constant. This means every x_i is activated by half of the w on the unit sphere, which implies we can now write the right hand side of Eqn. 50 in the form of the left hand side, i.e., in integral form. This allows us to further simplify Eqn. 50 as

∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0   ∀ i ∈ [n],   (51)
where Eqn. 51 follows from the following steps of simplification:

∫_{w^⊤ x_i ≥ 0} β_w^{(1)} x_i^{(1)} + ... + β_w^{(d)} x_i^{(d)} + β′_w · w^⊤ x_i dP(w) = β_g^{(1)} x_i^{(1)} + ... + β_g^{(d)} x_i^{(d)}   ∀ i ∈ [n],

⟺ ∫_{w^⊤ x_i ≥ 0} β_w^{(1)} x_i^{(1)} + ... + β_w^{(d)} x_i^{(d)} + β′_w · w^⊤ x_i dP(w)
  = ( 1 / ∫_{w^⊤ x_i ≥ 0} dP(w) ) · ∫_{w^⊤ x_i ≥ 0} dP(w) · ( β_g^{(1)} x_i^{(1)} + ... + β_g^{(d)} x_i^{(d)} )   ∀ i ∈ [n],

⟺ ∫_{w^⊤ x_i ≥ 0} β_w^{(1)} x_i^{(1)} + ... + β_w^{(d)} x_i^{(d)} + β′_w · w^⊤ x_i dP(w)
  = 2 · ∫_{w^⊤ x_i ≥ 0} β_g^{(1)} x_i^{(1)} + ... + β_g^{(d)} x_i^{(d)} dP(w)   ∀ i ∈ [n],

⟺ ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0   ∀ i ∈ [n].
Claim 1. Without loss of generality, assume the scaling factor c in the NTK feature map φ(x) is 1. Then the global optimum of the constrained optimization problem Eqn. 49 subject to Eqn. 51, i.e.,

min_β ∫ ( β_w^{(1)} )^2 + ( β_w^{(2)} )^2 + ... + ( β_w^{(d)} )^2 + ( β′_w )^2 dP(w)   (52)
s.t.   ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0   ∀ i ∈ [n],   (53)

satisfies β_w + β′_w · w = 2 β_g for all w.
This claim implies the exact extrapolation we want to prove, i.e., fNTK(x) = g(x). This is because, if our claim holds, then for any x ∈ Rd
f_NTK(x) = ∫_{w^⊤ x ≥ 0} β_w^⊤ x + β′_w · w^⊤ x dP(w)
= ∫_{w^⊤ x ≥ 0} 2 · β_g^⊤ x dP(w)
= ∫_{w^⊤ x ≥ 0} dP(w) · 2 β_g^⊤ x
= (1/2) · 2 β_g^⊤ x = g(x).
Thus, it remains to prove Claim 1. To compute the optimum to the constrained optimization problem Eqn. 52, we consider the Lagrange multipliers. It is clear that the objective Eqn. 52 is convex. Moreover, the constraint Eqn. 53 is affine. Hence, by KKT, solution that satisfies the Lagrange condition will be the global optimum. We compute the Lagrange multiplier as
L(β, λ) = ∫ ( β_w^{(1)} )^2 + ( β_w^{(2)} )^2 + ... + ( β_w^{(d)} )^2 + ( β′_w )^2 dP(w)   (54)
− Σ_{i=1}^n λ_i · ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w).   (55)

Setting the partial derivative of L(β, λ) with respect to each variable to zero gives

∂L/∂β_w^{(k)} = 2 β_w^{(k)} P(w) + Σ_{i=1}^n λ_i · x_i^{(k)} · I( w^⊤ x_i ≥ 0 ) = 0,   (56)
∂L/∂β′_w = 2 β′_w P(w) + Σ_{i=1}^n λ_i · w^⊤ x_i · I( w^⊤ x_i ≥ 0 ) = 0,   (57)
∂L/∂λ_i = ∫_{w^⊤ x_i ≥ 0} ( β_w^⊤ + β′_w · w^⊤ − 2 · β_g^⊤ ) x_i dP(w) = 0.   (58)
It is clear that the solution in Claim 1 immediately satisfies Eqn. 58. Hence, it remains to show that there exists a set of λ_i for i ∈ [n] that satisfies Eqn. 56 and Eqn. 57. We can simplify Eqn. 56 as

β_w^{(k)} = c · Σ_{i=1}^n λ_i · x_i^{(k)} · I( w^⊤ x_i ≥ 0 ),   (59)

where c is a constant. Similarly, we can simplify Eqn. 57 as

β′_w = c · Σ_{i=1}^n λ_i · w^⊤ x_i · I( w^⊤ x_i ≥ 0 ).   (60)

Observe that combining Eqn. 59 and Eqn. 60 implies that the constraint Eqn. 60 can be further simplified as

β′_w = w^⊤ β_w.   (61)
It remains to show that given the condition on training data, there exists a set of λi so that Eqn. 59 and Eqn. 61 are satisfied.
Global optimum via the geometry of the training data. Recall that we assume our training data {(x_i, y_i)}_{i=1}^n satisfy the following: for any w ∈ R^d, there exist d linearly independent {x_i^w}_{i=1}^d ⊂ X, where X = {x_i}_{i=1}^n, so that w^⊤ x_i^w ≥ 0 and −x_i^w ∈ X for i = 1..d, e.g., an orthogonal basis of R^d and their opposite vectors. We will show that under this data regime, we have
(a) for any particular w, there indeed exist a set of λi that can satisfy the constraints Eqn. 59 and Eqn. 61 for this particular w.
(b) For any w1 and w2 that activate the exact same set of {xi}, the same set of λi can satisfy the constraints Eqn. 59 and Eqn. 61 of both w1 and w2.
(c) Whenever we rotate a w1 to a w2 so that the set of xi being activated changed, we can still find λi that satisfy constraint of both w1 and w2.
Combining (a), (b) and (c) implies there exists a set of λ that satisfy the constraints for all w. Hence, it remains to show these three claims.
We first prove Claim (a). For each w, we must find a set of λi so that the following hold.
β_w^{(k)} = c · Σ_{i=1}^n λ_i · x_i^{(k)} · I( w^⊤ x_i ≥ 0 ),   β′_w = w^⊤ β_w,   β_w + β′_w · w = 2 β_g.

Here, β_g and w are fixed, and w is a vector on the unit sphere. It is easy to see that β_w is then determined by β_g and w, and there indeed exists a solution (solving a consistent linear system). Hence we are left with a linear system of d linear equations

β_w^{(k)} = c · Σ_{i=1}^n λ_i · x_i^{(k)} · I( w^⊤ x_i ≥ 0 )   ∀ k ∈ [d]

to solve, with the free variables being the λ_i for which w activates x_i, i.e., w^⊤ x_i ≥ 0. Because the training data {(x_i, y_i)}_{i=1}^n satisfy that for any w there exist at least d linearly independent x_i that activate w, for any w we must have at least d free variables. It follows that there must exist solutions λ_i to the linear system. This proves Claim (a).
Next, we show (b): for any w_1 and w_2 that activate the exact same set of {x_i}, the same set of λ_i can satisfy the constraints Eqn. 59 and Eqn. 61 for both w_1 and w_2. Because w_1 and w_2 are activated by the same set of x_i, this implies

β_{w_1} = c · Σ_{i=1}^n λ_i · x_i · I( w_1^⊤ x_i ≥ 0 ) = c · Σ_{i=1}^n λ_i · x_i · I( w_2^⊤ x_i ≥ 0 ) = β_{w_2}.

Since the λ_i already satisfy constraint Eqn. 59 for w_1, they also satisfy it for w_2. Thus, it remains to show that β_{w_1} + β′_{w_1} · w_1 = β_{w_2} + β′_{w_2} · w_2, assuming β_{w_1} = β_{w_2}, β′_{w_1} = w_1^⊤ β_{w_1}, and β′_{w_2} = w_2^⊤ β_{w_2}. This indeed holds because

β_{w_1} + β′_{w_1} · w_1 = β_{w_2} + β′_{w_2} · w_2
⟺ β′_{w_1} · w_1^⊤ = β′_{w_2} · w_2^⊤
⟺ w_1^⊤ β_{w_1} w_1^⊤ = w_2^⊤ β_{w_2} w_2^⊤
⟺ w_1^⊤ w_1 β_{w_1}^⊤ = w_2^⊤ w_2 β_{w_2}^⊤
⟺ 1 · β_{w_1}^⊤ = 1 · β_{w_2}^⊤
⟺ β_{w_1} = β_{w_2}.

Here, we used the fact that w_1 and w_2 are vectors on the unit sphere. This proves Claim (b).
Finally, we show (c): whenever we rotate a w_1 to a w_2 so that the set of x_i being activated changes, we can still find λ_i that satisfy the constraints of both w_1 and w_2. Suppose we rotate w_1 to w_2 so that w_2 loses activation with x_1, x_2, ..., x_p, which are in the set of linearly independent x_i's activated by w_1 and whose opposite vectors −x_i are also in the training set (without loss of generality). Then w_2 must now also get activated by −x_1, −x_2, ..., −x_p. This is because if w_2^⊤ x_i < 0, we must have w_2^⊤ (−x_i) > 0. Recall that in the proof of Claim (a), we only needed the λ_i from the linearly independent x_i (and their opposites) that we used to solve the linear system of d equations as the free variables. Hence, we can set λ to 0 for the other x_i while still satisfying the linear system. Then, suppose there exist λ_i that satisfy

β_{w_1}^{(k)} = c · Σ_{i=1}^d λ_i · x_i^{(k)},

where the x_i are the linearly independent vectors that activate w_1 and whose opposite vectors are in the training set, which we have proved in (a). Then we can satisfy the constraint for β_{w_2} below

β_{w_2}^{(k)} = c · ( Σ_{i=1}^p λ̂_i · (−x_i)^{(k)} + Σ_{i=p+1}^d λ_i · x_i^{(k)} )

by setting λ̂_i = −λ_i for i = 1...p. Indeed, this gives

β_{w_2}^{(k)} = c · ( Σ_{i=1}^p (−λ_i) · (−x_i)^{(k)} + Σ_{i=p+1}^d λ_i · x_i^{(k)} ) = c · Σ_{i=1}^d λ_i · x_i^{(k)}.
Thus, we can also find λ_i that satisfy the constraint for β_{w_2}. Here, we do not consider the case where w_2 is parallel to an x_i because such w_2 have measure zero. Note that we can apply this argument iteratively because flipping the sign always works and will not create any inconsistency.
Moreover, we can show that the constraint for β′w2 is satisfied by a similar argument as in proof of Claim (b). This follows from the fact that our construction makes βw1 = βw2 . Then we can follow the same argument as in (b) to show that βw1 + β ′ w1 ·w1 = βw2 + β ′ w2 ·w1. This completes the proof of Claim (c).
In summary, combining Claim (a), (b) and (c) gives that Claim 1 holds. That is, given our training data, the global optimum to the constrained optimization problem of finding the min-norm solution among functions that fit the training data satisfies βw+β′w ·w = 2βg . We also showed that this claim implies exact extrapolation, i.e., the network’s learned function f(x) is equal to the true underlying function g(x) for all x ∈ Rd. This completes the proof.
B.3 PROOF OF THEOREM 2
Proof of the asymptotic convergence to extrapolation builds upon our proof of exact extrapolation, i.e., Lemma 1. The proof idea is that if the training data distribution has support in all directions, then as the number of samples n → ∞, the training set asymptotically converges to an (imaginary) training set that satisfies the condition for exact extrapolation. Since the neural tangent kernels of nearby training sets are close, the predictions, i.e., the learned function, converge to a function that achieves perfect extrapolation, that is, the true underlying function.
Asymptotic convergence of data sets. We first show the training data converge to a data set that satisfies the exact extrapolation condition in Lemma 1. Suppose training data {xi}ni=1 are sampled from a distribution whose support contains a connected set S that intersects all directions, i.e., for any non-zero w ∈ Rd, there exists k > 0 so that kw ∈ S. Let us denote by S the set of datasets that satisfy the condition in Lemma 1. In fact, we will use a relaxed condition in the proof of Lemma 1 (Lemma 1 in the main text uses a stricter condition for simplicity of exposition). Given a general dataset X and a dataset S ∈ S of the same size n, let σ(X,S) denote a matching of their data points, i.e., σ outputs a sequence of pairs
σ(X, S)_i = (x_i, s_i) for i ∈ [n],   such that   X = {x_i}_{i=1}^n and S = {s_i}_{i=1}^n.
Let ℓ : R^d × R^d → R be the l2 distance that takes in a pair of points. We then define the distance between the datasets, d(X, S), as the minimum sum of l2 distances of their data points over all possible matchings:

d(X, S) = min_σ Σ_{i=1}^n ℓ( σ(X, S)_i )   if |X| = |S| = n,   and   d(X, S) = ∞   if |X| ≠ |S|.
We can then define a “closest distance to a perfect dataset” function D* : X → R which maps a dataset X to the minimum distance of X to any dataset in S:

D*(X) = min_{S ∈ S} d(X, S).
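For a fixed pair of datasets, d(X, S) can be computed as an optimal assignment problem; the sketch below (using SciPy's linear_sum_assignment on a hypothetical example) is only for illustration, since D*(X) additionally minimizes over all perfect datasets S ∈ S.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of the dataset distance d(X, S) defined above: the minimum sum of
# pairwise l2 distances over all matchings sigma, computed as an assignment.
def dataset_distance(X, S):
    if len(X) != len(S):
        return float('inf')
    cost = np.linalg.norm(X[:, None, :] - S[None, :, :], axis=-1)  # pairwise l2
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

X = np.array([[1.1, 0.0], [0.0, 0.9], [-1.0, 0.1], [0.0, -1.0]])
S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # +/- basis
print(dataset_distance(X, S))
```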
It is easy to see that for any dataset X = {x_i}_{i=1}^n, D*(X) can be bounded by the minimum of the closest distance to a perfect dataset D* over sub-datasets of X of size 2d:

D*( {x_i}_{i=1}^n ) ≤ min_{k=1}^{⌊n/2d⌋} D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ).   (62)
This is because for any S ∈ S, and any S ⊆ S′, we must have S′ ∈ S because a dataset satisfies exact extrapolation condition as long as it contains some key points. Thus, adding more data will not hurt, i.e., for anyX1 ⊆X2, we always have
D∗ (X1) ≤ D∗ (X2)
Now let us denote by X_n a random dataset of size n where each x_i ∈ X_n is sampled from the training distribution. Recall that our training data {x_i}_{i=1}^n are sampled from a distribution whose support contains a connected set S* that intersects all directions, i.e., for any non-zero w ∈ R^d, there exists k > 0 so that kw ∈ S*. It follows that for a random dataset X_{2d} of size 2d, the probability that D*(X_{2d}) > ε happens is less than 1 for any ε > 0. First, there must exist S_0 = {s_i}_{i=1}^{2d} ∈ S of size 2d, e.g., an orthogonal basis and their opposite vectors. Observe that if we scale any s_i by k > 0, the resulting dataset is still in S by the definition of S. We denote the set of datasets where we are allowed to scale elements of S_0 by S_0. It follows that

P( D*(X_{2d}) > ε ) = P( min_{S ∈ S} d(X_{2d}, S) > ε ) ≤ P( min_{S ∈ S_0} d(X_{2d}, S) > ε )
= P( min_{S ∈ S_0} min_σ Σ_{i=1}^n ℓ( σ(X_{2d}, S)_i ) > ε )
= 1 − P( min_{S ∈ S_0} min_σ Σ_{i=1}^n ℓ( σ(X_{2d}, S)_i ) ≤ ε )
≤ 1 − P( min_{S ∈ S_0} min_σ max_{i=1}^n ℓ( σ(X_{2d}, S)_i ) ≤ ε )
≤ δ < 1,
where we denote the bound on P( D*(X_{2d}) > ε ) by δ < 1, and the last step follows from

P( min_{S ∈ S_0} min_σ max_{i=1}^n ℓ( σ(X_{2d}, S)_i ) ≤ ε ) > 0,

which further follows from the fact that for any s_i ∈ S_0, by the assumption on the training distribution, we can always find k > 0 so that k s_i ∈ S*, a connected set in the support of the training distribution. By the connectivity of the support S*, k s_i cannot be an isolated point in S*, so for any ε > 0, we must have

∫_{‖x − k s_i‖ ≤ ε, x ∈ S*} f_X(x) dx > 0.
Hence, we can now apply Eqn. 62 to bound D*(X_n). Given any ε > 0, we have

P( D*(X_n) > ε ) = 1 − P( D*(X_n) ≤ ε )
≤ 1 − P( min_{k=1}^{⌊n/2d⌋} D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) ≤ ε )
≤ 1 − ( 1 − Π_{k=1}^{⌊n/2d⌋} P( D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) > ε ) )
= Π_{k=1}^{⌊n/2d⌋} P( D*( {x_j}_{j=(k−1)·2d+1}^{k·2d} ) > ε ) ≤ δ^{⌊n/2d⌋}.

Here δ < 1. This implies D*(X_n) → 0 in probability, i.e.,

lim_{n→∞} P( D*(X_n) > ε ) = 0   ∀ ε > 0.   (63)
Eqn. 63 says as the number of training samples n→∞, our training set will converge in probability to a dataset that satisfies the requirement for exact extrapolation.
Asymptotic convergence of predictions. Let NTK(x,x′) : Rd × Rd → R denote the neural tangent kernel for a two-layer ReLU MLP. It is easy to see that if x → x∗, then NTK(x, ·) → NTK(x∗, ·) (Arora et al. (2019b)). Let NTKtrain denote the n× n kernel matrix for training data. We have shown that our training set converges to a perfect data set that satisfies conditions of exact extrapolation. Moreover, note that our training set will only have a finite number of (not increase with n) xi that are not precisely the same as those in a perfect dataset. This is because a perfect data only contains a finite number of key points and the other points can be replaced by any other points while still being a perfect data set. Thus, we have NTKtrain → N∗, where N∗ is the n× n NTK matrix for some perfect data set.
Because the neural tangent kernel is positive definite, we have NTK_train^{−1} → N*^{−1}. Recall that for any x ∈ R^d, the prediction of the NTK is
fNTK(x) = (NTK(x | 1. What are the key contributions and novel aspects introduced by the paper on deep learning research?
2. What are the strengths of the paper, especially in theoretical analyses and empirical evidence?
3. Do you have any suggestions for improving the paper, such as expanding certain analyses or exploring related domains?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
This paper tackles the challenging question of how deep networks might learn to extrapolate knowledge outside the support of their training distribution. The paper contributes both novel theoretical arguments and empirical evidence collected on targeted cases. Differently from other recent approaches to the problem, the theoretical analyses presented here are non-asymptotic and provide precise information about the kind of functions that MLPs can learn in the proximity of the training region. Moreover, the authors provide compelling arguments about the need to explicitly encode (task-specific) non-linearities in the input representation and/or in the model architecture in order to promote successful extrapolation. Overall, the paper addresses important issues and can be considered at the frontier of deep learning research. The paper is well-written and the recent literature is properly reviewed. In light of this, I think the paper would be of interest to the ICLR community. However, I would like to explicitly mention that I was not able to carefully review all the details and proofs reported in the Appendix, which is of an unusual length (almost 40 pages) for an ICLR paper.
Comments for possible improvements:
The analyses reported in Appendix D.3 / C.4 regarding the extrapolation capability of MLPs with different activation functions (sin, tanh, quadratic) are relevant and should be emphasized. They could also be expanded, for example by considering some data generation tasks analyzed in the main text.
It would be very interesting to extend this analysis to other simple problems, where MLPs cannot extrapolate appropriately. I am specifically referring to the simple counting and arithmetic tasks discussed in [1], where generalization outside the training distribution was achieved by adding ad-hoc gate units to the network. I think this domain is particularly relevant here, given that arithmetics is mentioned by the authors in the opening sentence of the paper.
[1] A. Trask, F. Hill, S. Reed, J. Rae, C. Dyer, and P. Blunsom, “Neural Arithmetic Logic Units,” in arXiv:1808.00508, 2018. |
ICLR | Title
FRICATIVE PHONEME DETECTION WITH ZERO DELAY
Abstract
People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms. These algorithms shift some of the sounds from the high frequency band to the lower frequency band where the sounds become more perceptible for the people with the condition. Fricative phonemes have an important part of their content concentrated in high frequency bands. It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme, and kept off at all other times. Therefore, timely (with zero delay) and accurate fricative phoneme detection is a key problem for high quality hearing aids. In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus. All reported results are reproducible and come with easy to use code that could serve as a baseline for future research.
Table 1: Fricative phonemes of English.
PHONETIC CODE: /s/ /sh/ /f/ /th/ /z/ /zh/ /v/ /dh/
EXAMPLE WORD: sea she fin thin zone azure van then
1 FRICATIVE PHONEMES AND HIGH-FREQUENCY HEARING LOSS
High-frequency hearing loss is a particular type of hearing loss that is mostly age related. People with high-frequency hearing loss have difficulties with word recognition in conversations. This is often caused by the loss of discrimination or detection ability of their ear for high-frequency sounds. Fricative phonemes are particularly hard to recognize for the affected people because these phonemes contain high-frequency information that is crucial to be able to discriminate them. The problem can persist if the affected people wear modern hearing aids. Researchers suggested frequency lowering approaches to make fricative phonemes audible by the hearing impaired. These techniques shift the high frequency parts of these sounds into lower frequency bands. For example, Robinson et al. (2009) show that a frequency transposition technique leads to improved detection and recognition ability in high frequency hearing impaired people for the sounds /s/ and /z/, which are fricative sounds. In this paper, we use the standard phonetic codes when we refer to the fricative phonemes as listed in Table 1. In order to employ such techniques in hearing aids, one must be able to reliably detect fricative phonemes with zero delay. This means that the algorithm needs to be able to detect that a fricative phoneme is coming next, before this has happened, so that the necessary alterations to the sound can be made in real time. In this paper we show how to use deep learning to solve this problem.
2 DATASET, NETWORK ARCHITECTURE, AND TRAINING
We employ a fully convolutional neural network with residual connections (He et al., 2016). The network takes raw speech segments as inputs, and calculates the posterior probability for the event that the sample at the end of the segment belongs to a fricative phoneme. The detection procedure is explained in Figure 1 where the raw speech segment is in blue, the input segment of the network is highlighted in green and the dashed green lines show the borders of the phonemes as marked in the TIMIT dataset (Garofolo et al., 1993). Based on the input segment, the network detects the identity
(belongs to a fricative phoneme or not) of the sound sample represented with the red vertical line at the end of the segment.
It is important to stress that the red line is at the end of the green segment, and not in the middle of the green segment. This means that we are not using non-causal information to detect the identity of the red sample. In other words, our method introduces zero statistical delay.
Observe that in Figure 1 the green segment spans four phonemes: /tcl/, /w/, /ix/, /z/ (see Table A.1 for an explanation of these phonetic codes). The red sample belongs to the phoneme /z/, which is a fricative phoneme, so that the correct detection in this case is fricative. Note that the fricative phoneme /z/ is barely contained in the green segment. Essentially, the network must learn to make accurate inferences about future phonemes based on the currently observed phonemes. Statistically, this is harder than making a detection about, for example, the middle point of the segment: think about the difference between interpolation and extrapolation.
2.1 DATASET AND DATA PREPROCESSING
The TIMIT Speech Corpus consists of raw speech sentences (at 16 kHz sampling rate) and their corresponding time aligned phonetic labels (Garofolo et al., 1993) as illustrated in Figure 1. It contains a total of 6300 sentences, 10 sentences spoken by each of 630 speakers from 8 major dialect regions of the United States. We used the entire training set of the TIMIT dataset for training (462 speakers), and the core test set for testing (24 speakers). We created the validation set (50 speakers) following the methodology in Halberstadt (1998).
In the TIMIT dataset, there are two sentences that are read by all the speakers; these are called SA sentences. We excluded these two sentences from all of the speakers in all sets (training, validation and test) to prevent a possible bias in the evaluation. Specifically, if SA sentences are not excluded, the model might conceivably memorize the locations of the phonemes in these sentences using the training set, and then make near perfect detection for these sentences in the test set. This would make the model appear to perform better than it does in reality. After the exclusion of SA sentences, there were 8 sentences per speaker left.
To keep the training and validation sets balanced, we ensured an (approximately) 50/50 fricative-to-non-fricative sample ratio. To do this, we extracted 16 randomly placed 192 ms (3072 samples) long speech segments from each sentence: 8 segments labeled as fricative phoneme and 8 segments labeled as non-fricative phoneme. Recall that the label for a segment is determined based on the identity of the last sample of the segment. If a sentence did not have a fricative phoneme sample, we generated 16 non-fricative segments; this procedure did not affect the (approximate) 50/50 fricative-to-non-fricative sample ratio, since there were not many sentences in the training and validation sets that had no fricative phoneme sample in them.
For each epoch, our training set consisted of 59136 (462 speakers x 8 sentences x 16 random segments) speech segments placed randomly as explained above. One such segment is highlighted in green in Figure 1. From epoch to epoch the training set of 59136 segments was randomly regenerated from the training set of the TIMIT dataset.
We applied the same procedure to form our validation set, but we kept it fixed for different training runs and also for different epochs in same training run. Our validation samples consisted of 6400 (50 speakers x 8 sentences x 16 segments) speech segments.
For each segment x = [x1, . . . , x3072] from training, validation and test sets, we applied standard deviation normalization x← x/ √ Var(x), where Var(x) is the variance of the segment.
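A sketch of this segment extraction and normalization is given below (loading the TIMIT waveforms and parsing the phone alignments are assumed to be handled elsewhere; helper names are our own, and the fallback for sentences without fricatives is simplified).

```python
import numpy as np

# Sketch of the preprocessing described above: a 3072-sample segment is labeled by
# whether its LAST sample falls inside a fricative phoneme, then normalized by its
# standard deviation.
FRICATIVES = {"s", "sh", "f", "th", "z", "zh", "v", "dh"}
SEG_LEN = 3072

def make_segments(waveform, phone_intervals, num_per_class=8, rng=None):
    # phone_intervals: list of (start_sample, end_sample, phone_code)
    rng = rng or np.random.default_rng()
    is_fric = np.zeros(len(waveform), dtype=bool)
    for start, end, phone in phone_intervals:
        if phone in FRICATIVES:
            is_fric[start:end] = True
    segments, labels = [], []
    for label in (1, 0):                      # fricative and non-fricative endpoints
        candidates = np.flatnonzero(is_fric == bool(label))
        candidates = candidates[candidates >= SEG_LEN - 1]
        if len(candidates) == 0:
            continue
        for end in rng.choice(candidates, size=num_per_class, replace=True):
            seg = waveform[end - SEG_LEN + 1 : end + 1].astype(np.float64)
            seg = seg / np.sqrt(seg.var() + 1e-12)   # std-deviation normalization
            segments.append(seg)
            labels.append(label)
    return np.stack(segments), np.array(labels)
```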
2.2 NETWORK ARCHITECTURE
Table 2 shows the layer structure of the model we used. The input layer of the network takes raw speech segments of 3072 samples (192 ms at 16 kHz sampling rate). The input layer is followed by 5 stages. Stage 1 consists of one convolutional layer, stages 2–5 consist of 6 convolutional layers each. For each convolutional layer, in Table 2, the first number in the square brackets indicates the kernel size of the layer, the number after “/” indicates the stride value in the convolution, and the last number indicates the number of filters used in the layer. The strides are only applied to the first convolutional layer of each stage. All of the layers have ReLU non-linearity, except the single neuron at the output layer that has the sigmoid non-linearity. In stages 2–5 we used residual connections (He et al., 2016). If the filter dimensions do not match between two layers for residual connection, we applied zero padding to match the filter dimensions (some residual connections had to be applied among two layers that had different number of filters, since we increased the number of filters as we went deeper in the model). The residual connections were applied around pairs of convolutional layers, connecting the input to the first layer of the pair to the output of the linear filter of the second layer of the pair, immediately before the non-linearity of the second layer. After the convolutional stages, 1D global average pooling layer was used. The total number of parameters of this model is 1.1 million. See Figure A.1 in Appendix for details of the model.
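The following PyTorch sketch mirrors the layout described above; the kernel sizes, strides and filter counts are hypothetical placeholders, since the actual values are specified in Table 2.

```python
import torch
import torch.nn as nn

# Sketch of the overall layout described around Table 2: 1 conv in stage 1, 6 convs
# in each of stages 2-5, stride only on the first conv of a stage, residual
# connections around conv pairs with zero-padded channels, batch norm before every
# ReLU, global average pooling, single sigmoid output. Hyperparameters are placeholders.
class ResidualPair(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=9, stride=1):
        super().__init__()
        self.stride, self.extra_ch = stride, out_ch - in_ch
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel, stride=stride, padding=kernel // 2)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel, padding=kernel // 2)
        self.bn2 = nn.BatchNorm1d(out_ch)

    def forward(self, x):
        h = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        skip = x[:, :, ::self.stride]                              # match temporal stride
        skip = nn.functional.pad(skip, (0, 0, 0, self.extra_ch))   # zero-pad channels
        return torch.relu(h + skip)                                # residual before ReLU

class FricativeNet(nn.Module):
    def __init__(self, channels=(16, 32, 64, 128, 256)):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(1, channels[0], 9, padding=4),
                                  nn.BatchNorm1d(channels[0]), nn.ReLU())   # stage 1
        blocks = []
        for i in range(4):                                         # stages 2-5: 3 pairs each
            blocks.append(ResidualPair(channels[i], channels[i + 1], stride=2))
            blocks += [ResidualPair(channels[i + 1], channels[i + 1]) for _ in range(2)]
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(channels[-1], 1), nn.Sigmoid())

    def forward(self, x):                                          # x: (batch, 1, 3072)
        return self.head(self.blocks(self.stem(x)))

print(FricativeNet()(torch.randn(2, 1, 3072)).shape)              # torch.Size([2, 1])
```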
2.3 ALTERNATIVE ARCHITECTURES
We experimented widely with the network architecture: the input segment size, the number of layers, the number of features. The network in Table 2 achieved the best performance in our experiments. As can be seen in Figure 2 below, the input size of 3072 samples corresponds to 2-3 phonemes. Intuitively, this provides abundant information to infer the phoneme that follows. Performance of more shallow networks and networks with fewer features is discussed in Section 4.
Our network receives the raw speech signal as input. The use of raw waveform (see also Palaz et al. (2013); Zeghidour et al. (2017)) enables us to learn problem-specific low-level features. Nevertheless, in audio processing it is more common to first transform the raw signal into a time-frequency representation, for example, by computing the Mel-Frequency Cepstrum Coefficients (MFCC), and then to feed the resulting features into a classifier. Because of the time-frequency uncertainty principle, the computation of MFCC features requires a window of certain size, often 25 ms. To compensate for the delay incurred by processing the signal in the window, the classifier must learn to extrapolate detections into the future. While this is possible, as explained in Section 5 below, in general it is a harder task that to make predictions about the present sample. In a separate line of work, we used a proprietary high-quality time-frequency filterbank, followed by a recurrent neural network, to solve the fricative detection problem. After extensive experimentation, we were able to get within a few percent of the performance reported in this paper, however, we were not able to surpass the performance of the model in Table 2. The advantage of the time-frequency transformation followed by a recurrent neural network, is that that approach leads to a much more computationally efficient implementation. The value of this work is to push the boundary of what is possible in terms of detection accuracy. We plan to report the somewhat inferior but computationally more efficient results in a separate publication.
2.4 TRAINING
We used binary cross-entropy loss and the Adam optimizer starting with a 0.001 learning rate. We halved the learning rate if the validation loss did not decrease within 10 epochs, and we applied early stopping if the loss computed on the validation set did not decrease within 40 epochs. We applied 0.0001 l2 weight decay to all of the weights in the convolutional layers, and applied batch normalization after each linear filtering operation and before each non-linearity. See Figure A.1 in the Appendix for details on batch normalization. For the results reported in Tables 3, 4 and 5 and Figures A.3 and A.4 we trained the network 10 times and selected the network that performed best on the validation set; for the results reported in Table 7 we trained the network once only.
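A sketch of this training configuration in PyTorch is shown below (`model`, `train_loader` and `val_loader` are assumed to exist; unlike the paper, weight decay is applied to all parameters here for brevity).

```python
import torch

# Sketch of the training setup described above: binary cross-entropy, Adam at 1e-3,
# halve the learning rate after 10 epochs without validation improvement, stop early
# after 40 epochs without improvement.
def train(model, train_loader, val_loader, max_epochs=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=10)
    loss_fn = torch.nn.BCELoss()
    best, patience = float('inf'), 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x).squeeze(1), y.float()).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x).squeeze(1), y.float()).item()
                      for x, y in val_loader) / len(val_loader)
        sched.step(val)
        best, patience = (val, 0) if val < best else (best, patience + 1)
        if patience >= 40:                     # early stopping
            break
```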
3 EVALUATION
We evaluated the performance of our detection algorithm with the test set (the core test of the TIMIT dataset) that contains 192 sentences with 24 different speakers (each speaker has 8 sentences since we excluded two SA sentences as explained in Section 2.1). The total duration of the test set is 545 seconds.
Figure 2 explains how we use the trained network to make inferences. The raw speech is displayed in blue in the top display. We take a sliding window that consists of 3072 samples (192 ms), represented in black in the top display. The window is moved from left to right, one sample at a time. For each location of the sliding window, we apply the network to the speech segment marked by the window. This gives us the posterior probability for the event that the sample at the end of the sliding window belongs to a fricative phoneme. The posterior probabilities are displayed in blue in the bottom display.
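A direct (unoptimized) sketch of this sliding-window inference is given below; a trained `model` is assumed, and windows are evaluated one at a time for clarity (a real implementation would batch them).

```python
import numpy as np
import torch

# Sketch of the sliding-window inference described above: for every position t, the
# network sees the 3072 samples ENDING at t (a causal window) and outputs the
# posterior that sample t belongs to a fricative phoneme.
def posterior_track(model, waveform, seg_len=3072):
    post = np.zeros(len(waveform))
    model.eval()
    with torch.no_grad():
        for t in range(seg_len - 1, len(waveform)):
            seg = waveform[t - seg_len + 1 : t + 1].astype(np.float32)
            seg = seg / np.sqrt(seg.var() + 1e-12)   # per-segment normalization, as in training
            x = torch.from_numpy(seg).view(1, 1, seg_len)
            post[t] = model(x).item()
    return post
```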
When we compare the network’s detections with the ground truth displayed in orange, we observe that the network was able to detect the beginning and the end of the fricative phoneme regions accurately, even though it did not have access to the future context.
To make the final decision based on the posterior probabilities produced by the network, we apply thresholding as follows:
Decision = { Fricative,      if p(x) > threshold;
             Non-fricative,  otherwise.            (1)
Above, p(x) (the blue line in the lower display in Figure 2) is the posterior probability for the event that the last sample of x (segment in black box in the upper display in Figure 2) belongs to a fricative phoneme. We used the validation set to decide a threshold that maximized the Unweighted Average Recall (UAR) of our network. UAR is the average recall of the two classes (fricative and non-fricative phonemes):
UAR = ( Recall_f + Recall_n ) / 2,   Recall_f = TP / (TP + FN),   Recall_n = TN / (TN + FP).
Above, TP , TN , FP , and FN is the number of true positive, true negative, false positive, and false negative detections of fricative phonemes, respectively; Recallf and Recalln is the recall rate for fricative and non-fricative phonemes, respectively. We used UAR as our metric since the test set was highly unbalanced (see # of Samples column in Table 3).
We also report the precision rates and the F1-scores (i ∈ {f, n}):
$$\mathrm{Precision}_f = \frac{TP}{TP + FP}, \qquad \mathrm{Precision}_n = \frac{TN}{TN + FN}, \qquad F_{1,i} = \frac{2 \times \mathrm{Precision}_i \times \mathrm{Recall}_i}{\mathrm{Precision}_i + \mathrm{Recall}_i}.$$
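The metrics above, and the validation-based threshold selection, amount to a few lines of NumPy. The grid over candidate thresholds below is our own choice; the paper only states that the threshold (0.42) maximizes UAR on the validation set.

```python
# Sketch of the evaluation metrics (UAR, per-class recall) and of picking the
# decision threshold that maximizes UAR on validation posteriors.
import numpy as np

def counts(labels, probs, threshold):
    pred = probs > threshold
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp, tn, fp, fn

def uar(labels, probs, threshold):
    tp, tn, fp, fn = counts(labels, probs, threshold)
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def best_threshold(val_labels, val_probs, grid=np.linspace(0.01, 0.99, 99)):
    # The grid resolution is an assumption, not a value from the paper.
    return max(grid, key=lambda t: uar(val_labels, val_probs, t))
```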
Table 3 summarizes the detection results with the selected threshold (0.42). Overall 94.76% UAR was achieved. To the best of our knowledge, this is the best UAR for fricative phoneme detection on the TIMIT dataset.
3.1 COMPARISON WITH THE LITERATURE
In this section we compare our algorithm with other results in the phoneme detection literature. We first summarize the literature that focuses only on fricative phoneme detection (as in this work), then briefly summarize the literature on generic phoneme identification. Finally, we summarize all the results in one table.
3.1.1 FRICATIVE PHONEME DETECTION
Fricative phoneme detection is the task of detecting all occurrences of fricative phonemes in continuous speech. Vydana & Vuppala (2016) investigated fricative phoneme detection using a combination of the S-Transform and a Linear Prediction Coefficients (LPC) approach. In their evaluation, the authors used a post-processing rule to remove spurious detections: if the duration of the detected evidence was shorter than 10 ms, then this evidence was not considered a fricative phoneme. The authors applied majority voting to each phoneme, so that a single detection is made only at the end of each phoneme. Further, the authors evaluated their algorithm by using 40 sentences from the TIMIT dataset, without excluding the SA sentences (see Section 2.1). This, we believe, might have introduced a bias in the reported detection rate making it too optimistic, since the SA sentences are the same for all speakers in the dataset and must have appeared both in the training set and in the test set. The authors reported an 83.81% recall rate for the fricative phoneme class and did not report the recall rate for the non-fricative class.
Ruinskiy & Lavner (2014) investigated fricative phoneme detection by combining time domain and frequency domain features that were fed into a linear classifier. The detections were post-processed using median filtering to avoid short gaps within a sequence of positive detections, and to eliminate detections that were too short to be considered whole phonemes. The authors applied majority voting for each phoneme and only evaluated their algorithm with unvoiced fricative phonemes (/s/, /sh/, /f/, /th/) and not with all 8 fricative phonemes from Table 1. In the evaluation stage, the authors did not exclude the SA sentences, which, as stated above, might have introduced a bias in the reported detection rate making it too optimistic. To the best of our knowledge, this was the best fricative phoneme detection rate reported on the TIMIT dataset to date: 94.08% UAR.
The non-causal post-processing that was used in Vydana & Vuppala (2016) and Ruinskiy & Lavner (2014) might be suitable for some applications, such as speech-to-text conversion. However, it is not suitable for signal processing in hearing aids because, as explained in Section 1, these devices must work in real time with minimal delay.
3.1.2 GENERIC PHONEME IDENTIFICATION
The generic phoneme identification task is similar to the fricative phoneme detection task, with the only difference that the aim is to identify all phonemes individually. This task is more challenging than fricative phoneme detection since the number of classes is higher (as many classes as there are phonemes vs. two classes).
It was widely believed that the processing of the input context forward and backward in time from the speech frame to be identified is crucial for success in phoneme identification (Graves & Schmidhuber, 2005). Hence, almost all phoneme identification studies relied on a system where the speech frame to be identified was located in the middle of a larger input context used by the algorithm to make the decision. This comes with a delay in the phoneme identification. For example, if the input to the algorithm consists of N successive overlapping short speech frames (typically 25 ms long each with an overlap of 10 ms) and the detection about the middle frame is made, a delay of (N − 1)/2 frames is accumulated, assuming N is odd. This is illustrated in Figure A.5.
Siniscalchi et al. (2011) and Yu et al. (2012) utilized a single hidden layer Multi Layer Perceptron (MLP) and a Deep Neural Network (DNN), respectively, where the input context had 11 frames and the frame in the middle was identified. Therefore, there was a 5-frame delay in the identification. Unfortunately, the authors did not report the frame length in seconds. The authors achieved 93.17% and 96.2% recall rates for fricative phonemes on the Nov92 dataset. Siniscalchi et al. (2013) utilized a DNN, using a 310 ms input context (total time span from Frame 1 to Frame N in Figure A.5) that was centered around the frame being detected, leading to a 155 ms delay. The authors reported a 95.4% recall rate for fricative phonemes on the Nov92 dataset. Chen et al. (2014) employed a DNN, where the input context consisted of 9 frames centered around the frame being detected. The authors reported an 88.2% recall rate for fricative phonemes on the Switchboard part of the NIST 2000 Hub5 dataset. All these methods introduce significant detection delays, and the reported rates are not directly comparable to ours because the rates are not provided for the TIMIT dataset. For convenience of the reader we assembled a summary of these results in Table B.1 in the appendix. Zhang et al. (2019); Chorowski et al. (2015); Tóth (2015) addressed the task of generic phoneme detection on TIMIT using neural networks; unfortunately, a comparison with this work is not possible since no fricative-specific results are reported.
Vydana & Vuppala (2016) and Ruinskiy & Lavner (2014) aimed at fricative phoneme detection and evaluated their algorithms on the TIMIT dataset as in this work. Table 4 compares the reported results of these two approaches to ours. Ruinskiy & Lavner (2014) is the paper most closely related to ours, which calls for the detailed comparison provided next.
The authors of Ruinskiy & Lavner (2014) only attempted to detect unvoiced fricative phonemes: /s/, /sh/, /f/ and /th/ and applied majority voting in their evaluation. To make a fair comparison, we also tested our detection with unvoiced fricative phonemes only and also applied majority voting. Table 5 summarizes the results. A further comparison in terms of Receiver Operating Characteristic (ROC) curves between our results and those of Ruinskiy & Lavner (2014) is reported in Figure A.3 in Appendix.
Based on the table, we make the following observations.
3.1.3 DETECTION OF /V/ AND /DH/
Our detection performance increased when the fricatives /v/ and /dh/ were excluded from consideration as was done in Ruinskiy & Lavner (2014), since these two phonemes are voiced fricative phonemes. The reason for this is well-known: the fricative phonemes /v/ and /dh/ have a vowel-like structure, and can be easily confused with vowels, reducing Recall_f. Figure 3 shows the resemblance of the fricative phonemes /v/ and /dh/ to the vowel phonemes /iy/ in beet and /ux/ in toot in the time-frequency plane.
Figure 4 demonstrates the effect of this resemblance on the detections of our network: in Figure 4a, the posterior probability for fricativity of the phoneme /dh/ is not as high as that for fricativity of the phoneme /f/. The same problem exists for /v/ as compared to /z/ in Figure 4b.
3.1.4 MAJORITY VOTING
Majority voting in the evaluation makes a single decision for a phoneme based on all detections for the individual samples constituting that phoneme: if more than half of the samples in the phoneme are classified as fricative by the network, then the phoneme itself is classified as fricative.
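A sketch of this voting rule is given below; `phoneme_bounds` stands for the (start, end) sample indices taken from the TIMIT annotation and is our own notation.

```python
# Majority voting over the per-sample decisions of one utterance: a phoneme is
# declared fricative when more than half of its samples were detected as such.
import numpy as np

def majority_vote(sample_decisions, phoneme_bounds):
    """sample_decisions: 0/1 array per sample; phoneme_bounds: [(start, end), ...]."""
    return [bool(np.mean(sample_decisions[start:end]) > 0.5)
            for start, end in phoneme_bounds]
```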
Majority-voting-based post-processing introduces a delay equal to the duration of a phoneme, making it useless for applications in hearing aids. We applied this post-processing to the detections of our network to make a direct and fair comparison with Ruinskiy & Lavner (2014). As expected, with the post-processing the performance increased to a 96.41% UAR rate. This is about a 39% decrease in error rate as compared to the 94.08% UAR of Ruinskiy & Lavner (2014), a substantial improvement. We stress that even without any post-processing and evaluated with all 8 fricative phonemes, our result (94.76% UAR rate) is still better than that of Ruinskiy & Lavner (2014), which does use post-processing and evaluates only unvoiced fricative phonemes, albeit by not as much.
4 COMPUTATIONAL CONSIDERATIONS
In this paper, when we claim that our method introduces zero delay, we make a statistical statement: detection for the current sample is based only on the past signal. So far, we have ignored the computational aspect of the problem: making an inference using a neural network takes time. This introduces a certain delay that depends primarily on how powerful the computer is, how large the neural network is, and how optimized the software is.
On the one hand, the processor in a hearing aid is much weaker than that in a modern powerful GPU. On the other hand, one can envision installing a dedicated low-power chip for neural network-based inference into future hearing aids; modern phones already contain such chips. No matter what the computational infrastructure is, a smaller model is always more efficient. The basic model described in Table 2 consists of 25 convolutional layers; it is called Net25. We constructed two smaller models as follows. The model Net25h is obtained from Net25 by reducing the number of features in each layer by a factor of two (h stands for half). The model Net19 is obtained from Net25 by removing Convolutional Stage 5; details of these networks are given in Figure A.2 in the Appendix for reference. Table 6 summarizes the detection rates of the three models, the number of parameters in each model, and the processing time per sample for the Keras implementation executed on an Nvidia Quadro P1000 GPU. One can see that Net25h has more than three times fewer parameters than Net25 and about 35% faster processing speed, while its UAR rate is about 6% worse than that of Net25.
Still, there will always be a nonzero time that is necessary to perform the computation. Since we are targeting zero delay, we must accommodate the extra time needed. A natural solution is to try to extrapolate our detections into the future. This is done in the next section.
5 EXTRAPOLATING DETECTIONS INTO THE FUTURE
In this section, we use our network to extrapolate fricative phoneme detections into the future. In Figure 5, the green highlighted segment represents the input to the network. The goal of the network is to use this input to calculate the probability that a sample (marked in orange) located a couple of ms in the future, beyond the right boundary of the green segment (marked in red), belongs to a fricative phoneme.
To accomplish this goal we retrain the network with the data that is relabeled accordingly. For example, the label for the segment highlighted in green in Figure 5 would be based on the identity of the orange sample, not on the identity of the red sample as was done previously.
We considered four different values for the extrapolation gap: detecting 1 ms, 2 ms, 3 ms, and 4 ms ahead into the future. This corresponds to the time difference between the orange line (future) and the red line (present time) in Figure 5.
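The relabeling is a one-line change to the labeling rule. The helper below is a hedged sketch in which `is_fricative_sample` (a per-sample 0/1 array derived from the TIMIT annotation) and the function name are our own.

```python
# Label a training segment ending at sample `segment_end` by the identity of
# the sample `gap_ms` milliseconds ahead (the orange sample in Figure 5),
# instead of by the segment's last sample (the red sample).
def extrapolation_label(is_fricative_sample, segment_end, gap_ms, sr=16000):
    gap = int(round(gap_ms * sr / 1000))        # e.g. 1 ms -> 16 samples at 16 kHz
    target = segment_end + gap
    if target >= len(is_fricative_sample):
        return None                             # no label available near the end of the utterance
    return int(is_fricative_sample[target])
```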
Except for the relabeling of the training set, the training is exactly the same as before. The inference is also unchanged, except that to make a detection for the sample at the orange line, we feed the network the green highlighted segment in Figure 5. Table 7 reports the rates for fricative phoneme detection as a function of the extrapolation gap.
Table 7 clearly shows that the performance decays very gracefully with the increasing extrapolation gap. With the extrapolation gap of 1-2ms we already have enough computational time for inference. Even at 3ms extrapolation gap and solving a harder problem of detecting all 8 fricative phonemes our method still outperforms the method of Ruinskiy & Lavner (2014) in UAR.
6 CONCLUSION
In this work, we provided a state-of-the-art baseline for fricative phoneme detection and demonstrated that this problem may be solved with zero delay. Further, we demonstrated that it is feasible to extrapolate fricative phoneme detections into the future, creating sufficient computational time for inference. Among other things, these results are important for building the future generation of hearing aids. Our results are fully reproducible and we make the complete source code available. We also include all the trained models that were used in evaluations in this paper.
The following interesting research directions remain. Restructure the computational graph of the evaluation pipeline to take advantage of the time-series-like structure of the data and avoid redundant computations. Reduce the size of the network to make it more suitable for a low-power device (hearing aids). This may be achieved, for example, using deep learning compression techniques: pruning, compression through quantization, knowledge transfer through distillation or a teacher-student approach, or using memory-efficient models such as fire modules or depth-wise convolutions.
A SUPPLEMENTARY DETAILS
Table A.1: Phonemes in Figure 1.
PHONETIC CODE: /ih/ /tcl/ /w/ /ix/ /z/; EXAMPLE WORD: bit, it, way, debit, zone.
[Figure A.1 layer listing: Input [3072, 1] → Convolution Stage 1: 32 conv, 48 → Convolution Stages 2 and 3: twelve layers of 8 conv, 64 → Convolution Stage 4: six layers of 8 conv, 80 → Convolution Stage 5: six layers of 8 conv, 96 → Global Average → Output. Residual block legend: batch normalization, weight layer, ReLU non-linearity, repeated twice, with a skip connection (+).]
Figure A.1: Left: Detailed network architecture used in the main paper consists of 25 convolutional layers; it is called Net25. The dashed lines show the downsampling for residual connections and Global Average represents 1D Global Averaging. Right: Illustration of residual connection; weight layer represents the convolutional layer without non-linearity.
Apart from the main network displayed in Figure A.1, we considered smaller model sizes displayed in Figure A.2.
B DETECTING PHONEME BOUNDARIES
Our networks were optimized to solve the two class classification problem on the level of samples: does the current sample belong to a fricative phoneme or not? A related problem is to precisely identify the boundaries of fricative phonemes. A good metric to measure the quality of boundary identification has been proposed in Räsänen et al. (2009). The metric is called R-value and it is defined as follows:
$$R_{\mathrm{val}} = 1 - \frac{\sqrt{(1 - \mathrm{Recall}_{fb})^2 + \mathrm{OS}^2} + \left|\frac{\mathrm{Recall}_{fb} - 1 - \mathrm{OS}}{\sqrt{2}}\right|}{2}, \qquad \mathrm{OS} = \frac{\mathrm{Recall}_{fb}}{\mathrm{Precision}_{fb}} - 1. \qquad (2)$$
where $\mathrm{Recall}_{fb}$ and $\mathrm{Precision}_{fb}$ denote the recall and the precision in identifying the fricative boundary correctly. The boundary is identified correctly if it lies no further than $\mathrm{threshold}_b$ milliseconds from a fricative boundary in the ground truth data. Each ground truth boundary may only match a single boundary in the data.
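A direct transcription of Equation (2) as reconstructed above; the function takes the fricative-boundary recall and precision and returns the R-value.

```python
# R-value of Rasanen et al. (2009), specialized here to fricative boundaries.
import math

def r_value(recall_fb, precision_fb):
    os = recall_fb / precision_fb - 1.0
    r1 = math.sqrt((1.0 - recall_fb) ** 2 + os ** 2)
    r2 = abs((recall_fb - 1.0 - os) / math.sqrt(2.0))
    return 1.0 - (r1 + r2) / 2.0
```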
Interesting work on identifying phoneme boundaries precisely (not specializing to fricative phonemes) has been done in Franke et al. (2016); Michel et al. (2017); Adi et al. (2016). In the table below we report the performance of Net25 on this task for fricative phonemes. We note that
our network was not trained for this task and the results do not seem to be state-of-the-art. We leave it to future work to identify and remove the dominant sources of errors in this task and to understand how critical the precise boundary detection is for the applications in hearing aid devices. | 1. What is the main contribution of the paper regarding fricative phonemes boundary detection?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any concerns regarding the choice of feature extraction method or the comparison with other baselines?
5. Can you provide more details about the experiment setup and the results interpretation? | Review | Review
====================================== Updated Review =====================================
I would like to thank the authors for providing more experiments and details regarding their work.
However, after reading the authors' rebuttal, I still think that there is more work to do in terms of comparison to prior work. That way it would be much clearer what the contribution of this work is, and how it can be used for future research in that field.
Hence, I would like to keep my score as is.
==========================================================================================
This paper describes a method for fricative phoneme boundary detection with zero delay. The authors suggest optimizing a convolutional neural network with a binary cross-entropy loss function to detect such events.
The authors provide results on the TIMIT dataset and compare the proposed model to several baselines.
The task of phoneme boundary detection has been well studied under different setups and is very important for various applications, including the one proposed in this paper.
However, I have some major concerns regarding this paper, which I would like the authors to clarify. Without these, it is hard to understand the contribution in this paper.
1) The authors chose to model the problem from the raw waveform. Although this is gaining popularity in several speech processing tasks, it is not clear why not to use magnitude/MFCC features, for example. If the authors claim that learning from the waveform is better, I suggest providing a comparison to other features.
Additionally, did the authors experiment with simpler architectures? Maybe shallower models? Regarding supervision, did the authors try comparing to the method proposed by [2] but with a unidirectional RNN, similar to [3]?
2) If I understand it correctly, the motivation for this task was: accurate detection of fricative boundaries can be used to shift into lower frequency bands in hearing aids. It seems like the boundaries are more important than other phoneme parts, such as the mid-phoneme, for example.
In that case, a better metric might be Precision + Recall + F1 + R-val near the boundaries (for instance, with a tolerance level of 10-20 ms). Those metrics were suggested in several studies of phoneme segmentation, [1], [2].
3) The comparison in Table 3 is very strange. Results are reported on different datasets. Although the authors mentioned it in the caption, it is still misleading. I suggest the authors either obtain results on the same benchmark or compare to other baselines.
Minor comments:
"If for the majority of the samples in a phoneme our network’s output is greater than the threshold we set" -> not a clear sentence.
[1] Franke, Joerg, et al. "Phoneme boundary detection using deep bidirectional lstms." Speech Communication; 12. ITG Symposium. VDE, 2016.
[2] Michel, Paul, et al. "Blind phoneme segmentation with temporal prediction errors." arXiv preprint arXiv:1608.00508 (2016).
[3] Adi, Yossi, et al. "Automatic Measurement of Voice Onset Time and Prevoicing Using Recurrent Neural Networks." INTERSPEECH. 2016. |
ICLR | Title
FRICATIVE PHONEME DETECTION WITH ZERO DELAY
Abstract
People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms. These algorithms shift some of the sounds from the high frequency band to the lower frequency band where the sounds become more perceptible for the people with the condition. Fricative phonemes have an important part of their content concentrated in high frequency bands. It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme, and kept off at all other times. Therefore, timely (with zero delay) and accurate fricative phoneme detection is a key problem for high quality hearing aids. In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus. All reported results are reproducible and come with easy to use code that could serve as a baseline for future research.

Table 1: Fricative phonemes of English. PHONETIC CODE: /s/ /sh/ /f/ /th/ /z/ /zh/ /v/ /dh/; EXAMPLE WORD: sea, she, fin, thin, zone, azure, van, then.
1 FRICATIVE PHONEMES AND HIGH-FREQUENCY HEARING LOSS
High-frequency hearing loss is a particular type of hearing loss that is mostly age related. People with high-frequency hearing loss have difficulties with word recognition in conversations. This is often caused by the loss of discrimination or detection ability of their ear for high-frequency sounds. Fricative phonemes are particularly hard to recognize for the affected people because these phonemes contain high-frequency information that is crucial for discriminating them. The problem can persist even if the affected people wear modern hearing aids. Researchers have suggested frequency lowering approaches to make fricative phonemes audible to the hearing impaired. These techniques shift the high frequency parts of these sounds into lower frequency bands. For example, Robinson et al. (2009) show that a frequency transposition technique leads to improved detection and recognition ability in people with high-frequency hearing impairment for the sounds /s/ and /z/, which are fricative sounds. In this paper, we use the standard phonetic codes when we refer to the fricative phonemes as listed in Table 1. In order to employ such techniques in hearing aids, one must be able to reliably detect fricative phonemes with zero delay. This means that the algorithm needs to be able to detect that a fricative phoneme is coming next, before this has happened, so that the necessary alterations to the sound can be made in real time. In this paper we show how to use deep learning to solve this problem.
2 DATASET, NETWORK ARCHITECTURE, AND TRAINING
We employ a fully convolutional neural network with residual connections (He et al., 2016). The network takes raw speech segments as inputs, and calculates the posterior probability for the event that the sample at the end of the segment belongs to a fricative phoneme. The detection procedure is explained in Figure 1 where the raw speech segment is in blue, the input segment of the network is highlighted in green and the dashed green lines show the borders of the phonemes as marked in the TIMIT dataset (Garofolo et al., 1993). Based on the input segment, the network detects the identity
(belongs to a fricative phoneme or not) of the sound sample represented with the red vertical line at the end of the segment.
It is important to stress that the red line is at the end of the green segment, and not in the middle of the green segment. This means that we are not using non-causal information to detect the identity of the red sample. In other words, our method introduces zero statistical delay.
Observe that in Figure 1 the green segment spans four phonemes: /tcl/, /w/, /ix/, /z/ (see Table A.1 for an explanation of these phonetic codes). The red sample belongs to the phoneme /z/, which is a fricative phoneme, so that the correct detection in this case is fricative. Note that the fricative phoneme /z/ is barely contained in the green segment. Essentially, the network must learn to make accurate inferences about future phonemes based on the currently observed phonemes. Statistically, this is harder than making a detection about, for example, the middle point of the segment: think about the difference between interpolation and extrapolation.
2.1 DATASET AND DATA PREPROCESSING
The TIMIT Speech Corpus consists of raw speech sentences (at 16 kHz sampling rate) and their corresponding time aligned phonetic labels (Garofolo et al., 1993) as illustrated in Figure 1. It contains a total of 6300 sentences, 10 sentences spoken by each of 630 speakers from 8 major dialect regions of the United States. We used the entire training set of the TIMIT dataset for training (462 speakers), and the core test set for testing (24 speakers). We created the validation set (50 speakers) following the methodology in Halberstadt (1998).
In the TIMIT dataset, there are two sentences that are read by all the speakers; these are called SA sentences. We excluded these two sentences from all of the speakers in all sets (training, validation and test) to prevent a possible bias in the evaluation. Specifically, if SA sentences are not excluded, the model might conceivably memorize the locations of the phonemes in these sentences using the training set, and then make near perfect detection for these sentences in the test set. This would make the model appear to perform better than it does in reality. After the exclusion of SA sentences, there were 8 sentences per speaker left.
To keep the training and validation sets balanced, we ensured an approximately 50/50 fricative-to-non-fricative sample ratio. To do this, we extracted 16 randomly placed 192 ms long (3072 samples) speech segments from each sentence: 8 segments labeled as fricative phoneme and 8 segments labeled as non-fricative phoneme. Recall that the label for a segment is determined based on the identity of the last sample of the segment. If a sentence did not have a fricative phoneme sample, we generated 16 non-fricative segments; this procedure did not affect the (approximate) 50/50 fricative-to-non-fricative sample ratio, since there were not many sentences in the training and validation sets that had no fricative phoneme sample in them.
For each epoch, our training set consisted of 59136 (462 speakers x 8 sentences x 16 random segments) speech segments placed randomly as explained above. One such segment is highlighted in green in Figure 1. From epoch to epoch the training set of 59136 segments was randomly regenerated from the training set of the TIMIT dataset.
We applied the same procedure to form our validation set, but we kept it fixed for different training runs and also for different epochs in same training run. Our validation samples consisted of 6400 (50 speakers x 8 sentences x 16 segments) speech segments.
For each segment $x = [x_1, \ldots, x_{3072}]$ from the training, validation and test sets, we applied standard deviation normalization $x \leftarrow x / \sqrt{\mathrm{Var}(x)}$, where $\mathrm{Var}(x)$ is the variance of the segment.
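The per-sentence segment extraction and normalization can be sketched as follows; `is_fricative_sample` (a per-sample 0/1 array obtained from the TIMIT phonetic annotation), the random generator and the helper name are ours.

```python
# Draw 8 fricative-ending and 8 non-fricative-ending 3072-sample segments from
# one sentence (16 non-fricative segments if the sentence has no fricatives),
# and apply per-segment standard-deviation normalization.
import numpy as np

SEG = 3072

def sample_segments(speech, is_fricative_sample, per_class=8, rng=np.random):
    ends = np.arange(SEG - 1, len(speech))
    fric_ends = ends[is_fricative_sample[ends] == 1]
    nonf_ends = ends[is_fricative_sample[ends] == 0]
    plan = [(fric_ends, 1, per_class), (nonf_ends, 0, per_class)]
    if len(fric_ends) == 0:
        plan = [(nonf_ends, 0, 2 * per_class)]   # sentence without fricatives
    segments, labels = [], []
    for candidates, label, count in plan:
        for end in rng.choice(candidates, size=count, replace=len(candidates) < count):
            seg = speech[end - SEG + 1:end + 1].astype(np.float32)
            seg = seg / np.sqrt(seg.var() + 1e-12)   # x <- x / sqrt(Var(x))
            segments.append(seg)
            labels.append(label)
    return np.stack(segments)[..., np.newaxis], np.array(labels)
```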
2.2 NETWORK ARCHITECTURE
Table 2 shows the layer structure of the model we used. The input layer of the network takes raw speech segments of 3072 samples (192 ms at 16 kHz sampling rate). The input layer is followed by 5 stages. Stage 1 consists of one convolutional layer, stages 2–5 consist of 6 convolutional layers each. For each convolutional layer, in Table 2, the first number in the square brackets indicates the kernel size of the layer, the number after “/” indicates the stride value in the convolution, and the last number indicates the number of filters used in the layer. The strides are only applied to the first convolutional layer of each stage. All of the layers have ReLU non-linearity, except the single neuron at the output layer that has the sigmoid non-linearity. In stages 2–5 we used residual connections (He et al., 2016). If the filter dimensions do not match between two layers for residual connection, we applied zero padding to match the filter dimensions (some residual connections had to be applied among two layers that had different number of filters, since we increased the number of filters as we went deeper in the model). The residual connections were applied around pairs of convolutional layers, connecting the input to the first layer of the pair to the output of the linear filter of the second layer of the pair, immediately before the non-linearity of the second layer. After the convolutional stages, 1D global average pooling layer was used. The total number of parameters of this model is 1.1 million. See Figure A.1 in Appendix for details of the model.
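Since Table 2 itself is not reproduced here, the following Keras sketch is only an approximation of Net25: the filter counts (48, then 64/64/80/96 over the residual stages), the 32- and 8-tap kernels, batch normalization before every ReLU, global average pooling and the sigmoid output follow the description above, while the stride of the first layer in each stage and the handling of mismatched filter counts (a 1x1 projection instead of zero padding) are our assumptions.

```python
# Hedged sketch of a Net25-style residual 1-D CNN; not an exact reproduction of
# Table 2 / Figure A.1 (strides and the shortcut projection are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

L2 = regularizers.l2(1e-4)   # 0.0001 weight decay on convolutional kernels

def conv_bn(x, filters, kernel, strides=1, activation=True):
    x = layers.Conv1D(filters, kernel, strides=strides, padding="same",
                      kernel_regularizer=L2)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x) if activation else x

def residual_pair(x, filters, strides=1):
    shortcut = x
    y = conv_bn(x, filters, 8, strides=strides)
    y = conv_bn(y, filters, 8, activation=False)
    if shortcut.shape[-1] != filters or strides != 1:
        shortcut = layers.Conv1D(filters, 1, strides=strides, padding="same",
                                 kernel_regularizer=L2)(shortcut)
    return layers.ReLU()(layers.Add()([y, shortcut]))

def build_net25_like(stage_filters=(64, 64, 80, 96), stage_stride=2):
    inputs = tf.keras.Input(shape=(3072, 1))
    x = conv_bn(inputs, 48, 32)                      # Convolution Stage 1
    for filters in stage_filters:                    # Stages 2-5: 6 conv layers each
        x = residual_pair(x, filters, strides=stage_stride)
        x = residual_pair(x, filters)
        x = residual_pair(x, filters)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```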
2.3 ALTERNATIVE ARCHITECTURES
We experimented widely with the network architecture: the input segment size, the number of layers, the number of features. The network in Table 2 achieved the best performance in our experiments. As can be seen in Figure 2 below, the input size of 3072 samples corresponds to 2-3 phonemes. Intuitively, this provides abundant information to infer the phoneme that follows. Performance of more shallow networks and networks with fewer features is discussed in Section 4.
2. How does the paper's contribution compare to prior works in phoneme recognition and CNN applications?
3. Are there any concerns regarding the input segment size or computational considerations in the paper?
4. Does the paper provide sufficient motivation and discussion of the chosen architecture and features?
5. Could the authors improve the paper by addressing these concerns and submitting it to a more suitable conference? | Review | Review
## Updated review
I have read the rebuttal. I have some concerns about the new version of the paper: the addition of Section 2.3 about the MFCCs is welcome but feels a bit out of place. The first part about the MFCC is interesting and relevant, but it could be in the introduction as a motivation. The second part about the "proprietary high-quality time-frequency filterbank" is not clear at all. Firstly, results are discussed, so this part should be in the Evaluation section. Secondly, why use proprietary filterbanks and not the standard Mel filterbanks?
Given that the rebuttal and the new version of the paper didn't address my major concerns, I am keeping my original rating.
## Original review
This paper presents an approach to detect fricative phoneme in speech with as little delay as possible, in the context of hearing aids improvement. The model is based on CNN and is trained to detect fricative given the past context. The model is evaluated in terms of recall and compared with recent published works. The results show that the proposed approach outperforms the baselines and yields state-of-the-art performance with no delay. The paper concludes with some analysis on the computational cost and draw possible future work.
This paper should be rejected for the following reasons:
- The novelty is very limited: this work applies a well-known architecture (CNN) to a common problem, phoneme recognition. The only novelty is the zero-delay constraint, which is probably not sufficient for ICLR.
- The significance is also limited given the very specialized application.
- Some references are missing (see below).
- The presented results are not very clear.
- The computational considerations section is interesting but is missing some important elements.
Detailed comments:
- The authors selected the raw speech signal as input to the CNN, which is not trivial and should be motivated and discussed in the paper. For instance, using standard features like Mel filterbanks or MFCC will introduce a delay as they are computed on an overlapping window of 25 ms. Phoneme recognition using raw speech as input to a CNN has been presented before; the authors should cite [1] and [2], for instance.
- Table 4 is confusing as the first four lines are not actually evaluated on TIMIT, so I don't see the point of adding these numbers to the table, as they cannot be compared anyway. I would remove these four lines from the Table.
- In terms of previous works, phoneme recognition on TIMIT is a very popular task, and many others could be cited, such as [3-5].
- On the computation consideration, the analysis is interesting, but a discussion on the size (i.e. number of parameter) of the network is missing: one way to decrease computation time is to have a smaller network, which is in line with the application: hearing aids probably do not have gigabytes of ram available.
- Question about the network: the input segment seems to be of size 3072 samples; why? Is there any motivation for this particular input size?
My review can seem to be a bit harsh, I actually enjoyed the paper, but I don't think ICLR is the right conference for it, and I would advise the authors to improve it and submit it to a speech conference.
References:
[1] Palaz, D., Magimai Doss, M. and Collobert, R.. "Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks." Proceedings of Interspeech 2013.
[2] Zeghidour, N., Usunier, N., Kokkinos, I., Schaiz, T., Synnaeve, G., & Dupoux, E. "Learning filterbanks from raw speech for phone recognition". Proceedings of ICASSP 2018.
[3] Zhang, Ying, Mohammad Pezeshki, Philémon Brakel, Saizheng Zhang, Cesar Laurent, Yoshua Bengio, and Aaron Courville. "Towards end-to-end speech recognition with deep convolutional neural networks." arXiv preprint arXiv:1701.02720 (2017).
[4] Chorowski, Jan K., et al. "Attention-based models for speech recognition." Advances in neural information processing systems. 2015.
[5] Tóth, László. "Phone recognition with hierarchical convolutional deep maxout networks." EURASIP Journal on Audio, Speech, and Music Processing 2015.1 (2015): 25. |
ICLR | Title
FRICATIVE PHONEME DETECTION WITH ZERO DELAY
Abstract
People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms. These algorithms shift some of the sounds from the high frequency band to the lower frequency band where the sounds become more perceptible for the people with the condition. Fricative phonemes have an important part of their content concentrated in high frequency bands. It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme, and kept off at all other times. Therefore, timely (with zero delay) and accurate fricative phoneme detection is a key problem for high quality hearing aids. In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus. All reported results are reproducible and come with easy to use code that could serve as a baseline for future research. 1 FRICATIVE PHONEMES AND HIGH-FREQUENCY HEARING LOSS High-frequency hearing loss is a particular type of hearing loss that is mostly age related. People with high-frequency hearing loss have difficulties with word recognition in conversations. This is often caused by the loss of discrimination or detection ability of their ear for high-frequency sounds. Fricative phonemes are particularly hard to recognize for the affected people because these phonemes contain high-frequency information that is crucial to be able to discriminate them. The problem can persist if the affected people wear modern hearing aids. Researchers suggested frequency lowering approaches to make fricative phonemes audible by the hearing impaired. These techniques shift the high frequency parts of these sounds into lower frequency bands. For example, Robinson et al. (2009) show that a frequency transposition technique leads to improved detection and recognition ability in high frequency hearing impaired people for the sounds /s/ and /z/, which are fricative sounds. In this paper, we use the standard phonetic codes when we refer to the fricative phonemes as listed in Table 1. In order to employ such techniques in hearing aids, one must be able to reliably detect fricative phonemes with zero delay. This means that the algorithm needs to be able to detect that a fricative phoneme is coming next, before this has happened, so that the necessary alterations to the sound can be made in real time. In this paper we show how to use deep learning to solve this problem. Table 1: Fricative phonemes of English. PHONETIC CODE /s/ /sh/ /f/ /th/ /z/ /zh/ /v/ /dh/ EXAMPLE WORD sea she fin thin zone azure van then 2 DATASET, NETWORK ARCHITECTURE, AND TRAINING We employ a fully convolutional neural network with residual connections (He et al., 2016). The network takes raw speech segments as inputs, and calculates the posterior probability for the event that the sample at the end of the segment belongs to a fricative phoneme. The detection procedure is explained in Figure 1 where the raw speech segment is in blue, the input segment of the network is highlighted in green and the dashed green lines show the borders of the phonemes as marked in the TIMIT dataset (Garofolo et al., 1993). Based on the input segment, the network detects the identity
1 FRICATIVE PHONEMES AND HIGH-FREQUENCY HEARING LOSS
High-frequency hearing loss is a particular type of hearing loss that is mostly age related. People with high-frequency hearing loss have difficulties with word recognition in conversations. This is often caused by the loss of discrimination or detection ability of their ear for high-frequency sounds. Fricative phonemes are particularly hard to recognize for the affected people because these phonemes contain high-frequency information that is crucial to be able to discriminate them. The problem can persist if the affected people wear modern hearing aids. Researchers suggested frequency lowering approaches to make fricative phonemes audible by the hearing impaired. These techniques shift the high frequency parts of these sounds into lower frequency bands. For example, Robinson et al. (2009) show that a frequency transposition technique leads to improved detection and recognition ability in high frequency hearing impaired people for the sounds /s/ and /z/, which are fricative sounds. In this paper, we use the standard phonetic codes when we refer to the fricative phonemes as listed in Table 1. In order to employ such techniques in hearing aids, one must be able to reliably detect fricative phonemes with zero delay. This means that the algorithm needs to be able to detect that a fricative phoneme is coming next, before this has happened, so that the necessary alterations to the sound can be made in real time. In this paper we show how to use deep learning to solve this problem.
Table 1: Fricative phonemes of English.
PHONETIC CODE  /s/   /sh/   /f/   /th/   /z/    /zh/    /v/   /dh/
EXAMPLE WORD   sea   she    fin   thin   zone   azure   van   then
2 DATASET, NETWORK ARCHITECTURE, AND TRAINING
We employ a fully convolutional neural network with residual connections (He et al., 2016). The network takes raw speech segments as inputs, and calculates the posterior probability for the event that the sample at the end of the segment belongs to a fricative phoneme. The detection procedure is explained in Figure 1 where the raw speech segment is in blue, the input segment of the network is highlighted in green and the dashed green lines show the borders of the phonemes as marked in the TIMIT dataset (Garofolo et al., 1993). Based on the input segment, the network detects the identity
(belongs to a fricative phoneme or not) of the sound sample represented with the red vertical line at the end of the segment.
It is important to stress that the red line is at the end of the green segment, and not in the middle of the green segment. This means that we are not using non-causal information to detect the identity of the red sample. In other words, our method introduces zero statistical delay.
Observe that in Figure 1 the green segment spans four phonemes: /tcl/, /w/, /ix/, /z/ (see Table A.1 for an explanation of these phonetic codes). The red sample belongs to the phoneme /z/, which is a fricative phoneme, so that the correct detection in this case is fricative. Note that the fricative phoneme /z/ is barely contained in the green segment. Essentially, the network must learn to make accurate inferences about future phonemes based on the currently observed phonemes. Statistically, this is harder than making a detection about, for example, the middle point of the segment: think about the difference between interpolation and extrapolation.
2.1 DATASET AND DATA PREPROCESSING
The TIMIT Speech Corpus consists of raw speech sentences (at 16 kHz sampling rate) and their corresponding time aligned phonetic labels (Garofolo et al., 1993) as illustrated in Figure 1. It contains a total of 6300 sentences, 10 sentences spoken by each of 630 speakers from 8 major dialect regions of the United States. We used the entire training set of the TIMIT dataset for training (462 speakers), and the core test set for testing (24 speakers). We created the validation set (50 speakers) following the methodology in Halberstadt (1998).
In the TIMIT dataset, there are two sentences that are read by all the speakers; these are called SA sentences. We excluded these two sentences from all of the speakers in all sets (training, validation and test) to prevent a possible bias in the evaluation. Specifically, if SA sentences are not excluded, the model might conceivably memorize the locations of the phonemes in these sentences using the training set, and then make near perfect detection for these sentences in the test set. This would make the model appear to perform better than it does in reality. After the exclusion of SA sentences, there were 8 sentences per speaker left.
To keep the training and validation sets balanced, we ensured an (approximately) 50/50 fricative-to-non-fricative sample ratio. To do this, we extracted 16 randomly placed 192 ms (3072 samples) long speech segments from each sentence: 8 segments labeled as fricative phoneme and 8 segments labeled as non-fricative phoneme. Recall that the label for a segment is determined based on the identity of the last sample of the segment. If a sentence did not have a fricative phoneme sample, we generated 16 non-fricative segments; this procedure did not affect the (approximate) 50/50 fricative-to-non-fricative sample ratio, since there were few sentences in the training and validation sets that had no fricative phoneme sample in them.
For each epoch, our training set consisted of 59136 (462 speakers x 8 sentences x 16 random segments) speech segments placed randomly as explained above. One such segment is highlighted in green in Figure 1. From epoch to epoch the training set of 59136 segments was randomly regenerated from the training set of the TIMIT dataset.
We applied the same procedure to form our validation set, but we kept it fixed for different training runs and also for different epochs in same training run. Our validation samples consisted of 6400 (50 speakers x 8 sentences x 16 segments) speech segments.
For each segment $x = [x_1, \ldots, x_{3072}]$ from the training, validation and test sets, we applied standard deviation normalization $x \leftarrow x / \sqrt{\mathrm{Var}(x)}$, where $\mathrm{Var}(x)$ is the variance of the segment.
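To make the preprocessing concrete, the following is a minimal NumPy sketch of the segment extraction and normalization described above. The helper names, the random placement of segment end points, and the small numerical floor in the normalization are our own assumptions, not code from the released implementation.

```python
import numpy as np

SEG_LEN = 3072          # 192 ms at 16 kHz
PER_CLASS = 8           # 8 fricative + 8 non-fricative segments per sentence

def is_fricative_sample(idx, fricative_intervals):
    # fricative_intervals: list of (start, end) sample indices from the phonetic labels
    return any(s <= idx < e for s, e in fricative_intervals)

def extract_segments(waveform, fricative_intervals, rng):
    """Draw random 3072-sample segments; the label of a segment is the
    identity (fricative or not) of its last sample."""
    ends = np.arange(SEG_LEN - 1, len(waveform))
    labels = np.array([is_fricative_sample(i, fricative_intervals) for i in ends])
    segs, ys = [], []
    for target_label in (True, False):
        pool = ends[labels == target_label]
        if pool.size == 0:                  # sentence without fricative samples
            pool, target_label = ends[~labels], False
        picks = rng.choice(pool, size=PER_CLASS, replace=pool.size < PER_CLASS)
        for end in picks:
            x = waveform[end - SEG_LEN + 1:end + 1].astype(np.float32)
            x /= np.sqrt(x.var() + 1e-8)    # standard deviation normalization
            segs.append(x)
            ys.append(int(target_label))
    return np.stack(segs), np.array(ys)

# Example: segments, labels = extract_segments(audio, fric_intervals, np.random.default_rng(0))
```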
2.2 NETWORK ARCHITECTURE
Table 2 shows the layer structure of the model we used. The input layer of the network takes raw speech segments of 3072 samples (192 ms at 16 kHz sampling rate). The input layer is followed by 5 stages. Stage 1 consists of one convolutional layer, stages 2–5 consist of 6 convolutional layers each. For each convolutional layer, in Table 2, the first number in the square brackets indicates the kernel size of the layer, the number after “/” indicates the stride value in the convolution, and the last number indicates the number of filters used in the layer. The strides are only applied to the first convolutional layer of each stage. All of the layers have ReLU non-linearity, except the single neuron at the output layer that has the sigmoid non-linearity. In stages 2–5 we used residual connections (He et al., 2016). If the filter dimensions do not match between two layers for residual connection, we applied zero padding to match the filter dimensions (some residual connections had to be applied among two layers that had different number of filters, since we increased the number of filters as we went deeper in the model). The residual connections were applied around pairs of convolutional layers, connecting the input to the first layer of the pair to the output of the linear filter of the second layer of the pair, immediately before the non-linearity of the second layer. After the convolutional stages, 1D global average pooling layer was used. The total number of parameters of this model is 1.1 million. See Figure A.1 in Appendix for details of the model.
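As an illustration of the description above, here is a minimal Keras sketch of a Net25-style model. The kernel sizes and filter counts follow the text and Figure A.1, but the stride value (Table 2 is not reproduced here) and the exact shortcut downsampling are assumptions, so the parameter count will only approximate the reported 1.1 million.

```python
import tensorflow as tf
from tensorflow.keras import layers

REG = tf.keras.regularizers.l2(1e-4)

def conv_bn(x, filters, kernel, stride=1):
    x = layers.Conv1D(filters, kernel, strides=stride, padding="same",
                      kernel_regularizer=REG)(x)
    return layers.BatchNormalization()(x)

def res_pair(x, filters, kernel=8, stride=1):
    """Residual connection around a pair of conv layers, added after the linear
    filter of the second layer; channels are zero-padded when the filter count grows."""
    y = layers.ReLU()(conv_bn(x, filters, kernel, stride))
    y = conv_bn(y, filters, kernel)
    shortcut = x
    if stride > 1:   # downsample the shortcut (dashed lines in Figure A.1)
        shortcut = layers.AveragePooling1D(stride, padding="same")(shortcut)
    pad = filters - shortcut.shape[-1]
    if pad > 0:
        shortcut = layers.Lambda(
            lambda t, p=pad: tf.pad(t, [[0, 0], [0, 0], [0, p]]))(shortcut)
    return layers.ReLU()(layers.Add()([y, shortcut]))

def build_net25(stage_stride=2):            # the stride value is a guess
    inp = layers.Input(shape=(3072, 1))
    x = layers.ReLU()(conv_bn(inp, 48, 32, stage_stride))          # Stage 1
    for filters, pairs in [(64, 3), (64, 3), (80, 3), (96, 3)]:    # Stages 2-5
        for i in range(pairs):
            x = res_pair(x, filters, stride=stage_stride if i == 0 else 1)
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inp, out)
```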
2.3 ALTERNATIVE ARCHITECTURES
We experimented widely with the network architecture: the input segment size, the number of layers, the number of features. The network in Table 2 achieved the best performance in our experiments. As can be seen in Figure 2 below, the input size of 3072 samples corresponds to 2-3 phonemes. Intuitively, this provides abundant information to infer the phoneme that follows. Performance of more shallow networks and networks with fewer features is discussed in Section 4.
Our network receives the raw speech signal as input. The use of raw waveform (see also Palaz et al. (2013); Zeghidour et al. (2017)) enables us to learn problem-specific low-level features. Nevertheless, in audio processing it is more common to first transform the raw signal into a time-frequency representation, for example, by computing the Mel-Frequency Cepstrum Coefficients (MFCC), and then to feed the resulting features into a classifier. Because of the time-frequency uncertainty principle, the computation of MFCC features requires a window of a certain size, often 25 ms. To compensate for the delay incurred by processing the signal in the window, the classifier must learn to extrapolate detections into the future. While this is possible, as explained in Section 5 below, in general it is a harder task than making predictions about the present sample. In a separate line of work, we used a proprietary high-quality time-frequency filterbank, followed by a recurrent neural network, to solve the fricative detection problem. After extensive experimentation, we were able to get within a few percent of the performance reported in this paper; however, we were not able to surpass the performance of the model in Table 2. The advantage of the time-frequency transformation followed by a recurrent neural network is that it leads to a much more computationally efficient implementation. The value of this work is to push the boundary of what is possible in terms of detection accuracy. We plan to report the somewhat inferior but computationally more efficient results in a separate publication.
2.4 TRAINING
We used the binary cross-entropy loss and the Adam optimizer starting with a 0.001 learning rate. We halved the learning rate if the validation loss did not decrease within 10 epochs, and we applied early stopping if the loss computed on the validation set did not decrease within 40 epochs. We applied 0.0001 l2 weight decay to all of the weights in the convolutional layers, and applied batch normalization after each linear filtering operation and before each non-linearity. See Figure A.1 in Appendix for details on batch normalization. For the results reported in Tables 3, 4 and 5 and Figures A.3 and A.4 we trained the network 10 times and selected the network that performed best on the validation set; for the results reported in Table 7 we trained the network once only.
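The schedule above maps directly onto standard Keras callbacks. The sketch below reuses the hypothetical build_net25 helper from the earlier sketch; the batch size, epoch cap, and the names of the training arrays are our own placeholders, since they are not stated in the text.

```python
import tensorflow as tf

model = build_net25()   # from the architecture sketch above
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

callbacks = [
    # halve the learning rate if the validation loss stalls for 10 epochs
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=10),
    # stop training if the validation loss stalls for 40 epochs
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=40,
                                     restore_best_weights=True),
]

model.fit(train_x[..., None], train_y,
          validation_data=(val_x[..., None], val_y),
          epochs=500, batch_size=64, callbacks=callbacks)
```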
3 EVALUATION
We evaluated the performance of our detection algorithm with the test set (the core test of the TIMIT dataset) that contains 192 sentences with 24 different speakers (each speaker has 8 sentences since we excluded two SA sentences as explained in Section 2.1). The total duration of the test set is 545 seconds.
Figure 2 explains how we use the trained network to make inferences. The raw speech is displayed in blue in the top display. We take a sliding window that consists of 3072 samples (192 ms), represented in black in the top display. The window is moved from left to right, one sample at a time. For each location of the sliding window, we apply the network to the speech segment marked by the window. This gives us the posterior probability for the event that the sample at the end of the sliding window belongs to a fricative phoneme. The posterior probabilities are displayed in blue in the bottom display.
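A minimal sketch of this sliding-window inference, assuming a trained Keras model as above; batching the windows is only for speed and the small floor inside the normalization is our own numerical guard.

```python
import numpy as np

def posterior_track(model, waveform, seg_len=3072, batch=4096):
    """p(last sample of the window belongs to a fricative) for every window position."""
    x = waveform.astype(np.float32)
    n = len(x) - seg_len + 1
    probs = np.empty(n, dtype=np.float32)
    for start in range(0, n, batch):
        stop = min(start + batch, n)
        segs = np.stack([x[i:i + seg_len] for i in range(start, stop)])
        segs /= np.sqrt(segs.var(axis=1, keepdims=True) + 1e-8)
        probs[start:stop] = model.predict(segs[..., None], verbose=0).ravel()
    return probs   # probs[i] is the posterior for waveform sample i + seg_len - 1
```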
When we compare the network’s detections with the ground truth displayed in orange, we observe that the network was able to detect the beginning and the end of the fricative phoneme regions accurately, even though it did not have access to the future context.
To make the final decision based on the posterior probabilities produced by the network, we apply thresholding as follows:
$$\text{Decision} = \begin{cases} \text{Fricative}, & \text{if } p(x) > \text{threshold} \\ \text{Non-fricative}, & \text{otherwise.} \end{cases} \tag{1}$$
Above, p(x) (the blue line in the lower display in Figure 2) is the posterior probability for the event that the last sample of x (segment in black box in the upper display in Figure 2) belongs to a fricative phoneme. We used the validation set to decide a threshold that maximized the Unweighted Average Recall (UAR) of our network. UAR is the average recall of the two classes (fricative and non-fricative phonemes):
$$\mathrm{UAR} = \frac{\mathrm{Recall}_f + \mathrm{Recall}_n}{2}, \qquad \mathrm{Recall}_f = \frac{TP}{TP + FN}, \qquad \mathrm{Recall}_n = \frac{TN}{TN + FP}.$$
Above, $TP$, $TN$, $FP$, and $FN$ are the numbers of true positive, true negative, false positive, and false negative detections of fricative phonemes, respectively; $\mathrm{Recall}_f$ and $\mathrm{Recall}_n$ are the recall rates for fricative and non-fricative phonemes, respectively. We used UAR as our metric since the test set was highly unbalanced (see # of Samples column in Table 3).
We also report the precision rates and the F1-scores (i ∈ {f, n}):
$$\mathrm{Precision}_f = \frac{TP}{TP + FP}, \qquad \mathrm{Precision}_n = \frac{TN}{TN + FN}, \qquad F_{1,i} = \frac{2 \times \mathrm{Precision}_i \times \mathrm{Recall}_i}{\mathrm{Precision}_i + \mathrm{Recall}_i}.$$
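For completeness, a small helper that computes these quantities from the confusion counts; a sketch only, with the threshold chosen on the validation set as described above.

```python
def detection_metrics(tp, tn, fp, fn):
    recall_f = tp / (tp + fn)        # fricative recall
    recall_n = tn / (tn + fp)        # non-fricative recall
    prec_f = tp / (tp + fp)
    prec_n = tn / (tn + fn)
    return {
        "UAR": (recall_f + recall_n) / 2,
        "Recall_f": recall_f, "Recall_n": recall_n,
        "Precision_f": prec_f, "Precision_n": prec_n,
        "F1_f": 2 * prec_f * recall_f / (prec_f + recall_f),
        "F1_n": 2 * prec_n * recall_n / (prec_n + recall_n),
    }
```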
Table 3 summarizes the detection results with the selected threshold (0.42). Overall 94.76% UAR was achieved. To the best of our knowledge, this is the best UAR for fricative phoneme detection on the TIMIT dataset.
3.1 COMPARISON WITH THE LITERATURE
In this section we compare our algorithm with other results in the phoneme detection literature. We first summarize the literature that focused only on fricative phoneme detection (as in this work), then we briefly summarize the literature that focused on generic phoneme identification. Finally, we summarize all the results in one table.
3.1.1 FRICATIVE PHONEME DETECTION
Fricative phoneme detection is the task of detecting all occurrences of fricative phonemes in continuous speech. Vydana & Vuppala (2016) investigated fricative phoneme detection using a combined S-Transform and Linear Prediction Coefficients (LPC) approach. In their evaluation, the authors used a post-processing rule to remove spurious detections: if the duration of the detected evidence was shorter than 10 ms, then this evidence was not considered a fricative phoneme. The authors applied majority voting to each phoneme, so that a single detection is made only at the end of each phoneme. Further, the authors evaluated their algorithm by using 40 sentences from the TIMIT dataset, without excluding the SA sentences (see Section 2.1). This, we believe, might have introduced a bias in the reported detection rate, making it too optimistic, since the SA sentences are the same for all speakers in the dataset and must have appeared both in the training set and in the test set. The authors reported an 83.81% recall rate for the fricative phoneme class and did not report the recall rate for the non-fricative class.
Ruinskiy & Lavner (2014) investigated fricative phoneme detection by combining time domain and frequency domain features that were fed into a linear classifier. The detections were post-processed using median filtering to avoid short gaps within a sequence of positive detections, and to eliminate detections that were too short to be considered whole phonemes. The authors applied majority voting for each phoneme and only evaluated their algorithm with unvoiced fricative phonemes (/s/, /sh/, /f/, /th/) and not with all 8 fricative phonemes from Table 1. In the evaluation stage, the authors did not exclude the SA sentences, which, as stated above, might have introduced a bias in the reported detection rate, making it too optimistic. To the best of our knowledge, this is the best fricative phoneme detection rate reported on the TIMIT dataset to date: 94.08% UAR.
The non-causal post-processing that was used in Vydana & Vuppala (2016) and Ruinskiy & Lavner (2014) might be suitable for some applications, such as speech-to-text conversion. However, it is not suitable for signal processing in hearing aids because, as explained in Section 1, these devices must work in real time with minimal delay.
3.1.2 GENERIC PHONEME IDENTIFICATION
The generic phoneme identification task is similar to the fricative phoneme detection task, with the only difference that the aim is to identify all phonemes individually. This task is more challenging than fricative phoneme detection since the number of classes is higher (as many classes as there are phonemes vs. two classes).
It was widely believed that the processing of the input context forward and backward in time from the speech frame to be identified is crucial for success in phoneme identification (Graves & Schmidhuber, 2005). Hence, almost all phoneme identification studies relied on a system where the speech frame to be identified was located in the middle of a larger input context used by the algorithm to make the decision. This comes with a delay in the phoneme identification. For example, if the input to the algorithm consists of N successive overlapping short speech frames (typically 25 ms long each with an overlap of 10 ms) and the detection about the middle frame is made, a delay of (N − 1)/2 frames is accumulated, assuming N is odd. This is illustrated in Figure A.5.
Siniscalchi et al. (2011) and Yu et al. (2012) utilized a single-hidden-layer Multi-Layer Perceptron (MLP) and a Deep Neural Network (DNN), respectively, where the input context had 11 frames and the frame in the middle was identified. Therefore, there was a 5-frame delay in the identification. Unfortunately, the authors did not report the frame length in seconds. The authors achieved 93.17% and 96.2% recall rates for fricative phonemes on the Nov92 dataset. Siniscalchi et al. (2013) utilized a DNN, using a 310 ms (total time span from Frame 1 to Frame N in Figure A.5) input context that was centered around the frame being detected, leading to a 155 ms delay. The authors reported a 95.4% recall rate for fricative phonemes on the Nov92 dataset. Chen et al. (2014) employed a DNN, where the input context consisted of 9 frames and was centered around the frame being detected. The authors reported an 88.2% recall rate for fricative phonemes on the Switchboard part of the NIST 2000 Hub5 dataset. All these methods introduce significant detection delays, and the reported rates are not directly comparable to ours because the rates are not provided for the TIMIT dataset. For the convenience of the reader we assembled a summary of these results in Table B.1 in the appendix. Zhang et al. (2019); Chorowski et al. (2015); Tóth (2015) addressed the task of generic phoneme detection on TIMIT using neural networks; unfortunately, a comparison with this work is not possible since no fricative-specific results are reported.
Vydana & Vuppala (2016) and Ruinskiy & Lavner (2014) aimed at fricative phoneme detection and evaluated their algorithms on the TIMIT dataset, as in this work. Table 4 compares the reported results of these two approaches to ours. Ruinskiy & Lavner (2014) is the paper most closely related to ours, which calls for the detailed comparison provided next.
The authors of Ruinskiy & Lavner (2014) only attempted to detect unvoiced fricative phonemes: /s/, /sh/, /f/ and /th/ and applied majority voting in their evaluation. To make a fair comparison, we also tested our detection with unvoiced fricative phonemes only and also applied majority voting. Table 5 summarizes the results. A further comparison in terms of Receiver Operating Characteristic (ROC) curves between our results and those of Ruinskiy & Lavner (2014) is reported in Figure A.3 in Appendix.
Based on the table, we observed the following.
3.1.3 DETECTION OF /V/ AND /DH/
Our detection performance increased when fricatives /v/ and /dh/ were excluded from consideration as was done in Ruinskiy & Lavner (2014), since these two phonemes are voiced fricative phonemes. The reason for this is well-known: the fricative phonemes /v/ and /dh/ have a vowel-like structure, and can be easily confused with vowels, reducing $\mathrm{Recall}_f$. Figure 3 shows the resemblance of the fricative phonemes /v/ and /dh/ with the vowel phonemes /iy/ in beet and /ux/ in toot in the time-frequency plane.
Figure 4 demonstrates the effect of this resemblance in the detections of our network: in Figure 4a, the posterior probability for fricativity of the phoneme /dh/ is not as high as that for fricativity of the phoneme /f/. The same problem exists for /v/ as compared to /z/ in Figure 4b.
3.1.4 MAJORITY VOTING
Majority voting in the evaluation makes a single decision for a phoneme based on all detections for the individual samples constituting that phoneme: if more than half of the samples in the phoneme are classified as fricative by the network, then the phoneme itself is classified as fricative.
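A sketch of this voting rule applied to per-sample decisions; phoneme_intervals is assumed to hold (start, end) sample indices taken from the phonetic labels.

```python
import numpy as np

def majority_vote(sample_decisions, phoneme_intervals):
    """Collapse per-sample 0/1 decisions into one decision per phoneme:
    a phoneme counts as fricative if more than half of its samples do."""
    sample_decisions = np.asarray(sample_decisions)
    return [int(sample_decisions[s:e].mean() > 0.5) for s, e in phoneme_intervals]
```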
Majority-voting-based post-processing introduces a delay equal to the duration of a phoneme, making it useless for applications in hearing aids. We applied this post-processing to the detections of our network to make a direct and fair comparison with Ruinskiy & Lavner (2014). As expected, with the post-processing the performance increased to a 96.41% UAR rate. This is about a 39% decrease in error rate as compared to the 94.08% UAR of Ruinskiy & Lavner (2014), a substantial improvement. We stress that even without any post-processing, and evaluated with all 8 fricative phonemes, our result (94.76% UAR rate) is still better than that of Ruinskiy & Lavner (2014), which does use post-processing and evaluates only with unvoiced fricative phonemes, albeit by not as much.
4 COMPUTATIONAL CONSIDERATIONS
In this paper when we claim that our method introduces zero delay, we make a statistical statement: Detection for the current sample is based only on the past signal. So far, we ignored the computational aspect of the problem: Making inference using a neural network takes time for computation. This introduces a certain delay that depends primarily on how powerful the computer is, how large the neural network is, and how optimized the software is.
On the one hand, the processor in a hearing aid is much weaker than that in a modern powerful GPU. On the other hand, one can envision installing a dedicated low-power chip for neural network-based
inference into future hearing aids; modern phones already contain such chips. No matter what the computational infrastructure is, a smaller model is always more efficient. The basic model described in Table 2 consists of 25 convolutional layers; it is called Net25. We constructed two smaller models as follows. The model Net25h is obtained from Net25 by reducing the number of features in each layer by a factor of two (h stands for half). The model Net19 is obtained from Net25 by removing Convolutional Stage 5; details of these networks are given in Figure A.2 in the Appendix for reference. Table 6 summarizes the detection rates of the three models, the number of parameters in each model, and the processing time per sample for the Keras implementation executed on an Nvidia Quadro P1000 GPU. One can see that Net25h has more than three times fewer parameters than Net25 and about 35% faster processing speed, and its UAR rate is about 6% worse than that of Net25.
Still, there will always be a nonzero time that is necessary to perform the computation. Since we are targeting zero delay, we must accommodate the extra time needed. A natural solution is to try to extrapolate our detections into the future. This is done in the next section.
5 EXTRAPOLATING DETECTIONS INTO THE FUTURE
In this section, we use our network to extrapolate fricative phoneme detections into the future. In Figure 5, the green highlighted segment represents the input to the network. The goal of the network is to use this input to calculate the probability that a sample (marked in orange) located a couple of ms in the future, beyond the right boundary of the green segment (marked in red), belongs to a fricative phoneme.
To accomplish this goal we retrain the network with the data that is relabeled accordingly. For example, the label for the segment highlighted in green in Figure 5 would be based on the identity of the orange sample, not on the identity of the red sample as was done previously.
We considered four different values for the extrapolation gap: detecting 1 ms, 2 ms, 3 ms, and 4 ms ahead into the future. This corresponds to the time difference between the orange line (future) and the red line (present time) in Figure 5.
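Relabeling for the extrapolation experiments amounts to shifting the labelling point by the gap; a minimal sketch, reusing the hypothetical is_fricative_sample helper assumed earlier.

```python
FS = 16000  # TIMIT sampling rate

def extrapolated_label(segment_end_idx, gap_ms, fricative_intervals):
    """Label a segment ending at segment_end_idx (the red line in Figure 5)
    by the identity of the sample gap_ms into the future (the orange line)."""
    target = segment_end_idx + int(round(gap_ms * FS / 1000))
    return int(is_fricative_sample(target, fricative_intervals))
```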
Except for the relabeling of the training set, the training is exactly the same as before. The inference is also unchanged, except that to make a detection for the sample at orange line, we fed to the network the green highlighted segment in Figure 5. Table 7 reports the rates for the fricative phoneme detection as a function of the extrapolation gap.
Table 7 clearly shows that the performance decays very gracefully with the increasing extrapolation gap. With the extrapolation gap of 1-2ms we already have enough computational time for inference. Even at 3ms extrapolation gap and solving a harder problem of detecting all 8 fricative phonemes our method still outperforms the method of Ruinskiy & Lavner (2014) in UAR.
6 CONCLUSION
In this work, we provided a state-of-the-art baseline for fricative phoneme detection and demonstrated that this problem may be solved with zero delay. Further, we demonstrated that it is feasible to extrapolate fricative phoneme detections into the future, creating sufficient computational time for inference. Among other things, these results are important for building the future generation of hearing aids. Our results are fully reproducible and we make the complete source code available. We also include all the trained models that were used in evaluations in this paper.
The following interesting research directions remain. Restructure the computational graph of the evaluation pipeline to take advantage of the time-series-like structure of the data and avoid redundant computations. Reduce the size of the network to make it more suitable for a low-power device (hearing aids). This may be achieved, for example, using deep learning compression techniques: pruning, compression through quantization, knowledge transfer through distillation or a teacher-student approach, or using memory-efficient models such as fire modules or depthwise convolutions.
A SUPPLEMENTARY DETAILS
Table A.1: Phonemes in Figure 1.
PHONETIC CODE  /ih/   /tcl/   /w/    /ix/     /z/
EXAMPLE WORD   bit    it      way    debit    zone
[Figure A.1, left panel (layer listing summarized): Input [3072, 1] → Convolution Stage 1: one "32 conv, 48" layer → Convolution Stages 2–3: twelve "8 conv, 64" layers (six per stage) → Convolution Stage 4: six "8 conv, 80" layers → Convolution Stage 5: six "8 conv, 96" layers → Global Average → Output. Right panel (residual block): Batch normalization → Weight layer → ReLU non-linearity → Batch normalization → Weight layer → ReLU non-linearity → +.]
Figure A.1: Left: Detailed network architecture used in the main paper consists of 25 convolutional layers; it is called Net25. The dashed lines show the downsampling for residual connections and Global Average represents 1D Global Averaging. Right: Illustration of residual connection; weight layer represents the convolutional layer without non-linearity.
Apart from the main network displayed in Figure A.1, we considered smaller model sizes displayed in Figure A.2.
B DETECTING PHONEME BOUNDARIES
Our networks were optimized to solve the two class classification problem on the level of samples: does the current sample belong to a fricative phoneme or not? A related problem is to precisely identify the boundaries of fricative phonemes. A good metric to measure the quality of boundary identification has been proposed in Räsänen et al. (2009). The metric is called R-value and it is defined as follows:
$$R_{\mathrm{val}} = 1 - \frac{\sqrt{(1 - \mathrm{Recall}_{fb})^2 + OS^2} + \left| \dfrac{\mathrm{Recall}_{fb} - 1 - OS}{\sqrt{2}} \right|}{2}, \qquad OS = \frac{\mathrm{Recall}_{fb}}{\mathrm{Precision}_{fb}} - 1. \tag{2}$$
where $\mathrm{Recall}_{fb}$ and $\mathrm{Precision}_{fb}$ denote the recall and the precision in identifying the fricative boundary correctly. The boundary is identified correctly if it lies no further than $\mathrm{threshold}_b$ milliseconds from a fricative boundary in the ground truth data. Each ground truth boundary may only match a single boundary in the data.
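A direct transcription of Eq. (2) as a helper function; the boundary matching that produces the recall and precision inputs is assumed to be done separately with the threshold_b tolerance.

```python
import math

def r_value(recall_fb, precision_fb):
    """R-value of Räsänen et al. (2009) for fricative boundary detection."""
    over_segmentation = recall_fb / precision_fb - 1.0
    r1 = math.sqrt((1.0 - recall_fb) ** 2 + over_segmentation ** 2)
    r2 = abs((recall_fb - 1.0 - over_segmentation) / math.sqrt(2.0))
    return 1.0 - (r1 + r2) / 2.0
```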
Interesting work on identifying phoneme boundaries precisely (not specializing to fricative phonemes) has been done in Franke et al. (2016); Michel et al. (2017); Adi et al. (2016). In the table below we report the performance of Net25 on this task for fricative phonemes. We note that
our network was not trained for this task and the results do not seem to be state-of-the-art. We leave it to future work to identify and remove the dominant sources of errors in this task and to understand how critical the precise boundary detection is for the applications in hearing aid devices. | 1. What is the main contribution of the paper in terms of applying supervised deep learning methods?
2. What are the challenges addressed in the paper, and how does the proposed approach overcome them?
3. Can you provide more insights into the working mechanism of the deep convolutional neural network used in the paper?
4. How does the proposed approach compare to existing works in terms of accuracy and computational cost?
5. What are the limitations of the paper, and how could the method be improved further? | Review | Review
This paper applies supervised deep learning methods to detect the exact duration of a fricative phoneme in order to improve practical frequency lowering algorithms. A major challenge compared to existing work is to have an algorithm with nearly zero delay while preserving detection accuracy. A deep convolutional neural network is trained for this purpose and it is validated on the TIMIT dataset.
After careful preprocessing of the data, long segments of raw audio are given as input to the convolutional net. It is trained as a binary classification problem. Therefore, for each different phoneme, a different network is needed. To improve the accuracy, majority voting is also adopted. This, however, increases the computational cost. To address this issue, an extrapolation detection problem is formulated to predict the fricative phoneme a few ms in advance. Extensive numerical results show that the approach still outperforms the method of Ruinskiy & Lavner (2014) in Unweighted Average Recall.
I find the accuracy attained by the neural nets quite impressive, although more insight would be welcome to understand what is going on. This is nonetheless an interesting application of deep learning that is useful for real-life problems. If the method could be tested on another dataset, the result would be more convincing.
ICLR | Title
On the Critical Role of Conventions in Adaptive Human-AI Collaboration
Abstract
Humans can quickly adapt to new partners in collaborative tasks (e.g. playing basketball), because they understand which fundamental skills of the task (e.g. how to dribble, how to shoot) carry over across new partners. Humans can also quickly adapt to similar tasks with the same partners by carrying over conventions that they have developed (e.g. raising hand signals pass the ball), without learning to coordinate from scratch. To collaborate seamlessly with humans, AI agents should adapt quickly to new partners and new tasks as well. However, current approaches have not attempted to distinguish between the complexities intrinsic to a task and the conventions used by a partner, and more generally there has been little focus on leveraging conventions for adapting to new settings. In this work, we propose a learning framework that teases apart rule-dependent representation from convention-dependent representation in a principled way. We show that, under some assumptions, our rule-dependent representation is a sufficient statistic of the distribution over best-response strategies across partners. Using this separation of representations, our agents are able to adapt quickly to new partners, and to coordinate with old partners on new tasks in a zero-shot manner. We experimentally validate our approach on three collaborative tasks varying in complexity: a contextual multi-armed bandit, a block placing task, and the card game Hanabi.
1 INTRODUCTION
Humans collaborate well together in complex tasks by adapting to each other through repeated interactions. What emerges from these repeated interactions is shared knowledge about the interaction history. We intuitively refer to this shared knowledge as conventions. Convention formation helps explain why teammates collaborate better than groups of strangers, and why friends develop lingo incomprehensible to outsiders. The notion of conventions has been studied in language (Clark & Wilkes-Gibbs, 1986; Clark, 1996; Hawkins et al., 2017; Khani et al., 2018) and also alluded to in more general multiagent collaborative tasks (Boutilier, 1996; Stone et al., 2010; Foerster et al., 2019; Carroll et al., 2019; Lerer & Peysakhovich, 2019; Hu et al., 2020). For example, Foerster et al. (2019) trained agents to play the card game Hanabi, and noted the emergent convention that “hinting for red or yellow indicates that the newest card of the other player is playable”.
One established characterization of a convention that is commonly used (Boutilier, 1996; Hawkins et al., 2017; Lerer & Peysakhovich, 2019) is an arbitrary solution to a recurring coordination problem (Lewis, 1969). A convention is thus one of many possible solutions that a group of partners happens to converge to. This is in contrast to problem solutions that are enforced by the rule constraints, and would have arisen no matter how the partners collaborated and what behavior they converged to. Success in a collaborative task typically involves learning both types of knowledge, which we will refer to as convention-dependent and rule-dependent behavior. The distinction between these two types of behavior has been studied extensively in the linguistic literature (Franke et al.; Brochhagen, 2020). In this work, we focus on the less-explored setting of implicit communication, and provide a concrete approach to learn representations for these two different types of behavior.
In the context of multi-agent or human-AI collaboration, we would like our AI agents to adapt quickly to new partners. AI agents should be able to flexibly adjust their partner-specific convention-
dependent behavior while reusing the same rule-dependent behavior to simplify the learning problem – just like how humans quickly adapt when playing basketball with a new friend, without the need to re-learn the rules of the game. We would also like our AI agents to coordinate well on similar tasks when paired with the same partners – just like how humans can coordinate better when playing a new sport but with an old friend.
Although many existing works similarly recognize conventions as an important factor of successful collaboration, they do not focus on separating conventions from rules of the task. This means that adapting to a new partner can be as difficult as learning a new task altogether. For example, existing techniques that emphasize modeling the partner’s policy, such as theory of mind (Simon, 1995; Baker et al., 2017; Brooks & Szafir, 2019) or multi-agent reinforcement learning (Foerster et al., 2018), attempt to model everything about the agent’s belief of the partner’s state and policies. Such belief modeling approaches very quickly become computationally intractable, as opposed to solely focusing on the relevant conventions developed with a partner.
To address the challenges above, we propose a framework that explicitly separates conventiondependent representations and rule-dependent representations through repeated interactions with multiple partners. After many rounds of solving a task (e.g. playing basketball) with different partners, an AI agent can learn to distinguish between conventions formed with a specific partner (e.g. pointing down signals bounce pass) and intrinsic complexities of the task (e.g. dribbling). This enables us to leverage the representations separately for fast adaptation to new interactive settings.
In the rest of this paper, we formalize the problem setting, and describe the underlying model of partners and tasks. Next, we present our framework for learning separations between rule and convention representations. We show that, under some conditions, our rule representation learns a sufficient statistic of the distribution over best-response strategies. We then run a study on humanhuman interactions to test if our hypothesis – that partners can carry over the same conventions across tasks – indeed holds for human subjects. Finally, we show the merits of our method on 3 collaborative tasks: contextual bandit, block placing, and a small scale version of Hanabi (Bard et al., 2020).
2 RELATED WORK
Convention formation has been studied under the form of iterated reference games (Hawkins et al., 2017; 2019), and language emergence (Mordatch & Abbeel, 2018; Lazaridou et al., 2017). In these works, partners learn to reference objects more efficiently by forming conventions through repeated interactions. But in these tasks, there is little intrinsic task difficulty beyond breaking symmetries.
In multi-agent reinforcement learning, techniques such as self-play, cross-training, or opponent learning (Foerster et al., 2018) have been used to observe emergent behavior in complex, physical settings (Nikolaidis & Shah, 2013; Liu et al., 2019; Baker et al., 2020). In addition, convention formation has been shown in tasks like Hanabi (Foerster et al., 2019; Hu & Foerster, 2019), Overcooked (Carroll et al., 2019; Wang et al., 2020), and negotiation tasks (Cao et al., 2018), where agents learn both how to solve a task and how to coordinate with a partner. But, these works qualitatively analyze emergent conventions through post-hoc inspection of the learned policies, and do not learn representations
that separate conventions from rule-dependent behavior. This makes it difficult to leverage learned conventions when adapting to new partners. More recently, an approach was proposed to design agents that avoid relying on conventions altogether; instead, the agents aim for unambiguous but potentially sub-optimal strategies (Hu et al., 2020).
Other approaches for studying conventions include game theory (Lerer & Peysakhovich, 2019) and theory of mind (Simon, 1995; Rabinowitz et al., 2018). Pragmatic reasoning (Goodman & Frank, 2016) is another framework that aims to estimate beliefs over the partner’s states and policies, but all these frameworks that rely on modelling beliefs quickly become intractable. Instead of approximating these belief modelling frameworks (Monroe & Potts, 2015), we instead focus on learning representations for rules and conventions. Our work shares similarities with the paradigms of meta-learning (Schmidhuber, 1987; Finn et al., 2017) or multi-task learning (Caruana, 1997), if we view collaboration with different partners as new but related tasks. However, we are also interested in how conventions persist across tasks with similar symmetries.
More related to our work is perhaps modular policy networks (Devin et al., 2017), which learns policy modules and composes them together to transfer across robot arms with different degrees of freedom and across different tasks. However, their approach relies on input cleanly split between background observations of the task, and robot’s internal observations. We do not assume such a split in the input; our approach tries to learn the separation between rules and partners with one input channel.
3 PRELIMINARIES
We begin by formally defining our problem setting as a two-player Markov Decision Process (MDP).
Two-Player MDP We consider a two-agent MDP with identical payoffs, which is a tuple {S, {Ae, Ap}, P,R}. S is a set of states, Ae is a set of actions for agent e, Ap is a set of actions for agent p. In general, e represents the ego agent (that we control), and p represents agents who partner with e. P : S × Ae × Ap × S → [0, 1] is the probability of reaching a state given the current state and actions of all agents, and R : S × S → R is the real-valued reward function. Since we are interested in repeated interactions, we consider finite-horizon MDPs. A policy π is a stochastic mapping from a state to an action. For our setting, our policy π = (πe, πp) has two parts: πe : S → Ae and πp : S → Ap, mapping the state into actions for agent e and p respectively. We also consider collaborative tasks that are partially observable in our experiments. In addition, some tasks are turn-based, which can be handled by including the turn information as part of the state, and ignoring the other player’s actions. Here, we focus on tasks with discrete state and action spaces.
Running Example: Collaborative Contextual Bandit We describe a collaborative task to ground our discussion around adapting to new partners, and adapting to new tasks with old partners. Consider a contextual bandit setting with 2 contexts, 4 actions, and a prize for each action. The two players each independently pick an arm to pull, and they score prize(a) points if they both picked the same arm a; otherwise, they score 0 points. An example is depicted in Figure 2, where green boxes represent arms with prize 1, the rest have prize 0, and red and blue circles show the player’s actions.
In Task 1 of Figure 2, context A only has one good arm a1 while the context B has two good arms a2 and a4. After repeated rounds of playing, two agents can eventually converge to coordinating on a convention that chooses one of the two good arms in context B (e.g. selecting the leftmost good arm a2). When the task shifts slightly as shown in Task 2 but context B remains the same across the two
tasks, we can reasonably expect the partners to adhere to the same convention they developed for context B of Task 1 when playing context B of Task 2.
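To make the running example concrete, here is a minimal sketch of this collaborative bandit; the class name and the particular prize layout are illustrative assumptions, since the exact arm positions of Figure 2 are not reproduced here.

```python
import numpy as np

class CollabContextualBandit:
    """Both players pick an arm; they score prize[context, arm] only if they agree."""
    def __init__(self, prize, seed=0):
        self.prize = np.asarray(prize, dtype=float)   # shape (n_contexts, n_arms)
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.context = int(self.rng.integers(self.prize.shape[0]))
        return self.context

    def step(self, arm_ego, arm_partner):
        if arm_ego != arm_partner:
            return 0.0
        return float(self.prize[self.context, arm_ego])

# Illustrative Task 1: context A has a single good arm, while context B has two
# equally good arms, so context B requires a convention to break the tie.
task1 = CollabContextualBandit(prize=[[1, 0, 0, 0],
                                      [0, 1, 0, 1]])
```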
There are two axes of generalization at play. First, for a fixed task we would like to learn the underlying structure of the task (e.g. green arm locations) to quickly adapt to a new partner when playing the same task. Second, for a fixed partner we would like to keep track of developed conventions to quickly coordinate with this partner on a new task. In the next sections, we define the notion of a new task and a new partner, and propose a framework to effectively generalize across both dimensions.
4 MODEL OF PARTNERS AND TASKS
We consider a family of tasks that share the same state space, action space, and transition dynamics (all of which we refer to as the domain), but can differ in the reward function of the underlying MDP. We also consider a family of partners that our ego agent would like to coordinate with. The underlying model governing the actions taken by a partner at a state of a given task can be seen in Figure 3. For a new task, a new reward function is sampled from $\theta_R$, while the domain of the MDP is fixed. For a new partner, we sample from $\theta_\rho$ a new function $\rho$, which maps the Q-function at a state of the task to an action. We refer to $\rho$ as the convention of a partner. In other words, the action $a^i_{t,s}$ of a partner $i$ at state $s$ of task $t$ depends on the partner's conventions and (indirectly through the Q-function at state $s$) the reward of the task. $\theta_\rho$ is the distribution over conventions and can correspond to, for example, the distribution over emergent conventions from AI agents trained from self-play, or can be derived from human partners.
More formally, for a fixed domain let $Q^R_s(a_p) = \max_{a_e} Q^R(s, a_e, a_p)$, where $Q^R$ is the optimal Q-function for the two player MDP with reward $R$ (Boutilier, 1996; Oliehoek et al., 2008). In other words, $Q^R_s$ is the Q-function from the partner's perspective at state $s$ in the task with reward $R$ assuming best response by the ego agent. Then, the convention of a partner $\rho : S \times \mathbb{R}^{|A_p|} \to A_p$ determines the partner's action at state $s$, given by $\rho^i(s, Q^R_s)$. For example, we might expect the distribution of actions across different partners at a state $s$ to follow Boltzmann rationality (Ziebart et al., 2008): $\mathbb{E}_{\rho \sim \theta_\rho}\left[\mathbb{1}[\rho(s, Q^R_s) = a_p]\right] \propto \exp(Q^R_s(a_p))$. Lastly, we assume that behavior at different states is uncorrelated: a choice of action at one state tells us nothing about actions at other states.
Given this formulation, we can hope to learn the underlying structure of a task by playing with a number of partners on the same task. For example, if many sampled partners all make the same action $a_p$ at a state $s$, it is likely that $Q^R_s(a_p)$ is much higher than the second best action, i.e., the optimal action is dependent on the rules of the task as opposed to the arbitrariness of partner strategies. Therefore a new partner will likely take action $a_p$ as well. Additionally, when coordinating with the same partner across different tasks, we can expect to see developed conventions persist across states with similar Q-functions. In particular, if $Q^{R_1}_s = Q^{R_2}_s$ for two different tasks with reward functions $R_1$ and $R_2$, then $\rho^i(s, Q^{R_1}_s) = \rho^i(s, Q^{R_2}_s)$, so a partner will take the same action at state $s$ across the two tasks (e.g. context B in Figure 2 across Task 1 and 2).
We note that the roles of partners and tasks in our model are asymmetrical, assuming rational partners. A task can completely determine the actions of all partners at states with one good action (e.g. context A in Figure 2), whereas a partner cannot blindly pick the same action across all tasks. This asymmetry is reflected in our learning framework and our setup of adapting to new partners and new tasks.
5 LEARNING RULES AND CONVENTIONS
With the goal of adapting to new partners and tasks in mind, we propose a framework aimed at separating the rule representations associated with a task from the convention representations
associated with a partner. At training time, we assume access to a task and a set of partners sampled from the same partner distribution θρ. When adapting to a new partner sampled from θρ, we would like to learn a new convention representation but keep the learned rule representation. When adapting to a new task with the original partners, we would like the opposite: to learn a new rule representation but keep the learned convention representation. In our setup, the ego agent knows the identity of the tasks and partners, and the goal is to maximize the total reward with a new task and partner (the combination of which may be new to the ego agent). Naturally, this points us towards a modular architecture that enables integrating new combinations of partner and task representations.
We propose the architecture shown in Figure 4, where we learn to play with each of the training partners via reinforcement learning. Each rectangular module represents a multi-layer perceptron, and each box with grey bars represents a distribution over actions in the action space of the MDP. When training with multiple partners on the same task, the policy network uses one single shared task module $g^t$, and uses one partner module $g^p_i$ for each partner $i$. The task module takes the state observations $s$ as input, and outputs latent variables $z$ and an action distribution $g^t(a|s)$. Then, each partner module takes $z$ as input, and outputs an action distribution $g^p_i(a|z)$. Here, $g^p_i(a|z)$ does not represent partner $i$'s action distribution, but rather represents our ego agent's action distribution in response to partner $i$. The policy $\pi_i$ of the policy network for partner $i$ is set to be proportional to the product of the action distributions.
$$\pi_i(a|s) = \frac{1}{Z_i}\, g^t(a|s) \cdot g^p_i(a|z)$$
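A minimal PyTorch sketch of this composition; the hidden sizes and the exact parameterization of the two modules are assumptions, and the point is only that the two action distributions are multiplied and renormalised.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskModule(nn.Module):
    """g^t: state -> (latent z, action logits); shared across partners."""
    def __init__(self, state_dim, z_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.to_z = nn.Linear(hidden, z_dim)
        self.to_logits = nn.Linear(hidden, n_actions)

    def forward(self, s):
        h = self.body(s)
        return self.to_z(h), self.to_logits(h)

class PartnerModule(nn.Module):
    """g^p_i: latent z -> action logits; one module per training partner."""
    def __init__(self, z_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, z):
        return self.net(z)

def policy(task_mod, partner_mod, s):
    """pi_i(a|s) proportional to g^t(a|s) * g^p_i(a|z)."""
    z, task_logits = task_mod(s)
    log_prod = F.log_softmax(task_logits, -1) + F.log_softmax(partner_mod(z), -1)
    return F.softmax(log_prod, -1)   # the 1/Z_i renormalisation
```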
As it stands, many possible combinations of task and partner module outputs could lead to optimal action distributions $\pi_i(a|s)\ \forall i$. To encourage the task/partner modules to learn the rule/convention representations respectively, we include a regularization term that pushes $g^t(a|s)$ to be the marginal best-response strategy over partners at state $s$. In practice, this amounts to minimizing the Wasserstein distance between the task module's output and the average best-response strategies $\pi_i$.
$$D(s) = \sum_{a \in A} \left| g^t(a|s) - \frac{1}{n} \sum_i \pi_i(a|s) \right|$$
By pushing the task module $g^t$ to learn the marginal best-response strategy, we can cleanly capture the rule-dependent representations of the task, to be used when adapting to a new partner. In fact, under some assumptions, we can show that $g^t$ is learning the optimal representation, as it is a sufficient statistic of the distribution over best-response strategies across possible partners.
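Continuing the sketch above, $D(s)$ can be computed per state as the sum of absolute differences between the task module's own distribution and the average of the per-partner responses; this is our reading of how the regulariser would be implemented, not code from the paper.

```python
import torch
import torch.nn.functional as F

def marginal_regulariser(task_mod, partner_mods, s):
    """D(s): distance between g^t(.|s) and the average response pi_i(.|s)
    over the n training partners (modules as in the previous sketch)."""
    z, task_logits = task_mod(s)
    g_t = F.softmax(task_logits, -1)
    responses = []
    for pm in partner_mods:
        logp = F.log_softmax(task_logits, -1) + F.log_softmax(pm(z), -1)
        responses.append(F.softmax(logp, -1))
    marginal = torch.stack(responses).mean(dim=0)
    return (g_t - marginal).abs().sum(dim=-1).mean()
```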
In particular, for a fixed task let f(s, ae, ap) represent the probability of the ego agent taking action ae using its best-response strategy to partner action ap at state s. So, ∑ ae f(s, ae, ap) = 1. We say the ego agent is deterministic if it can only take on deterministic strategies: ∀s, ap ∃ ae : f(s, ae, ap) = 1. Lemma 1. For a deterministic ego agent, the marginal best-response strategy across partners is a sufficient statistic of the distribution of best-response strategies across partners.
Next, we lay out the learning procedure in more detail in Algorithm 1. We use the task module $g^t$ and the partner module $g^p_i$ when playing with partner $i$. We push the task module to learn the marginal best-response action by adding the loss term $D$ on top of a standard policy gradient (PG) loss term.
Algorithm 1: Learning Separate Representations for Partners and Tasks
Input: An MDP $M$, $n$ partners $\rho_1, \ldots, \rho_n \sim \theta_\rho$
Output: Policy network modules for adapting to new partners from $\theta_\rho$ and new tasks
1: Initialize the task module $g^t$ and $n$ partner modules $g^p_1, \ldots, g^p_n$
2: $G, T \leftarrow \{g^t, g^p_1, \ldots, g^p_n\}$, number of iterations
3: for $j \leftarrow 1, T$ do
4:   for $i \leftarrow 1, n$ do
5:     Collect rollouts $\tau = (s, a, r)$ using $\{g^t, g^p_i\}$ with partner $\rho_i$ on $M$
6:     Update $G$ with loss $L_{PG}(g^t, g^p_i, \tau) + \lambda\, \mathbb{E}_s[D(s)]$
Return: $G$
Adapting to new partner When adapting to a new partner, we reuse the task module $g^t$ along with a reinitialized partner module, and train both modules. At states with only one optimal action $a^\star$ regardless of the partner's convention, the marginal best-response strategy will concentrate on $a^\star$. As such, the task module $g^t$ will immediately push the ego agent to choose $a^\star$, improving adaptation time. In contrast, at states with many possible optimal actions depending on the partner's conventions, the marginal best-response strategy will spread the action distribution among the possible optimal actions, allowing the ego agent to explore amongst the optimal options with the new partner.
Coordinating on a new task When faced with a new task, we would like to recognize the parts of the task where conventions with old partners on an old task carry over. In particular, in states for which the joint Q-function has not changed, our partner models will take the same actions as before, and our ego agent can employ the same response as before. With this in mind, suppose we have trained a task module $g^t$ and a set of $2n$ partner modules $g^p_1, \ldots, g^p_{2n}$ with a set of partners on an old task. We can learn to coordinate on a new task with partners $1$ to $n$ by fine-tuning the task module $g^t$, paired with partner modules $g^p_1, \ldots, g^p_n$ with frozen parameters. At states where the joint Q-function has not changed, the same latent variable outputs $z$ will work well for all partners, so the task module will output the same $z$ and the actions of the ego agent will not change. Then, we can coordinate with partners $n+1$ to $2n$ in a zero-shot manner by pairing $g^t$ with partner modules $g^p_{n+1}, \ldots, g^p_{2n}$.
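A sketch of the two adaptation modes, using the hypothetical TaskModule / PartnerModule classes from above; the learning rates and the deep copy of the task module are our own choices.

```python
import copy
import itertools
import torch

def adapt_to_new_partner(task_mod, z_dim, n_actions, lr=1e-3):
    """New partner, same task: keep the trained task module, re-initialise a
    fresh partner module, and train both."""
    new_partner = PartnerModule(z_dim, n_actions)
    params = itertools.chain(task_mod.parameters(), new_partner.parameters())
    return new_partner, torch.optim.Adam(params, lr=lr)

def adapt_to_new_task(task_mod, partner_mods, lr=1e-3):
    """Same partners, new task: freeze the partner modules (they hold the
    conventions) and fine-tune only the task module."""
    tuned_task = copy.deepcopy(task_mod)
    for pm in partner_mods:
        for p in pm.parameters():
            p.requires_grad_(False)
    return tuned_task, torch.optim.Adam(tuned_task.parameters(), lr=lr)
```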
5.1 UNDERSTANDING CONVENTIONS
When adapting to a new task with the same partner, we are relying on the hypothesis that our partners will carry over the same conventions developed on an earlier task, at states where the Q-function has not changed. Here, we would like to test this hypothesis and study if and when conventions persist across tasks, through a study of human-human interactions in the collaborative contextual bandit task.
User Study: Conventions in Human-Human Interactions. We conducted a study with 23 pairs of human partners (46 subjects), using the collaborative contextual bandit task from Section 3. The study was conducted via an online interface, with partners in separate rooms. The actions of partners are revealed to each other at the end of each try, and the partners had no other form of communication.
- Independent Variables: We vary the prize of the arms at different contexts, i.e., which arms are good to coordinate on. Pairs of partners were given 5 tries to coordinate on 3 different contexts (as shown
in Fig. 5). We then ask the same pair of partners to coordinate on the Test task with 3 new contexts. - Dependent Measures: We measure the score of each pair’s first try at each context. The maximum score in each run for each context is 1, corresponding to the two partners picking the same good arm. - Hypothesis: Our hypothesis is that partners can coordinate well on the test task by carrying over over conventions developed on the train task. - Results: On the left of Figure 5 we plot the average score of the first try (zero-shot) of our participants at solving each context in each task. In the Train task (grey bars), the partner pairs are coordinating for the first time. As expected, they generally had no trouble coordinating on Train-A (since there is only 1 good option), but scored lower on Train-B and Train-C (since there are 2 good options each).
Immediately following the 5 trials on the training task, the partner pairs coordinated on each context of the Test task (orange bars in Fig. 5), to see if they can leverage developed conventions on a new task. On Test-A, they did well since there is only one good option. However, they coordinated much better on Test-C compared to Test-B, even though both contexts have two good options. The main difference between the two contexts is that Test-C shares the same set of optimal actions (a2 and a4) as Train-C, whereas Test-B does not share the same set of optimal actions as Train-B.
Overall, this study suggests that human partners can successfully carry over conventions developed through repeated interactions with their partners, and further coordinate better when carrying over conventions across tasks at states where the set of optimal actions are similar (e.g. the higher gap for same context such as context C across the train and test tasks).
6 EXPERIMENTS
We experiment with our approach on three coordination tasks: collaborative contextual bandit, collaborative block placing, and Hanabi. We show results on adaptation to new partners for a fixed task, and zero-shot coordination with same partners on a new task. We compare with 1) a baseline (BaselineAgg) that learns a shared policy network to play with all training partners, using the average of the gradients with each partner and 2) a baseline (BaselineModular) that similarly uses a modular policy approach with separate modules for each partner, but does not explicitly represent the marginal best-response strategy, and 3) First-Order MAML (Nichol et al., 2018). For our method, we vary λ from 0.0 to 0.5: a higher value pushes the task module output to more closely match the marginal action distribution. We also test the use of a low dimensional z (the interface between the task and partner module). In general, we find that our method with regularization performs the best, and that it is unclear if using a low dimensional z is helpful. The partners in our experiments were either generated via self-play or were hand-designed to use varying conventions.
Contextual Bandit We study the collaborative contextual bandit task described in Section 3, using 4 contexts and 8 arms. We study variations of the task by altering the number of contexts with more than one good option (i.e. symmetries). We write “Arms m” to refer to a task having m contexts with symmetries – coordination should be easier for tasks having fewer contexts with symmetries.
In Figure 6a-c we show results on adaptation to new hand-designed partners (Figure 11 for self-play partners), varying the number of contexts with symmetries. We also plot the green oracle curve, by using our learning framework along with oracle best-response marginals (assuming uniform distribution over optimal actions across partners). To see if our method learns the correct marginals, we measure the Wasserstein distance between the learned marginals and the oracle marginals, in Figure 7. Using λ = 0.5 produces task module outputs that closely match the oracle marginals. To test zero-shot coordination with the same partners on new tasks, we tweak the task by altering contexts with only one good action, keeping the contexts with symmetries the same (similar to the change in Figure 2). We use hand-designed partners to make sure that partners use the same conventions at unchanged contexts. In Figure 10 we see that our method outperforms BaselineModular.
We include an additional plot in Figure 6d, where we instead use the Train task from the user study as shown in Figure 5. We use the data collected in our user study to create the partner policies, in lieu of the hand-designed partners. We observe similar trends – that the modular learning framework with marginal regularization performs the best.
Block Placing Next, we study a collaborative block placing task which involves a 2× 2 goal-grid, with a single red block and a single blue block on the grid. The goal is to reconstruct the goal-grid from a blank working-grid, with partner 1 (P1) only controlling a red block and partner 2 (P2) only controlling a blue block. The partners take turns moving their blocks on the working-grid, and can see each other’s actions on the working-grid. However, only P1 can see the goal-grid configuration; P2 only receives information from the working-grid. Each game lasts 6 turns, with P1 taking the first turn. Specifically, on P1’s turn, it can move the red block to one of the four cells, remove it from the grid, or pass the turn (similarly for P2 / blue block). Blocks cannot be placed on top of each other. The partners share rewards, and earn 10 points for each correctly placed block, for a maximum of 20 points per game. P1 needs to both place its own red block correctly, and communicate the position of the blue block to P2. As such, success in this task requires a mixture of rule-dependent and convention-dependent skills.
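As a rough illustration of the scoring (not the authors' implementation), a single game on the 2 × 2 grid could be scored as below; the dictionary encoding of grids and the helper name are our own assumptions.

```python
# Hedged sketch: scoring one block-placing game on a 2x2 grid.
# A grid is encoded here as {cell_index: color}, with cells 0..3 and colors "red"/"blue";
# each block that ends the game on its goal cell is worth 10 points (max 20 per game).
def score_game(goal_grid, working_grid):
    points = 0
    for color in ("red", "blue"):
        goal_cell = next((c for c, col in goal_grid.items() if col == color), None)
        work_cell = next((c for c, col in working_grid.items() if col == color), None)
        if goal_cell is not None and goal_cell == work_cell:
            points += 10
    return points

# Example: P1 placed red correctly, P2 placed blue on the wrong cell -> 10 points.
print(score_game({0: "red", 3: "blue"}, {0: "red", 2: "blue"}))
```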
In Figures 9a and 9b we plot the results on adaptation to new hand-designed and self-play partners, where our method outperforms the baselines for non-zero values of λ. To verify that our task module is learning a good task representation, we compute the distance between the task module output and the oracle marginal (Figure 7). We see that our approach with λ = 0.5 leads to the smallest gap from the oracle marginals. Finally, similar to the contextual bandit task, we test zero-shot coordination with the same partners on new tasks in Figure 10. We tweak the task by changing the colors of target blocks, and use hand-designed partners to ensure the developed conventions persist. Again, our method performs better on the new task/partner combination than BaselineModular. We provide more details of the experimental setup in the Appendix.
Hanabi Finally, we experiment on a light version of the collaborative card game Hanabi, which has been studied in many related works in MARL. We focus on the two-player version of Hanabi, with 1 color and 5 ranks (for a maximum score of 5). Since it is not straightforward how to create a
variety of hand-designed partners, or tweak the task while maintaining the same symmetries, we use self-play partners only and omit experiments on coordinating on a new task with the same partners. The results on adaptation to new self-play partners are shown in Figure 9c.
7 CONCLUSION
We study the problem of adapting to new settings in multi-agent interactions. To develop AI agents that quickly adapt to new partners and tasks, we introduced a framework that learns rule-dependent and convention-dependent representations. Our AI agents can adapt to conventions from new partners without re-learning the full complexities of a task, and can carry over existing conventions with the same partners on new tasks. We run a study of human-human interactions that suggests human conventions persist across tasks with similar symmetries. Our experiments show improvements in adapting to new settings on a contextual bandit task, a block placing task, and Hanabi.
ACKNOWLEDGMENTS
This work was supported by NSF (#1941722, #2006388), and the DARPA HICON-LEARN project.
A PROOF OF LEMMA 1
Proof. The marginal of the best-response strategy across partners at state s of a task with reward R can be written as:

p(a^e) = E_{ρ∼θ_ρ}[f(s, a^e, ρ(s, Q^R_s))]

When the ego agent is deterministic, its best-response strategy f(s, a^e, a^p) can be rewritten as a function f′ where f′(s, a^p) = a^e if f(s, a^e, a^p) = 1. Then, we can write the likelihood of a best-response strategy from a newly sampled partner as a function of the marginal:
P_{ρ∼θ_ρ}[f′(s, ρ(s, Q^R_s)) = a^e] = p(a^e)
Loosely speaking, when the best-response strategies are not deterministic, storing only their marginals throws away information. But in the case when the strategies are deterministic, each strategy can be represented by a single action, so we are simply showing that storing their marginals does not throw away information.
B ARCHITECTURE DETAILS AND HYPERPARAMETERS
The specific size of each module in Figure 4 is shown in Table 1. We train the policy network using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with the Stable Baselines software package (Raffin et al., 2019). For the state-value function, we use a network of the same size, and combine the value predictions from the task module and the partner module by adding them together.
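As a minimal sketch of how such a PPO learner can be instantiated with Stable Baselines3 (our assumption of the package version), the snippet below is illustrative only: the hyperparameter values are placeholders standing in for Table 2, and the environment id is a stand-in for our task wrappers.

```python
# Hedged sketch: instantiating and training a PPO policy with Stable Baselines3.
# "CartPole-v1" is only a stand-in; the actual tasks (bandit, block placing, Hanabi)
# would be wrapped as single-agent environments with the partner policy fixed inside.
from stable_baselines3 import PPO

model = PPO(
    "MlpPolicy",
    "CartPole-v1",        # placeholder environment id
    learning_rate=3e-4,   # illustrative values; see Table 2 for the ones used in the paper
    n_steps=128,
    gamma=0.99,
    verbose=0,
)
model.learn(total_timesteps=10_000)
```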
C EXPERIMENTAL SETUP
Code for the experiments in our paper is available at https://github.com/Stanford-ILIAD/Conventions-ModularPolicy.
C.1 CONTEXTUAL BANDIT
Task setup We use a contextual bandit with 4 contexts and 8 arms. We label the contexts 1, . . . , 4 and the arms 1, . . . , 8. In the initial setup, at context i the optimal actions (green boxes) are arms i
and i + 4. So, all 4 contexts have symmetries, since there are multiple equally optimal actions. We experiment with different versions of the task by changing the number of contexts with symmetries, from 4 down to 1. To get m contexts with symmetries, we set the optimal action of a context i to only arm i, if i > m. So, contexts [1, . . . , m] have two equally optimal actions, and contexts [m + 1, . . . , 4] have only one optimal action. Naturally, when there are fewer contexts with symmetries, coordination is easier.
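For concreteness, the prize table for an "Arms m" variant can be generated as in the sketch below (our own reconstruction, using 0-based indices rather than the 1-based labels above):

```python
import numpy as np

def build_prizes(num_contexts=4, num_arms=8, m=4):
    """Prize table for the 'Arms m' variant: context i has optimal arms i and i + 4
    when i < m (a symmetry), and only arm i otherwise (0-indexed reconstruction)."""
    prizes = np.zeros((num_contexts, num_arms))
    for i in range(num_contexts):
        prizes[i, i] = 1.0
        if i < m:  # this context keeps two equally good arms
            prizes[i, i + num_arms // 2] = 1.0
    return prizes

print(build_prizes(m=2))  # contexts 0 and 1 keep symmetries; contexts 2 and 3 do not
```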
Partner setup We use both partners obtained by self-play and hand-designed partners. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. For the hand-designed partners, the partners were designed to pick any one of the equally optimal actions and to stick to the same choice. The distribution over choices across the hand-designed partners was made to be uniform over the optimal actions. We used a set of 10 self-play partners and 4 hand-designed partners for training, and a set of the same size for testing.
Tweaked task For coordinating on a new task (Figure 10), we change the optimal action for contexts where there are no symmetries. As such, this is only possible when the number of symmetries m is less than 4. For the new task, for a context i where i > m, we modify the optimal action from arm i to arm i + 4. The contexts with symmetries remain unchanged. For the hand-designed partners, we ensured that they stuck to the same choices (conventions) as the initial task in the contexts with symmetries.
C.2 BLOCK PLACING
Task setup We used the task setup described in Section 6. In our experiments, the ego agent is P1 and the partner agents are P2.
Partner setup We use both partners obtained by self-play and hand-designed partners. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. For the hand-designed partners, we design the partners to interpret the first action (turn 1) of the ego agent based on a fixed permutation of the block positions. For example, a possible partner may always place the blue block top-right whenever the ego agent places the red block top-left on turn 1. We used a set of 6 self-play partners and 3 hand-designed partners for training, and a set of 6 self-play partners and 6 hand-designed partners for testing.
Tweaked task For coordinating on a new task (Figure 10), we change the reward function by reversing the interpretation of the red and white cells in the goal grid. In other words, P1 has to place the red block on the working grid in one of the two positions that is a white cell on the goal grid. P2’s goal is unchanged; P2 still has to place the blue block where the blue block appears in the goal grid, so the relevant symmetries remain unchanged. For the hand-designed partners, we ensured that they stuck to the same permutations (conventions) as the initial task when interpreting the ego agent’s action on turn 1.
C.3 HANABI
Task setup We used the Hanabi Learning Environment package (Bard et al., 2020), with the following configuration: 1 color, 5 ranks, 2 players, hand size 2, 3 information tokens, and 3 life tokens. The maximum score is 5 points.
Partner setup We use only partners obtained by self-play. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. We used a set of 4 self-play partners for training, and a set of the same size for testing.
C.4 ADDITIONAL PLOTS
C.5 USER STUDY INDIVIDUAL RESULTS
Here we provide more detailed results from our user study. For each context and task we report the joint actions made by each pair of users on their first try and on their fifth try. | 1. What is the focus of the paper regarding multi-agent interactions?
2. What are the key contributions of the proposed model?
3. What are the issues with the current model description, according to the reviewer?
4. How can the authors improve the clarity of their presentation?
5. What additional information would be helpful to provide regarding the human experiment? | Review | Review
The paper proposes an interesting model to study multi-agent interactions in uncertain environments. In a nutshell, the proposed model consists of an MDP (finite states, actions, and horizon) and two players playing simultaneously (the turn-by-turn model can be subsumed by the simultaneous-move model, as stated in the paper). In the absence of learning, the finite MDP has an optimal solution. The key contribution of the paper is to focus on instances where the optimal solution is not unique. In a two-player model, this requires symmetry breaking in order for a sample path of the MDP to track the optimal trajectory.
The above applies when the MDP and rewards are all well known. The setting in the paper concerns a learning situation where some or all of the components of the MDP are unknown. In this case, the agents must learn the MDP while breaking symmetry (coordinating) to converge on a sample path closest to an optimal one. The problem is very interesting, and the authors propose a nice formulation to study these questions.
I have a few high-level suggestions to the authors.
The model description is mathematically imprecise. Are the agents aiming to optimize the total reward collected, in the presence of an unknown model and partner? What information about the partners is known to the agent? (Is the agent distribution common information?) Are the partners assumed to know the underlying MDP, or are they also simultaneously learning? If the partners are also simultaneously learning, is their "state of knowledge" at the beginning known to the ego agent?
I understand that having a robust solution to all of the above problems is perhaps too hard. Nevertheless, clarifying the precise setup mathematically will greatly aid the reader.
Regarding the human experiment: were the users able to communicate with each other in any way? Were they total strangers to each other, or known acquaintances? Clearly specifying the "conditions of the experiment" can help put the results in context.
Overall, I believe the paper is attempting to study an interesting (and hard) problem. But my (low) rating is based on the clarity of the presentation. |
ICLR | Title
On the Critical Role of Conventions in Adaptive Human-AI Collaboration
Abstract
Humans can quickly adapt to new partners in collaborative tasks (e.g. playing basketball), because they understand which fundamental skills of the task (e.g. how to dribble, how to shoot) carry over across new partners. Humans can also quickly adapt to similar tasks with the same partners by carrying over conventions that they have developed (e.g. a raised hand signals a pass), without learning to coordinate from scratch. To collaborate seamlessly with humans, AI agents should adapt quickly to new partners and new tasks as well. However, current approaches have not attempted to distinguish between the complexities intrinsic to a task and the conventions used by a partner, and more generally there has been little focus on leveraging conventions for adapting to new settings. In this work, we propose a learning framework that teases apart rule-dependent representation from convention-dependent representation in a principled way. We show that, under some assumptions, our rule-dependent representation is a sufficient statistic of the distribution over best-response strategies across partners. Using this separation of representations, our agents are able to adapt quickly to new partners, and to coordinate with old partners on new tasks in a zero-shot manner. We experimentally validate our approach on three collaborative tasks varying in complexity: a contextual multi-armed bandit, a block placing task, and the card game Hanabi.
1 INTRODUCTION
Humans collaborate well together in complex tasks by adapting to each other through repeated interactions. What emerges from these repeated interactions is shared knowledge about the interaction history. We intuitively refer to this shared knowledge as conventions. Convention formation helps explain why teammates collaborate better than groups of strangers, and why friends develop lingo incomprehensible to outsiders. The notion of conventions has been studied in language (Clark & Wilkes-Gibbs, 1986; Clark, 1996; Hawkins et al., 2017; Khani et al., 2018) and also alluded to in more general multiagent collaborative tasks (Boutilier, 1996; Stone et al., 2010; Foerster et al., 2019; Carroll et al., 2019; Lerer & Peysakhovich, 2019; Hu et al., 2020). For example, Foerster et al. (2019) trained agents to play the card game Hanabi, and noted the emergent convention that “hinting for red or yellow indicates that the newest card of the other player is playable”.
One established characterization of a convention that is commonly used (Boutilier, 1996; Hawkins et al., 2017; Lerer & Peysakhovich, 2019) is an arbitrary solution to a recurring coordination problem (Lewis, 1969). A convention is thus one of many possible solutions that a group of partners happens to converge to. This is in contrast to problem solutions that are enforced by the rule constraints, and would have arisen no matter how the partners collaborated and what behavior they converged to. Success in a collaborative task typically involves learning both types of knowledge, which we will refer to as convention-dependent and rule-dependent behavior. The distinction between these two types of behavior has been studied extensively in the linguistic literature (Franke et al.; Brochhagen, 2020). In this work, we focus on the less-explored setting of implicit communication, and provide a concrete approach to learn representations for these two different types of behavior.
In the context of multi-agent or human-AI collaboration, we would like our AI agents to adapt quickly to new partners. AI agents should be able to flexibly adjust their partner-specific convention-
dependent behavior while reusing the same rule-dependent behavior to simplify the learning problem – just like how humans quickly adapt when playing basketball with a new friend, without the need to re-learn the rules of the game. We would also like our AI agents to coordinate well on similar tasks when paired with the same partners – just like how humans can coordinate better when playing a new sport but with an old friend.
Although many existing works similarly recognize conventions as an important factor of successful collaboration, they do not focus on separating conventions from rules of the task. This means that adapting to a new partner can be as difficult as learning a new task altogether. For example, existing techniques that emphasize modeling the partner’s policy, such as theory of mind (Simon, 1995; Baker et al., 2017; Brooks & Szafir, 2019) or multi-agent reinforcement learning (Foerster et al., 2018), attempt to model everything about the agent’s belief of the partner’s state and policies. Such belief modeling approaches very quickly become computationally intractable, as opposed to solely focusing on the relevant conventions developed with a partner.
To address the challenges above, we propose a framework that explicitly separates convention-dependent representations and rule-dependent representations through repeated interactions with multiple partners. After many rounds of solving a task (e.g. playing basketball) with different partners, an AI agent can learn to distinguish between conventions formed with a specific partner (e.g. pointing down signals bounce pass) and intrinsic complexities of the task (e.g. dribbling). This enables us to leverage the representations separately for fast adaptation to new interactive settings.
In the rest of this paper, we formalize the problem setting, and describe the underlying model of partners and tasks. Next, we present our framework for learning separations between rule and convention representations. We show that, under some conditions, our rule representation learns a sufficient statistic of the distribution over best-response strategies. We then run a study on humanhuman interactions to test if our hypothesis – that partners can carry over the same conventions across tasks – indeed holds for human subjects. Finally, we show the merits of our method on 3 collaborative tasks: contextual bandit, block placing, and a small scale version of Hanabi (Bard et al., 2020).
2 RELATED WORK
Convention formation has been studied under the form of iterated reference games (Hawkins et al., 2017; 2019), and language emergence (Mordatch & Abbeel, 2018; Lazaridou et al., 2017). In these works, partners learn to reference objects more efficiently by forming conventions through repeated interactions. But in these tasks, there is little intrinsic task difficulty beyond breaking symmetries.
In multi-agent reinforcement learning, techniques such as self-play, cross-training, or opponent learning (Foerster et al., 2018) have been used to observe emergent behavior in complex, physical settings (Nikolaidis & Shah, 2013; Liu et al., 2019; Baker et al., 2020). In addition, convention formation has been shown in tasks like Hanabi (Foerster et al., 2019; Hu & Foerster, 2019), Overcooked (Carroll et al., 2019; Wang et al., 2020), and negotiation tasks (Cao et al., 2018), where agents learn both how to solve a task and how to coordinate with a partner. But, these works qualitatively analyze emergent conventions through post-hoc inspection of the learned policies, and do not learn representations
that separate conventions from rule-dependent behavior. This makes it difficult to leverage learned conventions when adapting to new partners. More recently, an approach was proposed to design agents that avoid relying on conventions altogether; instead, the agents aim for unambiguous but potentially sub-optimal strategies (Hu et al., 2020).
Other approaches for studying conventions include game theory (Lerer & Peysakhovich, 2019) and theory of mind (Simon, 1995; Rabinowitz et al., 2018). Pragmatic reasoning (Goodman & Frank, 2016) is another framework that aims to estimate beliefs over the partner's states and policies, but all these frameworks that rely on modelling beliefs quickly become intractable. Instead of approximating these belief modelling frameworks (Monroe & Potts, 2015), we focus on learning representations for rules and conventions. Our work shares similarities with the paradigms of meta-learning (Schmidhuber, 1987; Finn et al., 2017) or multi-task learning (Caruana, 1997), if we view collaboration with different partners as new but related tasks. However, we are also interested in how conventions persist across tasks with similar symmetries.
More related to our work is perhaps modular policy networks (Devin et al., 2017), which learns policy modules and composes them together to transfer across robot arms with different degrees of freedom and across different tasks. However, their approach relies on input cleanly split between background observations of the task, and robot’s internal observations. We do not assume such a split in the input; our approach tries to learn the separation between rules and partners with one input channel.
3 PRELIMINARIES
We begin by formally defining our problem setting as a two-player Markov Decision Process (MDP).
Two-Player MDP We consider a two-agent MDP with identical payoffs, which is a tuple {S, {Ae, Ap}, P,R}. S is a set of states, Ae is a set of actions for agent e, Ap is a set of actions for agent p. In general, e represents the ego agent (that we control), and p represents agents who partner with e. P : S × Ae × Ap × S → [0, 1] is the probability of reaching a state given the current state and actions of all agents, and R : S × S → R is the real-valued reward function. Since we are interested in repeated interactions, we consider finite-horizon MDPs. A policy π is a stochastic mapping from a state to an action. For our setting, our policy π = (πe, πp) has two parts: πe : S → Ae and πp : S → Ap, mapping the state into actions for agent e and p respectively. We also consider collaborative tasks that are partially observable in our experiments. In addition, some tasks are turn-based, which can be handled by including the turn information as part of the state, and ignoring the other player’s actions. Here, we focus on tasks with discrete state and action spaces.
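As a lightweight illustration of this tuple (a sketch of the implied interface with hypothetical type choices, not code from the paper):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State, Action = int, int  # discrete state and action spaces

@dataclass
class TwoPlayerMDP:
    states: List[State]
    ego_actions: List[Action]        # A^e
    partner_actions: List[Action]    # A^p
    # P(s' | s, a_e, a_p): transition probabilities
    transition: Dict[Tuple[State, Action, Action], Dict[State, float]]
    reward: Callable[[State, State], float]  # shared reward R(s, s')
    horizon: int                     # finite horizon, for repeated interactions
```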
Running Example: Collaborative Contextual Bandit We describe a collaborative task to ground our discussion around adapting to new partners, and adapting to new tasks with old partners. Consider a contextual bandit setting with 2 contexts, 4 actions, and a prize for each action. The two players each independently pick an arm to pull, and they score prize(a) points if they both picked the same arm a; otherwise, they score 0 points. An example is depicted in Figure 2, where green boxes represent arms with prize 1, the rest have prize 0, and red and blue circles show the player’s actions.
In Task 1 of Figure 2, context A only has one good arm a1 while the context B has two good arms a2 and a4. After repeated rounds of playing, two agents can eventually converge to coordinating on a convention that chooses one of the two good arms in context B (e.g. selecting the leftmost good arm a2). When the task shifts slightly as shown in Task 2 but context B remains the same across the two
tasks, we can reasonably expect the partners to adhere to the same convention they developed for context B of Task 1 when playing context B of Task 2.
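For concreteness, the shared payoff of this running example fits in a few lines; the table below is one possible encoding of Task 1 in Figure 2 (the arm labels and values are our own reconstruction).

```python
# Hedged sketch: both players score prize(a) only when they pick the same arm a.
PRIZE = {
    "A": {"a1": 1},           # context A: one good arm
    "B": {"a2": 1, "a4": 1},  # context B: two equally good arms (a symmetry)
}

def reward(context, ego_arm, partner_arm):
    if ego_arm != partner_arm:
        return 0
    return PRIZE[context].get(ego_arm, 0)

print(reward("B", "a2", "a2"))  # 1: partners coordinated on a good arm
print(reward("B", "a2", "a4"))  # 0: both arms are good, but the players mis-coordinated
```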
There are two axes of generalization at play. First, for a fixed task we would like to learn the underlying structure of the task (e.g. green arm locations) to quickly adapt to a new partner when playing the same task. Second, for a fixed partner we would like to keep track of developed conventions to quickly coordinate with this partner on a new task. In the next sections, we define the notion of a new task and a new partner, and propose a framework to effectively generalize across both dimensions.
4 MODEL OF PARTNERS AND TASKS
We consider a family of tasks that share the same state space, action space, and transition dynamics (all of which we refer to as the domain), but can differ in the reward function of the underlying MDP. We also consider a family of partners that our ego agent would like to coordinate with. The underlying model governing the actions taken by a partner at a state of a given task can be seen in Figure 3. For a new task, a new reward function is sampled from θ_R, while the domain of the MDP is fixed. For a new partner, we sample from θ_ρ a new function ρ, which maps the Q-function at a state of the task to an action. We refer to ρ as the convention of a partner. In other words, the action a^i_{ts} of a partner i at state s of task t depends on the partner's conventions and (indirectly through the Q-function at state s) the reward of the task. θ_ρ is the distribution over conventions and can correspond to, for example, the distribution over emergent conventions from AI agents trained via self-play, or can be derived from human partners.
More formally, for a fixed domain let Q^R_s(a^p) = max_{a^e} Q^R(s, a^e, a^p), where Q^R is the optimal Q-function for the two-player MDP with reward R (Boutilier, 1996; Oliehoek et al., 2008). In other words, Q^R_s is the Q-function from the partner's perspective at state s in the task with reward R, assuming best response by the ego agent. Then, the convention of a partner ρ : S × R^{|A^p|} → A^p determines the partner's action at state s, given by ρ^i(s, Q^R_s). For example, we might expect the distribution of actions across different partners at a state s to follow Boltzmann rationality (Ziebart et al., 2008): E_{ρ∼θ_ρ}[1[ρ(s, Q^R_s) = a^p]] ∝ exp(Q^R_s(a^p)). Lastly, we assume that behavior at different states is uncorrelated: a choice of action at one state tells us nothing about actions at other states.
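Under this Boltzmann-rationality example, drawing one partner's choice at a state could look like the following numpy sketch (the Q-vector and temperature below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_convention_action(q_s, temperature=1.0):
    """Sample a partner action at one state with probability proportional to exp(Q^R_s(a^p))."""
    logits = np.asarray(q_s, dtype=float) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(q_s), p=probs)

# Two equally good arms (indices 1 and 3): different sampled partners may commit to
# different conventions at this state, while states with a single clear optimum will not vary.
q_s = [0.0, 5.0, 0.0, 5.0]
print([sample_convention_action(q_s) for _ in range(5)])
```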
Given this formulation, we can hope to learn the underlying structure of a task by playing with a number of partners on the same task. For example, if many sampled partners all take the same action a^p at a state s, it is likely that Q^R_s(a^p) is much higher than that of the second best action, i.e., the optimal action is dependent on the rules of the task as opposed to the arbitrariness of partner strategies. Therefore a new partner will likely take action a^p as well. Additionally, when coordinating with the same partner across different tasks, we can expect to see developed conventions persist across states with similar Q-functions. In particular, if Q^{R1}_s = Q^{R2}_s for two different tasks with reward functions R1 and R2, then ρ^i(s, Q^{R1}_s) = ρ^i(s, Q^{R2}_s), so a partner will take the same action at state s across the two tasks (e.g. context B in Figure 2 across Task 1 and 2).
We note that the roles of partners and tasks in our model are asymmetrical, assuming rational partners. A task can completely determine the actions of all partners at states with one good action (e.g. context A in Figure 2), whereas a partner cannot blindly pick the same action across all tasks. This asymmetry is reflected in our learning framework and our setup of adapting to new partners and new tasks.
5 LEARNING RULES AND CONVENTIONS
With the goal of adapting to new partners and tasks in mind, we propose a framework aimed at separating the rule representations associated with a task from the convention representations
associated with a partner. At training time, we assume access to a task and a set of partners sampled from the same partner distribution θρ. When adapting to a new partner sampled from θρ, we would like to learn a new convention representation but keep the learned rule representation. When adapting to a new task with the original partners, we would like the opposite: to learn a new rule representation but keep the learned convention representation. In our setup, the ego agent knows the identity of the tasks and partners, and the goal is to maximize the total reward with a new task and partner (the combination of which may be new to the ego agent). Naturally, this points us towards a modular architecture that enables integrating new combinations of partner and task representations.
We propose the architecture shown in Figure 4, where we learn to play with each of the training partners via reinforcement learning. Each rectangular module represents a multi-layer perceptron, and each box with grey bars represents a distribution over actions in the action space of the MDP. When training with multiple partners on the same task, the policy network uses a single shared task module g^t, and uses one partner module g^p_i for each partner i. The task module takes the state observations s as input, and outputs latent variables z and an action distribution g^t(a|s). Then, each partner module takes z as input, and outputs an action distribution g^p_i(a|z). Here, g^p_i(a|z) does not represent partner i's action distribution, but rather represents our ego agent's action distribution in response to partner i. The policy π_i of the policy network for partner i is set to be proportional to the product of the action distributions.
π_i(a|s) = (1/Z_i) · g^t(a|s) · g^p_i(a|z)
As it stands, many possible combinations of task and partner module outputs could lead to optimal action distributions π_i(a|s) for all i. To encourage the task/partner modules to learn the rule/convention representations respectively, we include a regularization term that pushes g^t(a|s) to be the marginal best-response strategy over partners at state s. In practice, this amounts to minimizing the Wasserstein distance between the task module's output and the average best-response strategies π_i:

D(s) = ∑_{a∈A} | g^t(a|s) − (1/n) ∑_i π_i(a|s) |

By pushing the task module g^t to learn the marginal best-response strategy, we can cleanly capture the rule-dependent representations of the task, to be used when adapting to a new partner. In fact, under some assumptions, we can show that g^t is learning the optimal representation, as it is a sufficient statistic of the distribution over best-response strategies across possible partners.
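A compact PyTorch-style sketch of the task/partner modules, the product policy π_i, and the per-state distance D(s) is given below; the module sizes, names, and the ℓ1 form of the distance follow the description above, but everything here is an illustrative reconstruction rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskModule(nn.Module):
    def __init__(self, obs_dim, z_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.z_head = nn.Linear(hidden, z_dim)        # latent z passed to the partner module
        self.act_head = nn.Linear(hidden, n_actions)  # g^t(a|s)

    def forward(self, s):
        h = self.body(s)
        return self.z_head(h), F.softmax(self.act_head(h), dim=-1)

class PartnerModule(nn.Module):
    def __init__(self, z_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, z):
        return F.softmax(self.net(z), dim=-1)         # g^p_i(a|z)

def combined_policy(task, partner, s):
    z, g_t = task(s)
    g_p = partner(z)
    prod = g_t * g_p                                  # pi_i(a|s) proportional to g^t(a|s) * g^p_i(a|z)
    return prod / prod.sum(dim=-1, keepdim=True)

def marginal_regularizer(task, partners, s):
    """D(s): l1 distance between g^t(.|s) and the average policy over the training partners."""
    _, g_t = task(s)
    pis = torch.stack([combined_policy(task, p, s) for p in partners])
    return (g_t - pis.mean(dim=0)).abs().sum(dim=-1).mean()
```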
In particular, for a fixed task let f(s, a^e, a^p) represent the probability of the ego agent taking action a^e using its best-response strategy to partner action a^p at state s, so that ∑_{a^e} f(s, a^e, a^p) = 1. We say the ego agent is deterministic if it can only take on deterministic strategies: ∀ s, a^p ∃ a^e : f(s, a^e, a^p) = 1.

Lemma 1. For a deterministic ego agent, the marginal best-response strategy across partners is a sufficient statistic of the distribution of best-response strategies across partners.
Next, we lay out the learning procedure in more detail in Algorithm 1. We use the task module g^t and the partner module g^p_i when playing with partner i. We push the task module to learn the marginal best-response action by adding the loss term D on top of a standard policy gradient (PG) loss term.

Algorithm 1: Learning Separate Representations for Partners and Tasks
Input: An MDP M, n partners ρ_1, . . . , ρ_n ∼ θ_ρ
Output: Policy network modules for adapting to new partners from θ_ρ and new tasks
1: Initialize the task module g^t and n partner modules g^p_1, . . . , g^p_n
2: G ← {g^t, g^p_1, . . . , g^p_n}; T ← number of iterations
3: for j ← 1, . . . , T do
4:     for i ← 1, . . . , n do
5:         Collect rollouts τ = (s, a, r) using {g^t, g^p_i} with partner ρ_i on M
6:         Update G with loss L_PG(g^t, g^p_i, τ) + λ E_s[D(s)]
Return: G
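In code, the loop of Algorithm 1 could be rendered roughly as below; this toy version reuses the TaskModule, PartnerModule, combined_policy, and marginal_regularizer sketches above, substitutes a simple REINFORCE-style surrogate for the PPO loss used in practice, and abstracts rollout collection into a user-supplied callable, so it is an outline rather than the actual training code.

```python
import torch

def train_modules(rollout_fns, obs_dim, z_dim, n_actions, lam=0.5, iters=1000):
    """Toy rendering of Algorithm 1. rollout_fns[i]() is assumed to return float states
    (batch, obs_dim), long actions (batch,), and float returns (batch,) collected with partner i."""
    task = TaskModule(obs_dim, z_dim, n_actions)
    partners = [PartnerModule(z_dim, n_actions) for _ in rollout_fns]
    params = list(task.parameters()) + [p for m in partners for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)

    for _ in range(iters):
        for i, rollout_fn in enumerate(rollout_fns):
            states, actions, returns = rollout_fn()             # play M with partner rho_i
            pi = combined_policy(task, partners[i], states)     # (batch, n_actions)
            logp = torch.log(pi.gather(1, actions.unsqueeze(1)).squeeze(1) + 1e-8)
            pg_loss = -(logp * returns).mean()                  # policy-gradient surrogate
            loss = pg_loss + lam * marginal_regularizer(task, partners, states)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return task, partners
```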
Adapting to new partner When adapting to a new partner, we reuse the task module g^t along with a reinitialized partner module, and train both modules. At states with only one optimal action a* regardless of the partner's convention, the marginal best-response strategy will concentrate on a*. As such, the task module g^t will immediately push the ego agent to choose a*, improving adaptation time. In contrast, at states with many possible optimal actions depending on the partner's conventions, the marginal best-response strategy will spread the action distribution among the possible optimal actions, allowing the ego agent to explore amongst the optimal options with the new partner.
Coordinating on a new task When faced with a new task, we would like to recognize the parts of the task where conventions with old partners on an old task carry over. In particular, in states for which the joint Q-function has not changed, our partner models will take the same actions as before, and our ego agent can employ the same response as before. With this in mind, suppose we have trained a task module g^t and a set of 2n partner modules g^p_1, . . . , g^p_{2n} with a set of partners on an old task. We can learn to coordinate on a new task with partners 1 to n by fine-tuning the task module g^t, paired with partner modules g^p_1, . . . , g^p_n with frozen parameters. At states where the joint Q-function has not changed, the same latent variable outputs z will work well for all partners, so the task module will output the same z and the actions of the ego agent will not change. Then, we can coordinate with partners n + 1 to 2n in a zero-shot manner by pairing g^t with partner modules g^p_{n+1}, . . . , g^p_{2n}.
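Both adaptation modes then amount to choosing which module is re-initialized, trained, or frozen; a sketch (again reusing the module classes above, with hypothetical helper names) is:

```python
import torch

# New partner, same task: keep the trained task module and learn a fresh partner module.
def adapt_to_new_partner(task, z_dim, n_actions):
    new_partner = PartnerModule(z_dim, n_actions)  # re-initialized convention module
    trainable = list(task.parameters()) + list(new_partner.parameters())
    return new_partner, torch.optim.Adam(trainable, lr=1e-3)

# Same partners, new task: fine-tune the task module while the old partner modules stay frozen.
def adapt_to_new_task(task, old_partners):
    for module in old_partners:
        for p in module.parameters():
            p.requires_grad_(False)                # developed conventions are kept fixed
    return torch.optim.Adam(task.parameters(), lr=1e-3)
```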
5.1 UNDERSTANDING CONVENTIONS
When adapting to a new task with the same partner, we are relying on the hypothesis that our partners will carry over the same conventions developed on an earlier task, at states where the Q-function has not changed. Here, we would like to test this hypothesis and study if and when conventions persist across tasks, through a study of human-human interactions in the collaborative contextual bandit task.
User Study: Conventions in Human-Human Interactions. We conducted a study with 23 pairs of human partners (46 subjects), using the collaborative contextual bandit task from Section 3. The study was conducted via an online interface, with partners in separate rooms. The actions of partners are revealed to each other at the end of each try, and the partners had no other form of communication.
- Independent Variables: We vary the prize of the arms at different contexts, i.e., which arms are good to coordinate on. Pairs of partners were given 5 tries to coordinate on 3 different contexts (as shown
in Fig. 5). We then ask the same pair of partners to coordinate on the Test task with 3 new contexts.
- Dependent Measures: We measure the score of each pair's first try at each context. The maximum score in each run for each context is 1, corresponding to the two partners picking the same good arm.
- Hypothesis: Our hypothesis is that partners can coordinate well on the test task by carrying over conventions developed on the train task.
- Results: On the left of Figure 5 we plot the average score of the first try (zero-shot) of our participants at solving each context in each task. In the Train task (grey bars), the partner pairs are coordinating for the first time. As expected, they generally had no trouble coordinating on Train-A (since there is only 1 good option), but scored lower on Train-B and Train-C (since there are 2 good options each).
Immediately following the 5 trials on the training task, the partner pairs coordinated on each context of the Test task (orange bars in Fig. 5), to see if they could leverage the conventions they had developed on a new task. On Test-A, they did well, since there is only one good option. However, they coordinated much better on Test-C than on Test-B, even though both contexts have two good options. The main difference between the two contexts is that Test-C shares the same set of optimal actions (a2 and a4) as Train-C, whereas Test-B does not share the same set of optimal actions as Train-B.
Overall, this study suggests that human partners can successfully carry over conventions developed through repeated interactions, and that they coordinate better across tasks at states where the set of optimal actions is similar (e.g. the larger gain on context C, whose optimal actions are shared across the train and test tasks).
6 EXPERIMENTS
We experiment with our approach on three coordination tasks: collaborative contextual bandit, collaborative block placing, and Hanabi. We show results on adaptation to new partners for a fixed task, and zero-shot coordination with the same partners on a new task. We compare with 1) a baseline (BaselineAgg) that learns a shared policy network to play with all training partners, using the average of the gradients with each partner, 2) a baseline (BaselineModular) that similarly uses a modular policy approach with separate modules for each partner, but does not explicitly represent the marginal best-response strategy, and 3) First-Order MAML (Nichol et al., 2018). For our method, we vary λ from 0.0 to 0.5: a higher value pushes the task module output to more closely match the marginal action distribution. We also test the use of a low-dimensional z (the interface between the task and partner module). In general, we find that our method with regularization performs the best, and that it is unclear if using a low-dimensional z is helpful. The partners in our experiments were either generated via self-play or were hand-designed to use varying conventions.
Contextual Bandit We study the collaborative contextual bandit task described in Section 3, using 4 contexts and 8 arms. We study variations of the task by altering the number of contexts with more than one good option (i.e. symmetries). We write “Arms m” to refer to a task having m contexts with symmetries – coordination should be easier for tasks having fewer contexts with symmetries.
In Figure 6a-c we show results on adaptation to new hand-designed partners (Figure 11 for self-play partners), varying the number of contexts with symmetries. We also plot the green oracle curve, by using our learning framework along with oracle best-response marginals (assuming uniform distribution over optimal actions across partners). To see if our method learns the correct marginals, we measure the Wasserstein distance between the learned marginals and the oracle marginals, in Figure 7. Using λ = 0.5 produces task module outputs that closely match the oracle marginals. To test zero-shot coordination with the same partners on new tasks, we tweak the task by altering contexts with only one good action, keeping the contexts with symmetries the same (similar to the change in Figure 2). We use hand-designed partners to make sure that partners use the same conventions at unchanged contexts. In Figure 10 we see that our method outperforms BaselineModular.
We include an additional plot in Figure 6d, where we instead use the Train task from the user study as shown in Figure 5. We use the data collected in our user study to create the partner policies, in lieu of the hand-designed partners. We observe similar trends – that the modular learning framework with marginal regularization performs the best.
Block Placing Next, we study a collaborative block placing task which involves a 2× 2 goal-grid, with a single red block and a single blue block on the grid. The goal is to reconstruct the goal-grid from a blank working-grid, with partner 1 (P1) only controlling a red block and partner 2 (P2) only controlling a blue block. The partners take turns moving their blocks on the working-grid, and can see each other’s actions on the working-grid. However, only P1 can see the goal-grid configuration; P2 only receives information from the working-grid. Each game lasts 6 turns, with P1 taking the first turn. Specifically, on P1’s turn, it can move the red block to one of the four cells, remove it from the grid, or pass the turn (similarly for P2 / blue block). Blocks cannot be placed on top of each other. The partners share rewards, and earn 10 points for each correctly placed block, for a maximum of 20 points per game. P1 needs to both place its own red block correctly, and communicate the position of the blue block to P2. As such, success in this task requires a mixture of rule-dependent and convention-dependent skills.
In Figures 9a and 9b we plot the results on adaptation to new hand-designed and self-play partners, where our method outperforms the baselines for non-zero values of λ. To verify that our task module is learning a good task representation, we compute the distance between the task module output and the oracle marginal (Figure 7). We see that our approach with λ = 0.5 leads to the smallest gap from the oracle marginals. Finally, similar to the contextual bandit task, we test zero-shot coordination with the same partners on new tasks in Figure 10. We tweak the task by changing the colors of target blocks, and use hand-designed partners to ensure the developed conventions persist. Again, our method performs better on the new task/partner combination than BaselineModular. We provide more details of the experimental setup in the Appendix.
Hanabi Finally, we experiment on a light version of the collaborative card game Hanabi, which has been studied in many related works in MARL. We focus on the two-player version of Hanabi, with 1 color and 5 ranks (for a maximum score of 5). Since it is not straightforward how to create a
variety of hand-designed partners, or tweak the task while maintaining the same symmetries, we use self-play partners only and omit experiments on coordinating on a new task with the same partners. The results on adaptation to new self-play partners are shown in Figure 9c.
7 CONCLUSION
We study the problem of adapting to new settings in multi-agent interactions. To develop AI agents that quickly adapt to new partners and tasks, we introduced a framework that learns rule-dependent and convention-dependent representations. Our AI agents can adapt to conventions from new partners without re-learning the full complexities of a task, and can carry over existing conventions with the same partners on new tasks. We run a study of human-human interactions that suggests human conventions persist across tasks with similar symmetries. Our experiments show improvements in adapting to new settings on a contextual bandit task, a block placing task, and Hanabi.
ACKNOWLEDGMENTS
This work was supported by NSF (#1941722, #2006388), and the DARPA HICON-LEARN project.
A PROOF OF LEMMA 1
Proof. The marginal of the best-response strategy across partners at state s of a task with reward R can be written as:

p(a^e) = E_{ρ∼θ_ρ}[f(s, a^e, ρ(s, Q^R_s))]

When the ego agent is deterministic, its best-response strategy f(s, a^e, a^p) can be rewritten as a function f′ where f′(s, a^p) = a^e if f(s, a^e, a^p) = 1. Then, we can write the likelihood of a best-response strategy from a newly sampled partner as a function of the marginal:
P_{ρ∼θ_ρ}[f′(s, ρ(s, Q^R_s)) = a^e] = p(a^e)
Loosely speaking, when the best-response strategies are not deterministic, storing only their marginals throws away information. But in the case when the strategies are deterministic, each strategy can be represented by a single action, so we are simply showing that storing their marginals does not throw away information.
B ARCHITECTURE DETAILS AND HYPERPARAMETERS
The specific size of each module in Figure 4 is shown in Table 1. We train the policy network using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with the Stable Baselines software package (Raffin et al., 2019). For the state-value function, we use a network of the same size, and combine the value predictions from the task module and the partner module by adding them together.
C EXPERIMENTAL SETUP
Code for the experiments in our paper is available at https://github.com/Stanford-ILIAD/Conventions-ModularPolicy.
C.1 CONTEXTUAL BANDIT
Task setup We use a contextual bandit with 4 contexts and 8 arms. We label the contexts 1, . . . , 4 and the arms 1, . . . , 8. In the initial setup, at context i the optimal actions (green boxes) are arms i
and i + 4. So, all 4 contexts have symmetries, since there are multiple equally optimal actions. We experiment with different versions of the task by changing the number of contexts with symmetries, from 4 down to 1. To get m contexts with symmetries, we set the optimal action of a context i to only arm i, if i > m. So, contexts [1, . . . , m] have two equally optimal actions, and contexts [m + 1, . . . , 4] have only one optimal action. Naturally, when there are fewer contexts with symmetries, coordination is easier.
Partner setup We use both partners obtained by self-play and hand-designed partners. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. For the hand-designed partners, the partners were designed to pick any one of the equally optimal actions and to stick to the same choice. The distribution over choices across the hand-designed partners was made to be uniform over the optimal actions. We used a set of 10 self-play partners and 4 hand-designed partners for training, and a set of the same size for testing.
Tweaked task For coordinating on a new task (Figure 10), we change the optimal action for contexts where there are no symmetries. As such, this is only possible when the number of symmetries m is less than 4. For the new task, for a context i where i > m, we modify the optimal action from arm i to arm i + 4. The contexts with symmetries remain unchanged. For the hand-designed partners, we ensured that they stuck to the same choices (conventions) as the initial task in the contexts with symmetries.
C.2 BLOCK PLACING
Task setup We used the task setup described in Section 6. In our experiments, the ego agent is P1 and the partner agents are P2.
Partner setup We use both partners obtained by self-play and hand-designed partners. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. For the hand-designed partners, we design the partners to interpret the first action (turn 1) of the ego agent based on a fixed permutation of the block positions. For example, a possible partner may always place the blue block top-right whenever the ego agent places the red block top-left on turn 1. We used a set of 6 self-play partners and 3 hand-designed partners for training, and a set of 6 self-play partners and 6 hand-designed partners for testing.
Tweaked task For coordinating on a new task (Figure 10), we change the reward function by reversing the interpretation of the red and white cells in the goal grid. In other words, P1 has to place the red block on the working grid in one of the two positions that is a white cell on the goal grid. P2’s goal is unchanged; P2 still has to place the blue block where the blue block appears in the goal grid, so the relevant symmetries remain unchanged. For the hand-designed partners, we ensured that they stuck to the same permutations (conventions) as the initial task when interpreting the ego agent’s action on turn 1.
C.3 HANABI
Task setup We used the Hanabi Learning Environment package (Bard et al., 2020), with the following configuration: 1 color, 5 ranks, 2 players, hand size 2, 3 information tokens, and 3 life tokens. The maximum score is 5 points.
Partner setup We use only partners obtained by self-play. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. We used a set of 4 self-play partners for training, and a set of the same size for testing.
C.4 ADDITIONAL PLOTS
C.5 USER STUDY INDIVIDUAL RESULTS
Here we provide more detailed results from our user study. For each context and task we report the joint actions made by each pair of users on their first try and on their fifth try. | 1. What is the main contribution of the paper regarding cooperative tasks and modular architecture?
2. What are the strengths and weaknesses of the proposed technique, particularly in comparison to other methods?
3. How does the reviewer assess the clarity and impact of the paper's content, especially in the experiments section?
4. Are there any suggestions or recommendations for improving the paper's quality and significance? | Review | Review
This paper makes the observation that when performing cooperative tasks with partners, there are two components to learn: how to perform the task, and how to coordinate with the partner according to conventions. Therefore, it proposes to separate these two components via a modular architecture, which learns a task-specific action distribution that attempts to marginalize out all possible partner conventions, and a series of partner-specific action distributions. The agent's own policy is the product of these two distributions. When coordinating with a new partner, the partner-specific component is learned from scratch using a pre-trained task-specific component, and vice versa.
The paper goes after an ambitious and useful problem (rapid adaptation to coordinating with novel partners on new tasks), and proposes a novel technique for doing so. A weakness is that the paper does not use reasonable baselines, and effectively compares only to ablations of their own model. Why not compare to a meta-learning technique? Or compare to some of the existing SOTA methods for Hanabi?
In general the paper is well written, but it could be made significantly clearer by providing further details on how the partner action distributions g^p_i(a|s) are obtained. Given the explanation in the beginning of Section 4, I was initially under the impression that these represented the partner's policy distribution produced by its Q-values, or perhaps the partner's actual action frequencies obtained from observing trajectories. However, it seems that the model is learned entirely end-to-end, and so these distributions actually represent how the agent's own policy should be modified according to which partner it is playing with. Is this correct? If so, this explanation should be added to the paper to make it more clear how the technique can apply beyond simple domains like the contextual bandit, in which agents must choose the same actions as the partner.
The fact that the partner module must be re-initialized and learned from scratch for each new partner is a weakness of the method. Why not learn some type of partner embedding that would enable generalization to new partners at test time that use similar conventions to training partners?
The experiments section of the paper felt rushed and lacking in explanation compared to the first 6 pages. The clarity/impact could be enhanced by explaining the experiments in more detail. In particular, the block placing task is not explained (do agents place blocks separately? do they have to place a block together at the same time?). Also, the need for "hand-designed" partners is not explained, nor is what they are hand-designed to do.
Since the paper collects a human user study on conventions, why not test how well the trained models are able to coordinate with humans? This would significantly enhance the impact of the paper.
Other suggestions:
Figure 2 caption does not include the explanation that agents must choose the same action to get a reward
A legend should be added to Figures 7, 8, and 9.
Figure 7 is interesting in that even without the Wasserstein distance penalty (when lambda=0), the Wasserstein distance to the ground truth marginal best response is still low, suggesting the model is learning some level of task-specific representation just due to the architecture. This could be explained further in the text.
Edit: I have updated my score based on the new experiments added during the rebuttal process. |
ICLR | Title
On the Critical Role of Conventions in Adaptive Human-AI Collaboration
Abstract
Humans can quickly adapt to new partners in collaborative tasks (e.g. playing basketball), because they understand which fundamental skills of the task (e.g. how to dribble, how to shoot) carry over across new partners. Humans can also quickly adapt to similar tasks with the same partners by carrying over conventions that they have developed (e.g. a raised hand signals a pass), without learning to coordinate from scratch. To collaborate seamlessly with humans, AI agents should adapt quickly to new partners and new tasks as well. However, current approaches have not attempted to distinguish between the complexities intrinsic to a task and the conventions used by a partner, and more generally there has been little focus on leveraging conventions for adapting to new settings. In this work, we propose a learning framework that teases apart rule-dependent representation from convention-dependent representation in a principled way. We show that, under some assumptions, our rule-dependent representation is a sufficient statistic of the distribution over best-response strategies across partners. Using this separation of representations, our agents are able to adapt quickly to new partners, and to coordinate with old partners on new tasks in a zero-shot manner. We experimentally validate our approach on three collaborative tasks varying in complexity: a contextual multi-armed bandit, a block placing task, and the card game Hanabi.
1 INTRODUCTION
Humans collaborate well together in complex tasks by adapting to each other through repeated interactions. What emerges from these repeated interactions is shared knowledge about the interaction history. We intuitively refer to this shared knowledge as conventions. Convention formation helps explain why teammates collaborate better than groups of strangers, and why friends develop lingo incomprehensible to outsiders. The notion of conventions has been studied in language (Clark & Wilkes-Gibbs, 1986; Clark, 1996; Hawkins et al., 2017; Khani et al., 2018) and also alluded to in more general multiagent collaborative tasks (Boutilier, 1996; Stone et al., 2010; Foerster et al., 2019; Carroll et al., 2019; Lerer & Peysakhovich, 2019; Hu et al., 2020). For example, Foerster et al. (2019) trained agents to play the card game Hanabi, and noted the emergent convention that “hinting for red or yellow indicates that the newest card of the other player is playable”.
One established characterization of a convention that is commonly used (Boutilier, 1996; Hawkins et al., 2017; Lerer & Peysakhovich, 2019) is an arbitrary solution to a recurring coordination problem (Lewis, 1969). A convention is thus one of many possible solutions that a group of partners happens to converge to. This is in contrast to problem solutions that are enforced by the rule constraints, and would have arisen no matter how the partners collaborated and what behavior they converged to. Success in a collaborative task typically involves learning both types of knowledge, which we will refer to as convention-dependent and rule-dependent behavior. The distinction between these two types of behavior has been studied extensively in the linguistic literature (Franke et al.; Brochhagen, 2020). In this work, we focus on the less-explored setting of implicit communication, and provide a concrete approach to learn representations for these two different types of behavior.
In the context of multi-agent or human-AI collaboration, we would like our AI agents to adapt quickly to new partners. AI agents should be able to flexibly adjust their partner-specific convention-
dependent behavior while reusing the same rule-dependent behavior to simplify the learning problem – just like how humans quickly adapt when playing basketball with a new friend, without the need to re-learn the rules of the game. We would also like our AI agents to coordinate well on similar tasks when paired with the same partners – just like how humans can coordinate better when playing a new sport but with an old friend.
Although many existing works similarly recognize conventions as an important factor of successful collaboration, they do not focus on separating conventions from rules of the task. This means that adapting to a new partner can be as difficult as learning a new task altogether. For example, existing techniques that emphasize modeling the partner’s policy, such as theory of mind (Simon, 1995; Baker et al., 2017; Brooks & Szafir, 2019) or multi-agent reinforcement learning (Foerster et al., 2018), attempt to model everything about the agent’s belief of the partner’s state and policies. Such belief modeling approaches very quickly become computationally intractable, as opposed to solely focusing on the relevant conventions developed with a partner.
To address the challenges above, we propose a framework that explicitly separates convention-dependent representations and rule-dependent representations through repeated interactions with multiple partners. After many rounds of solving a task (e.g. playing basketball) with different partners, an AI agent can learn to distinguish between conventions formed with a specific partner (e.g. pointing down signals bounce pass) and intrinsic complexities of the task (e.g. dribbling). This enables us to leverage the representations separately for fast adaptation to new interactive settings.
In the rest of this paper, we formalize the problem setting, and describe the underlying model of partners and tasks. Next, we present our framework for learning separations between rule and convention representations. We show that, under some conditions, our rule representation learns a sufficient statistic of the distribution over best-response strategies. We then run a study on humanhuman interactions to test if our hypothesis – that partners can carry over the same conventions across tasks – indeed holds for human subjects. Finally, we show the merits of our method on 3 collaborative tasks: contextual bandit, block placing, and a small scale version of Hanabi (Bard et al., 2020).
2 RELATED WORK
Convention formation has been studied under the form of iterated reference games (Hawkins et al., 2017; 2019), and language emergence (Mordatch & Abbeel, 2018; Lazaridou et al., 2017). In these works, partners learn to reference objects more efficiently by forming conventions through repeated interactions. But in these tasks, there is little intrinsic task difficulty beyond breaking symmetries.
In multi-agent reinforcement learning, techniques such as self-play, cross-training, or opponent learning (Foerster et al., 2018) have been used to observe emergent behavior in complex, physical settings (Nikolaidis & Shah, 2013; Liu et al., 2019; Baker et al., 2020). In addition, convention formation has been shown in tasks like Hanabi (Foerster et al., 2019; Hu & Foerster, 2019), Overcooked (Carroll et al., 2019; Wang et al., 2020), and negotiation tasks (Cao et al., 2018), where agents learn both how to solve a task and how to coordinate with a partner. But, these works qualitatively analyze emergent conventions through post-hoc inspection of the learned policies, and do not learn representations
that separate conventions from rule-dependent behavior. This makes it difficult to leverage learned conventions when adapting to new partners. More recently, an approach was proposed to design agents that avoid relying on conventions altogether; instead, the agents aim for unambiguous but potentially sub-optimal strategies (Hu et al., 2020).
Other approaches for studying conventions include game theory (Lerer & Peysakhovich, 2019) and theory of mind (Simon, 1995; Rabinowitz et al., 2018). Pragmatic reasoning (Goodman & Frank, 2016) is another framework that aims to estimate beliefs over the partner’s states and policies, but all frameworks that rely on modeling beliefs quickly become intractable. Instead of approximating these belief modeling frameworks (Monroe & Potts, 2015), we focus on learning representations for rules and conventions. Our work shares similarities with the paradigms of meta-learning (Schmidhuber, 1987; Finn et al., 2017) or multi-task learning (Caruana, 1997), if we view collaboration with different partners as new but related tasks. However, we are also interested in how conventions persist across tasks with similar symmetries.
More related to our work is perhaps modular policy networks (Devin et al., 2017), which learns policy modules and composes them together to transfer across robot arms with different degrees of freedom and across different tasks. However, their approach relies on input cleanly split between background observations of the task, and robot’s internal observations. We do not assume such a split in the input; our approach tries to learn the separation between rules and partners with one input channel.
3 PRELIMINARIES
We begin by formally defining our problem setting as a two-player Markov Decision Process (MDP).
Two-Player MDP We consider a two-agent MDP with identical payoffs, which is a tuple {S, {A_e, A_p}, P, R}. S is a set of states, A_e is a set of actions for agent e, and A_p is a set of actions for agent p. In general, e represents the ego agent (that we control), and p represents agents who partner with e. P : S × A_e × A_p × S → [0, 1] is the probability of reaching a state given the current state and actions of all agents, and R : S × S → ℝ is the real-valued reward function. Since we are interested in repeated interactions, we consider finite-horizon MDPs. A policy π is a stochastic mapping from a state to an action. For our setting, our policy π = (π_e, π_p) has two parts: π_e : S → A_e and π_p : S → A_p, mapping the state into actions for agent e and p respectively. We also consider collaborative tasks that are partially observable in our experiments. In addition, some tasks are turn-based, which can be handled by including the turn information as part of the state, and ignoring the other player’s actions. Here, we focus on tasks with discrete state and action spaces.
Running Example: Collaborative Contextual Bandit We describe a collaborative task to ground our discussion around adapting to new partners, and adapting to new tasks with old partners. Consider a contextual bandit setting with 2 contexts, 4 actions, and a prize for each action. The two players each independently pick an arm to pull, and they score prize(a) points if they both picked the same arm a; otherwise, they score 0 points. An example is depicted in Figure 2, where green boxes represent arms with prize 1, the rest have prize 0, and red and blue circles show the players’ actions.
In Task 1 of Figure 2, context A only has one good arm a1 while the context B has two good arms a2 and a4. After repeated rounds of playing, two agents can eventually converge to coordinating on a convention that chooses one of the two good arms in context B (e.g. selecting the leftmost good arm a2). When the task shifts slightly as shown in Task 2 but context B remains the same across the two
tasks, we can reasonably expect the partners to adhere to the same convention they developed for context B of Task 1 when playing context B of Task 2.
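To make the running example concrete, the following is a minimal Python sketch of the collaborative contextual bandit. The reward convention (both players score prize(a) only when they pick the same arm a) follows the description above; the class name, array layout, and the specific prize tables for Task 1 and Task 2 are illustrative assumptions rather than values taken from the released code.

import numpy as np

class CollaborativeContextualBandit:
    """Both players see the context, independently pick an arm, and score
    prize[context][arm] points only if they picked the same arm."""

    def __init__(self, prizes):
        # prizes[c][a] is the prize for arm a in context c.
        self.prizes = np.asarray(prizes)
        self.num_contexts, self.num_arms = self.prizes.shape

    def step(self, context, arm_ego, arm_partner):
        if arm_ego == arm_partner:
            return float(self.prizes[context, arm_ego])
        return 0.0

# Task 1: context A has one good arm (a1); context B has two good arms (a2, a4).
task1 = CollaborativeContextualBandit(prizes=[[1, 0, 0, 0],   # context A
                                              [0, 1, 0, 1]])  # context B
# Task 2 keeps context B identical, so conventions formed there can carry over
# (the changed context A here is a hypothetical example of such a tweak).
task2 = CollaborativeContextualBandit(prizes=[[0, 0, 1, 0],   # context A changed
                                              [0, 1, 0, 1]])  # context B unchanged

print(task1.step(context=1, arm_ego=1, arm_partner=1))  # 1.0: both chose a2
print(task1.step(context=1, arm_ego=1, arm_partner=3))  # 0.0: mis-coordination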
There are two axes of generalization at play. First, for a fixed task we would like to learn the underlying structure of the task (e.g. green arm locations) to quickly adapt to a new partner when playing the same task. Second, for a fixed partner we would like to keep track of developed conventions to quickly coordinate with this partner on a new task. In the next sections, we define the notion of a new task and a new partner, and propose a framework to effectively generalize across both dimensions.
4 MODEL OF PARTNERS AND TASKS
We consider a family of tasks that share the same state space, action space, and transition dynamics (all of which we refer to as the domain), but can differ in the reward function of the underlying MDP. We also consider a family of partners that our ego agent would like to coordinate with. The underlying model governing the actions taken by a partner at a state of a given task can be seen in Figure 3. For a new task, a new reward function is sampled from θR, while the domain of the MDP is fixed. For a new partner, we sample from θρ a new function ρ, which maps the Q-function at a state of the task to an action. We refer to ρ as the convention of a partner. In other words, the action a^i_{ts} of a partner i at state s of task t depends on the partner’s conventions and (indirectly through the Q-function at state s) the reward of the task. θρ is the distribution over conventions and can correspond to, for example, the distribution over emergent conventions from AI agents trained via self-play, or can be derived from human partners.
More formally, for a fixed domain let Q^R_s(a_p) = max_{a_e} Q^R(s, a_e, a_p), where Q^R is the optimal Q-function for the two-player MDP with reward R (Boutilier, 1996; Oliehoek et al., 2008). In other words, Q^R_s is the Q-function from the partner’s perspective at state s in the task with reward R, assuming best response by the ego agent. Then, the convention of a partner, ρ : S × R^{|A_p|} → A_p, determines the partner’s action at state s, given by ρ^i(s, Q^R_s) for partner i. For example, we might expect the distribution of actions across different partners at a state s to follow Boltzmann rationality (Ziebart et al., 2008): E_{ρ∼θρ}[1[ρ(s, Q^R_s) = a_p]] ∝ exp(Q^R_s(a_p)). Lastly, we assume that behavior at different states is uncorrelated: a choice of action at one state tells us nothing about actions at other states.
Given this formulation, we can hope to learn the underlying structure of a task by playing with a number of partners on the same task. For example, if many sampled partners all take the same action a_p at a state s, it is likely that Q^R_s(a_p) is much higher than the second best action, i.e., the optimal action is dependent on the rules of the task as opposed to the arbitrariness of partner strategies. Therefore a new partner will likely take action a_p as well. Additionally, when coordinating with the same partner across different tasks, we can expect to see developed conventions persist across states with similar Q-functions. In particular, if Q^{R_1}_s = Q^{R_2}_s for two different tasks with reward functions R_1 and R_2, then ρ^i(s, Q^{R_1}_s) = ρ^i(s, Q^{R_2}_s), so a partner will take the same action at state s across the two tasks (e.g. context B in Figure 2 across Task 1 and 2).
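As an illustration of this partner model, the sketch below samples a convention ρ for the bandit example by drawing, independently at every context, one action from a Boltzmann distribution over the partner-side Q-values. The Q-values, temperature, and number of sampled partners are placeholders for illustration, not values used in the paper.

import numpy as np

def sample_convention(q_values_per_state, temperature=1.0, rng=None):
    """q_values_per_state: dict mapping state -> list of Q^R_s(a_p) values.
    Returns a convention rho: a fixed choice of action at every state,
    sampled proportionally to exp(Q^R_s(a_p) / temperature)."""
    if rng is None:
        rng = np.random.default_rng()
    convention = {}
    for state, q in q_values_per_state.items():
        logits = np.asarray(q, dtype=float) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        convention[state] = int(rng.choice(len(q), p=probs))
    return convention

# Placeholder Q-values for the two contexts of Task 1: context A has one clearly
# best arm, while context B has two equally good arms, so sampled partners will
# split between a2 and a4 in context B but agree on a1 in context A.
q_values = {"A": [5.0, 0.0, 0.0, 0.0], "B": [0.0, 5.0, 0.0, 5.0]}
partners = [sample_convention(q_values, temperature=0.5) for _ in range(5)]
print(partners)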
We note that the roles of partners and tasks in our model are asymmetrical, assuming rational partners. A task can completely determine the actions of all partners at states with one good action (e.g. context A in Figure 2), whereas a partner cannot blindly pick the same action across all tasks. This asymmetry is reflected in our learning framework and our setup of adapting to new partners and new tasks.
5 LEARNING RULES AND CONVENTIONS
With the goal of adapting to new partners and tasks in mind, we propose a framework aimed at separating the rule representations associated with a task from the convention representations
associated with a partner. At training time, we assume access to a task and a set of partners sampled from the same partner distribution θρ. When adapting to a new partner sampled from θρ, we would like to learn a new convention representation but keep the learned rule representation. When adapting to a new task with the original partners, we would like the opposite: to learn a new rule representation but keep the learned convention representation. In our setup, the ego agent knows the identity of the tasks and partners, and the goal is to maximize the total reward with a new task and partner (the combination of which may be new to the ego agent). Naturally, this points us towards a modular architecture that enables integrating new combinations of partner and task representations.
We propose the architecture shown in Figure 4, where we learn to play with each of the training partners via reinforcement learning. Each rectangular module represents a multi-layer perceptron, and each box with grey bars represents a distribution over actions in the action space of the MDP. When training with multiple partners on the same task, the policy network uses one single shared task module g^t, and uses one partner module g^p_i for each partner i. The task module takes the state observations s as input, and outputs latent variables z and an action distribution g^t(a|s). Then, each partner module takes z as input, and outputs an action distribution g^p_i(a|z). Here, g^p_i(a|z) does not represent partner i’s action distribution, but rather represents our ego agent’s action distribution in response to partner i. The policy π_i of the policy network for partner i is set to be proportional to the product of the action distributions:
π_i(a|s) = (1/Z_i) · g^t(a|s) · g^p_i(a|z)
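A minimal PyTorch sketch of this modular policy is given below; the layer sizes, the dimensionality of z, and the renormalization of the product of distributions are assumptions for illustration (the exact module sizes used in the paper are listed in Table 1).

import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskModule(nn.Module):
    def __init__(self, obs_dim, z_dim, num_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.z_head = nn.Linear(hidden, z_dim)              # latent z passed to partner modules
        self.action_head = nn.Linear(hidden, num_actions)   # logits of g^t(a|s)

    def forward(self, obs):
        h = self.body(obs)
        return self.z_head(h), F.softmax(self.action_head(h), dim=-1)

class PartnerModule(nn.Module):
    def __init__(self, z_dim, num_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_actions))

    def forward(self, z):
        return F.softmax(self.net(z), dim=-1)                # g^p_i(a|z)

def policy_for_partner(task_module, partner_module, obs, eps=1e-8):
    """pi_i(a|s) proportional to g^t(a|s) * g^p_i(a|z)."""
    z, g_t = task_module(obs)
    g_p = partner_module(z)
    unnormalized = g_t * g_p
    return unnormalized / (unnormalized.sum(dim=-1, keepdim=True) + eps)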
As it stands, many possible combinations of task and partner module outputs could lead to optimal action distributions π_i(a|s) for all i. To encourage the task/partner modules to learn the rule/convention representations respectively, we include a regularization term that pushes g^t(a|s) to be the marginal best-response strategy over partners at state s. In practice, this amounts to minimizing the Wasserstein distance between the task module’s output and the average best-response strategies π_i:
D(s) = Σ_{a∈A} | g^t(a|s) − (1/n) Σ_i π_i(a|s) |

By pushing the task module g^t to learn the marginal best-response strategy, we can cleanly capture the rule-dependent representations of the task, to be used when adapting to a new partner. In fact, under some assumptions, we can show that g^t is learning the optimal representation, as it is a sufficient statistic of the distribution over best-response strategies across possible partners.
In particular, for a fixed task let f(s, a_e, a_p) represent the probability of the ego agent taking action a_e using its best-response strategy to partner action a_p at state s, so that Σ_{a_e} f(s, a_e, a_p) = 1. We say the ego agent is deterministic if it can only take on deterministic strategies: for all s, a_p there exists a_e such that f(s, a_e, a_p) = 1.

Lemma 1. For a deterministic ego agent, the marginal best-response strategy across partners is a sufficient statistic of the distribution of best-response strategies across partners.
Next, we lay out the learning procedure in more detail in Algorithm 1. We use the task module g^t and the partner module g^p_i when playing with partner i. We push the task module to learn the marginal best-response action by adding the loss term D on top of a standard policy gradient (PG) loss term.
Algorithm 1: Learning Separate Representations for Partners and Tasks
Input: An MDP M, n partners ρ_1, ..., ρ_n ∼ θρ
Output: Policy network modules for adapting to new partners from θρ and new tasks
1: Initialize the task module g^t and n partner modules g^p_1, ..., g^p_n
2: G ← {g^t, g^p_1, ..., g^p_n}; T ← number of iterations
3: for j ← 1, ..., T do
4:   for i ← 1, ..., n do
5:     Collect rollouts τ = (s, a, r) using {g^t, g^p_i} with partner ρ_i on M
6:     Update G with loss L_PG(g^t, g^p_i, τ) + λ E_s[D(s)]
Return: G
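The sketch below mirrors Algorithm 1 with a generic policy-gradient update, building on the TaskModule and PartnerModule classes sketched earlier. The rollout collection, the policy-gradient loss, and the optimizer settings are stand-ins (the paper trains with PPO via Stable Baselines); only the marginal-matching regularizer λ E_s[D(s)] is spelled out, and detaching the marginal target is one possible design choice.

import torch

def marginal_regularizer(task_module, partner_modules, obs_batch):
    """E_s[D(s)]: L1 gap between g^t(a|s) and the average policy over partners."""
    z, g_t = task_module(obs_batch)
    pis = []
    for partner_module in partner_modules:
        g_p = partner_module(z)
        pi = g_t * g_p
        pis.append(pi / pi.sum(dim=-1, keepdim=True))
    marginal = torch.stack(pis).mean(dim=0).detach()  # treat the marginal as a fixed target
    return (g_t - marginal).abs().sum(dim=-1).mean()

def train(task_module, partner_modules, partners, env, iters, lam, collect_rollouts, pg_loss):
    params = list(task_module.parameters()) + [p for m in partner_modules for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=3e-4)
    for _ in range(iters):
        for i, partner in enumerate(partners):
            # collect_rollouts and pg_loss are placeholder callables for the RL machinery
            obs, acts, rews = collect_rollouts(env, task_module, partner_modules[i], partner)
            loss = pg_loss(task_module, partner_modules[i], obs, acts, rews)
            loss = loss + lam * marginal_regularizer(task_module, partner_modules, obs)
            opt.zero_grad()
            loss.backward()
            opt.step()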
Adapting to new partner When adapting to a new partner, we reuse the task module g^t along with a reinitialized partner module, and train both modules. At states with only one optimal action a* regardless of the partner’s convention, the marginal best-response strategy will concentrate on a*. As such, the task module g^t will immediately push the ego agent to choose a*, improving adaptation time. In contrast, at states with many possible optimal actions depending on the partner’s conventions, the marginal best-response strategy will spread the action distribution among the possible optimal actions, allowing the ego agent to explore amongst the optimal options with the new partner.
Coordinating on a new task When faced with a new task, we would like to recognize the parts of the task where conventions with old partners on an old task carry over. In particular, in states for which the joint Q-function has not changed, our partner models will take the same actions as before, and our ego agent can employ the same response as before. With this in mind, suppose we have trained a task module g^t and a set of 2n partner modules g^p_1, ..., g^p_{2n} with a set of partners on an old task. We can learn to coordinate on a new task with partners 1 to n by fine-tuning the task module g^t, paired with partner modules g^p_1, ..., g^p_n with frozen parameters. At states where the joint Q-function has not changed, the same latent variable outputs z will work well for all partners, so the task module will output the same z and the actions of the ego agent will not change. Then, we can coordinate with partners n+1 to 2n in a zero-shot manner by pairing g^t with partner modules g^p_{n+1}, ..., g^p_{2n}.
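Both adaptation procedures reduce to choosing which modules to reinitialize or freeze. A sketch is given below; it reuses the PartnerModule class from the earlier sketch, and the helper names are hypothetical.

import copy

def adapt_to_new_partner(task_module, z_dim, num_actions):
    """New partner, same task: keep g^t, reinitialize a fresh partner module,
    and train both (the task module keeps steering the agent toward actions
    that were optimal for every training partner)."""
    new_partner_module = PartnerModule(z_dim, num_actions)
    return task_module, new_partner_module

def adapt_to_new_task(task_module, partner_modules_first_half):
    """Same partners, new task: fine-tune a copy of g^t while freezing the
    partner modules, so the learned conventions are preserved."""
    tuned_task_module = copy.deepcopy(task_module)
    for module in partner_modules_first_half:
        for p in module.parameters():
            p.requires_grad_(False)
    return tuned_task_module, partner_modules_first_half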
5.1 UNDERSTANDING CONVENTIONS
When adapting to a new task with the same partner, we are relying on the hypothesis that our partners will carry over the same conventions developed on an earlier task, at states where the Q-function has not changed. Here, we would like to test this hypothesis and study if and when conventions persist across tasks, through a study of human-human interactions in the collaborative contextual bandit task.
User Study: Conventions in Human-Human Interactions. We conducted a study with 23 pairs of human partners (46 subjects), using the collaborative contextual bandit task from Section 3. The study was conducted via an online interface, with partners in separate rooms. The actions of partners are revealed to each other at the end of each try, and the partners had no other form of communication.
- Independent Variables: We vary the prize of the arms at different contexts, i.e., which arms are good to coordinate on. Pairs of partners were given 5 tries to coordinate on 3 different contexts (as shown in Fig. 5). We then ask the same pair of partners to coordinate on the Test task with 3 new contexts.
- Dependent Measures: We measure the score of each pair’s first try at each context. The maximum score in each run for each context is 1, corresponding to the two partners picking the same good arm.
- Hypothesis: Our hypothesis is that partners can coordinate well on the test task by carrying over conventions developed on the train task.
- Results: On the left of Figure 5 we plot the average score of the first try (zero-shot) of our participants at solving each context in each task. In the Train task (grey bars), the partner pairs are coordinating for the first time. As expected, they generally had no trouble coordinating on Train-A (since there is only 1 good option), but scored lower on Train-B and Train-C (since there are 2 good options each).
Immediately following the 5 trials on the training task, the partner pairs coordinated on each context of the Test task (orange bars in Fig. 5), to see if they can leverage developed conventions on a new task. On Test-A, they did well since there is only one good option. However, they coordinated much better on Test-C compared to Test-B, even though both contexts have two good options. The main difference between the two contexts is that Test-C shares the same set of optimal actions (a2 and a4) as Train-C, whereas Test-B does not share the same set of optimal actions as Train-B.
Overall, this study suggests that human partners can successfully carry over conventions developed through repeated interactions with their partners, and coordinate better when carrying conventions over across tasks at states where the set of optimal actions is similar (e.g., the higher score for context C, which keeps the same set of optimal actions across the train and test tasks).
6 EXPERIMENTS
We experiment with our approach on three coordination tasks: collaborative contextual bandit, collaborative block placing, and Hanabi. We show results on adaptation to new partners for a fixed task, and zero-shot coordination with the same partners on a new task. We compare with 1) a baseline (BaselineAgg) that learns a shared policy network to play with all training partners, using the average of the gradients with each partner, 2) a baseline (BaselineModular) that similarly uses a modular policy approach with separate modules for each partner, but does not explicitly represent the marginal best-response strategy, and 3) First-Order MAML (Nichol et al., 2018). For our method, we vary λ from 0.0 to 0.5: a higher value pushes the task module output to more closely match the marginal action distribution. We also test the use of a low-dimensional z (the interface between the task and partner modules). In general, we find that our method with regularization performs the best, and that it is unclear whether using a low-dimensional z is helpful. The partners in our experiments were either generated via self-play or were hand-designed to use varying conventions.
Contextual Bandit We study the collaborative contextual bandit task described in Section 3, using 4 contexts and 8 arms. We study variations of the task by altering the number of contexts with more than one good option (i.e. symmetries). We write “Arms m” to refer to a task having m contexts with symmetries – coordination should be easier for tasks having fewer contexts with symmetries.
In Figure 6a-c we show results on adaptation to new hand-designed partners (Figure 11 for self-play partners), varying the number of contexts with symmetries. We also plot the green oracle curve, by using our learning framework along with oracle best-response marginals (assuming uniform distribution over optimal actions across partners). To see if our method learns the correct marginals, we measure the Wasserstein distance between the learned marginals and the oracle marginals, in Figure 7. Using λ = 0.5 produces task module outputs that closely match the oracle marginals. To test zero-shot coordination with the same partners on new tasks, we tweak the task by altering contexts with only one good action, keeping the contexts with symmetries the same (similar to the change in Figure 2). We use hand-designed partners to make sure that partners use the same conventions at unchanged contexts. In Figure 10 we see that our method outperforms BaselineModular.
We include an additional plot in Figure 6d, where we instead use the Train task from the user study as shown in Figure 5. We use the data collected in our user study to create the partner policies, in lieu of the hand-designed partners. We observe similar trends – that the modular learning framework with marginal regularization performs the best.
Block Placing Next, we study a collaborative block placing task which involves a 2×2 goal-grid, with a single red block and a single blue block on the grid. The goal is to reconstruct the goal-grid from a blank working-grid, with partner 1 (P1) only controlling a red block and partner 2 (P2) only controlling a blue block. The partners take turns moving their blocks on the working-grid, and can see each other’s actions on the working-grid. However, only P1 can see the goal-grid configuration; P2 only receives information from the working-grid. Each game lasts 6 turns, with P1 taking the first turn. Specifically, on P1’s turn, it can move the red block to one of the four cells, remove it from the grid, or pass the turn (similarly for P2 / blue block). Blocks cannot be placed on top of each other. The partners share rewards, and earn 10 points for each correctly placed block, for a maximum of 20 points per game. P1 needs to both place its own red block correctly, and communicate the position of the blue block to P2. As such, success in this task requires a mixture of rule-dependent and convention-dependent skills.
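A compact sketch of the block-placing turn logic is shown below; the cell encoding, action ordering, and episode bookkeeping are illustrative choices, while the rules (6 turns, 10 points per correctly placed block, no stacking, P2 never sees the goal grid) follow the description above.

import numpy as np

EMPTY, RED, BLUE = 0, 1, 2
CELLS = 4  # 2x2 grid, cells indexed 0..3
# Actions 0-3: place own block in that cell; 4: remove own block; 5: pass.

def apply_action(working_grid, color, action):
    grid = working_grid.copy()
    if action == 5:                      # pass: leave the grid unchanged
        return grid
    grid[grid == color] = EMPTY          # each player controls a single block
    if action < CELLS and grid[action] == EMPTY:
        grid[action] = color             # blocks cannot stack; action 4 = remove
    return grid

def score(working_grid, goal_grid):
    reward = 0
    for color in (RED, BLUE):
        placed = np.where(working_grid == color)[0]
        target = np.where(goal_grid == color)[0]
        if placed.size and target.size and placed[0] == target[0]:
            reward += 10                 # 10 points per correctly placed block
    return reward

goal = np.array([RED, EMPTY, BLUE, EMPTY])
work = np.full(CELLS, EMPTY)
work = apply_action(work, RED, 0)        # P1 places red correctly
work = apply_action(work, BLUE, 2)       # P2 places blue correctly
print(score(work, goal))                 # 20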
In Figures 9a and 9b we plot the results on adaptation to new hand-designed and self-play partners, where our method outperforms the baselines for non-zero values of λ. To verify that our task module is learning a good task representation, we compute the distance between the task module output and the oracle marginal (Figure 7). We see that our approach with λ = 0.5 leads to the smallest gap from the oracle marginals. Finally, similar to the contextual bandit task, we test zero-shot coordination with the same partners on new tasks in Figure 10. We tweak the task by changing the colors of target blocks, and use hand-designed partners to ensure the developed conventions persist. Again, our method performs better on the new task/partner combination than BaselineModular. We provide more details of the experimental setup in the Appendix.
Hanabi Finally, we experiment on a light version of the collaborative card game Hanabi, which has been studied in many related works in MARL. We focus on the two-player version of Hanabi, with 1 color and 5 ranks (for a maximum score of 5). Since it is not straightforward how to create a
variety of hand-designed partners, or tweak the task while maintaining the same symmetries, we use self-play partners only and omit experiments on coordinating on a new task with the same partners. The results on adaptation to new self-play partners are shown in Figure 9c.
7 CONCLUSION
We study the problem of adapting to new settings in multi-agent interactions. To develop AI-agents that quickly adapt to new partners and tasks, we introduced a framework that learns rule-dependent and convention-dependent representations. Our AI agents can adapt to conventions from new partners without re-learning the full complexities of a task, and can carry over existing conventions with same partners on new tasks. We run a study of human-human interactions that suggests human conventions persist across tasks with similar symmetries. Our experiments show improvements in adapting to new settings on a contextual bandit task, a block placing task, and Hanabi.
ACKNOWLEDGMENTS
This work was supported by NSF (#1941722, #2006388), and the DARPA HICON-LEARN project.
A PROOF OF LEMMA 1
Proof. The marginal of the best-response strategy across partners at state s of a task with reward R can be written as:

p(a_e) = E_{ρ∼θρ}[f(s, a_e, ρ(s, Q^R_s))]

When the ego agent is deterministic, its best-response strategy f(s, a_e, a_p) can be rewritten as a function f′ where f′(s, a_p) = a_e if f(s, a_e, a_p) = 1. Then, we can write the likelihood of a best-response strategy from a newly sampled partner as a function of the marginal:

P_{ρ∼θρ}[f′(s, ρ(s, Q^R_s)) = a_e] = p(a_e)
Loosely speaking, when the best-response strategies are not deterministic, storing only their marginals throws away information. But in the case when the strategies are deterministic, each strategy can be represented by a single action, so we are simply showing that storing their marginals does not throw away information.
B ARCHITECTURE DETAILS AND HYPERPARAMETERS
The specific size of each module in Figure 4 is shown in Table 1. We train the policy network using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with the Stable Baselines software package (Raffin et al., 2019). For the state-value function, we use a network of the same size, and combine the value predictions from the task module and the partner module by adding them together.
C EXPERIMENTAL SETUP
Code for the experiments in our paper is available at https://github.com/Stanford-ILIAD/Conventions-ModularPolicy.
C.1 CONTEXTUAL BANDIT
Task setup We use a contextual bandit with 4 contexts and 8 arms. We label the contexts 1, ..., 4 and the arms 1, ..., 8. In the initial setup, at context i the optimal actions (green boxes) are arms i and i + 4. So, all 4 contexts have symmetries, since there are multiple equally optimal actions. We experiment with different versions of the task by changing the number of contexts with symmetries, from 4 down to 1. To get m contexts with symmetries, we set the optimal action of a context i to only arm i, if i > m. So, contexts [1, ..., m] have two equally optimal actions, and contexts [m+1, ..., 4] have only one optimal action. Naturally, when there are fewer contexts with symmetries, coordination is easier.
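A sketch of how the prize table with m symmetric contexts can be generated is shown below; indexing here is 0-based for convenience, while the text above uses 1-based labels, and the helper name is an illustrative assumption.

import numpy as np

def make_bandit_task(num_contexts=4, num_arms=8, num_symmetric=4):
    """Context i has good arms {i, i + num_arms/2} if i < num_symmetric, else only arm i."""
    prizes = np.zeros((num_contexts, num_arms))
    for i in range(num_contexts):
        prizes[i, i] = 1
        if i < num_symmetric:
            prizes[i, i + num_arms // 2] = 1
    return prizes

print(make_bandit_task(num_symmetric=2))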
Partner setup We use both partners obtained by self-play and hand-designed partners. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. For the hand-designed partners, the partners were designed to pick any one of the equally optimal actions and to stick to the same choice. The distribution over choices across the hand-designed partners was made to be uniform over the optimal actions. We used a set of 10 self-play partners and 4 hand-designed partners for training, and a set of the same size for testing.
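The hand-designed partners can be realized as simple lookup policies; a sketch, reusing make_bandit_task from the previous snippet and assigning the equally optimal arms uniformly at construction time, is:

import numpy as np

def make_hand_designed_partners(prizes, num_partners):
    """Each partner commits to one of the equally optimal arms per context and
    sticks to it; choices are cycled so they are uniform across partners."""
    partners = []
    for k in range(num_partners):
        policy = {}
        for context in range(prizes.shape[0]):
            optimal_arms = np.flatnonzero(prizes[context] == prizes[context].max())
            policy[context] = int(optimal_arms[k % len(optimal_arms)])
        partners.append(policy)
    return partners

partners = make_hand_designed_partners(make_bandit_task(num_symmetric=2), num_partners=4)
print(partners)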
Tweaked task For coordinating on a new task (Figure 10), we change the optimal action for contexts where there are no symmetries. As such, this is only possible when the number of contexts with symmetries m is less than 4. For the new task, for a context i where i > m, we modify the optimal action from arm i to arm i + 4. The contexts with symmetries remain unchanged. For the hand-designed partners, we ensured that they stuck to the same choices (conventions) as in the initial task in the contexts with symmetries.
C.2 BLOCK PLACING
Task setup We used the task setup described in Section 6. In our experiments, the ego agent is P1 and the partner agents are P2.
Partner setup We use both partners obtained by self-play and hand-designed partners. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. For the hand-designed partners, we design the partners to interpret the first action (turn 1) of the ego agent based on a fixed permutation of the block positions. For example, a possible partner may always place the blue block top-right whenever the ego agent places the red block top-left on turn 1. We used a set of 6 self-play partners and 3 hand-designed partners for training, and a set of 6 self-play partners and 6 hand-designed partners for testing.
Tweaked task For coordinating on a new task (Figure 10), we change the reward function by reversing the interpretation of the red and white cells in the goal grid. In other words, P1 has to place the red block on the working grid in one of the two positions that is a white cell on the goal grid. P2’s goal is unchanged; P2 still has to place the blue block where the blue block appears in the goal grid, so the relevant symmetries remain unchanged. For the hand-designed partners, we ensured that they stuck to the same permutations (conventions) as the initial task when interpreting the ego agent’s action on turn 1.
C.3 HANABI
Task setup We used the Hanabi Learning Environment package (Bard et al., 2020), with the following configuration: 1 color, 5 ranks, 2 players, hand size 2, 3 information tokens, and 3 life tokens. The maximum score is 5 points.
Partner setup We use only partners obtained by self-play. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. We used a set of 4 self-play partners for training, and a set of the same size for testing.
C.4 ADDITIONAL PLOTS
C.5 USER STUDY INDIVIDUAL RESULTS
Here we provide more detailed results from our user study. For each context and task we report the joint actions made by each pair of users on their first try and on their fifth try. | 1. What is the focus of the paper regarding human-AI collaboration using deep neural networks?
2. What are the strengths of the proposed method, particularly in its ability to separate learning the rules of the environment from the conventions used to coordinate with humans?
3. What are the weaknesses of the approach, especially when considering the fact that humans will typically adapt to whatever policy the robot plays?
4. How might the method extend to collaboration with adaptive humans?
5. Do you have any questions or suggestions regarding the implementation or application of the proposed method? | Review | Review
This paper proposes that in human-AI collaboration using deep neural nets, the AI agents we train should separate learning the rules of the environment from the conventions used to coordinate with humans in that environment. It proposes a simple method to do so: learn a single task-specific module that is always used, as well as many partner-specific modules that are used with specific partners. Intuitively, when trained with multiple partners, the task-specific module should learn heuristics that work across partners (the environment-specific rules) while the partner-specific model should learn personalized heuristics for each partner (the conventions). They further incentivize this by regularizing the task-specific module towards the average of the partner policies.
Quality: The one qualm I have is that the environments studied are relatively simple (though even 1-color Hanabi is a fairly challenging coordination problem). Other than that the paper is high quality. I especially appreciated the user study.
Clarity: I found the paper reasonably clear, though some parts took some time to understand.
Originality: To my knowledge, this is the first paper exploring the distinction between rules and conventions within the deep RL paradigm, and it shows good results both in simulation and with real humans in a user study (albeit in a very simple toy domain).
Significance: Human-AI collaboration is clearly important and significant, and the application of deep RL to human-AI collaboration has grown in the last 2-3 years. This paper extends this field with an important contribution.
The main weakness of the approach I see is that it doesn’t have an obvious way to handle the fact that humans will typically adapt to whatever policy the robot plays. This may not happen in the simple environments considered in this paper, but definitely does happen in larger environments. Perhaps this technique would work anyway: arguably, an adaptive human just means that the convention changes, and simply continuing to train the partner module could be enough for the robot to adapt to this change in the human’s convention.
Regardless, I think even the contribution of how to deal with multiple different non-adaptive humans is significant and relevant to ICLR.
Questions for the authors:
Why does the partner module operate “on top of” the task module? Why not instead have both modules take in the state as an input and produce probability distributions over actions that are then multiplied together?
How might this extend to collaboration with adaptive humans?
Typos / nitpicks:
The phrase “ego agent” was confusing to me, and I think it wasn’t explained anywhere? I did eventually figure it out though. |
ICLR | Title
On the Critical Role of Conventions in Adaptive Human-AI Collaboration
Abstract
Humans can quickly adapt to new partners in collaborative tasks (e.g. playing basketball), because they understand which fundamental skills of the task (e.g. how to dribble, how to shoot) carry over across new partners. Humans can also quickly adapt to similar tasks with the same partners by carrying over conventions that they have developed (e.g. raising hand signals pass the ball), without learning to coordinate from scratch. To collaborate seamlessly with humans, AI agents should adapt quickly to new partners and new tasks as well. However, current approaches have not attempted to distinguish between the complexities intrinsic to a task and the conventions used by a partner, and more generally there has been little focus on leveraging conventions for adapting to new settings. In this work, we propose a learning framework that teases apart rule-dependent representation from convention-dependent representation in a principled way. We show that, under some assumptions, our rule-dependent representation is a sufficient statistic of the distribution over best-response strategies across partners. Using this separation of representations, our agents are able to adapt quickly to new partners, and to coordinate with old partners on new tasks in a zero-shot manner. We experimentally validate our approach on three collaborative tasks varying in complexity: a contextual multi-armed bandit, a block placing task, and the card game Hanabi.
1 INTRODUCTION
Humans collaborate well together in complex tasks by adapting to each other through repeated interactions. What emerges from these repeated interactions is shared knowledge about the interaction history. We intuitively refer to this shared knowledge as conventions. Convention formation helps explain why teammates collaborate better than groups of strangers, and why friends develop lingo incomprehensible to outsiders. The notion of conventions has been studied in language (Clark & Wilkes-Gibbs, 1986; Clark, 1996; Hawkins et al., 2017; Khani et al., 2018) and also alluded to in more general multiagent collaborative tasks (Boutilier, 1996; Stone et al., 2010; Foerster et al., 2019; Carroll et al., 2019; Lerer & Peysakhovich, 2019; Hu et al., 2020). For example, Foerster et al. (2019) trained agents to play the card game Hanabi, and noted the emergent convention that “hinting for red or yellow indicates that the newest card of the other player is playable”.
One established characterization of a convention that is commonly used (Boutilier, 1996; Hawkins et al., 2017; Lerer & Peysakhovich, 2019) is an arbitrary solution to a recurring coordination problem (Lewis, 1969). A convention is thus one of many possible solutions that a group of partners happens to converge to. This is in contrast to problem solutions that are enforced by the rule constraints, and would have arisen no matter how the partners collaborated and what behavior they converged to. Success in a collaborative task typically involves learning both types of knowledge, which we will refer to as convention-dependent and rule-dependent behavior. The distinction between these two types of behavior has been studied extensively in the linguistic literature (Franke et al.; Brochhagen, 2020). In this work, we focus on the less-explored setting of implicit communication, and provide a concrete approach to learn representations for these two different types of behavior.
In the context of multi-agent or human-AI collaboration, we would like our AI agents to adapt quickly to new partners. AI agents should be able to flexibly adjust their partner-specific convention-
dependent behavior while reusing the same rule-dependent behavior to simplify the learning problem – just like how humans quickly adapt when playing basketball with a new friend, without the need to re-learn the rules of the game. We would also like our AI agents to coordinate well on similar tasks when paired with the same partners – just like how humans can coordinate better when playing a new sport but with an old friend.
Although many existing works similarly recognize conventions as an important factor of successful collaboration, they do not focus on separating conventions from rules of the task. This means that adapting to a new partner can be as difficult as learning a new task altogether. For example, existing techniques that emphasize modeling the partner’s policy, such as theory of mind (Simon, 1995; Baker et al., 2017; Brooks & Szafir, 2019) or multi-agent reinforcement learning (Foerster et al., 2018), attempt to model everything about the agent’s belief of the partner’s state and policies. Such belief modeling approaches very quickly become computationally intractable, as opposed to solely focusing on the relevant conventions developed with a partner.
To address the challenges above, we propose a framework that explicitly separates conventiondependent representations and rule-dependent representations through repeated interactions with multiple partners. After many rounds of solving a task (e.g. playing basketball) with different partners, an AI agent can learn to distinguish between conventions formed with a specific partner (e.g. pointing down signals bounce pass) and intrinsic complexities of the task (e.g. dribbling). This enables us to leverage the representations separately for fast adaptation to new interactive settings.
In the rest of this paper, we formalize the problem setting, and describe the underlying model of partners and tasks. Next, we present our framework for learning separations between rule and convention representations. We show that, under some conditions, our rule representation learns a sufficient statistic of the distribution over best-response strategies. We then run a study on humanhuman interactions to test if our hypothesis – that partners can carry over the same conventions across tasks – indeed holds for human subjects. Finally, we show the merits of our method on 3 collaborative tasks: contextual bandit, block placing, and a small scale version of Hanabi (Bard et al., 2020).
2 RELATED WORK
Convention formation has been studied under the form of iterated reference games (Hawkins et al., 2017; 2019), and language emergence (Mordatch & Abbeel, 2018; Lazaridou et al., 2017). In these works, partners learn to reference objects more efficiently by forming conventions through repeated interactions. But in these tasks, there is little intrinsic task difficulty beyond breaking symmetries.
In multi-agent reinforcement learning, techniques such as self-play, cross-training, or opponent learning (Foerster et al., 2018) have been used to observe emergent behavior in complex, physical settings (Nikolaidis & Shah, 2013; Liu et al., 2019; Baker et al., 2020). In addition, convention formation has been shown in tasks like Hanabi (Foerster et al., 2019; Hu & Foerster, 2019), Overcooked (Carroll et al., 2019; Wang et al., 2020), and negotiation tasks (Cao et al., 2018), where agents learn both how to solve a task and how to coordinate with a partner. But, these works qualitatively analyze emergent conventions through post-hoc inspection of the learned policies, and do not learn representations
that separate conventions from rule-dependent behavior. This makes it difficult to leverage learned conventions when adapting to new partners. More recently, an approach was proposed to design agents that avoid relying on conventions altogether; instead, the agents aim for unambiguous but potentially sub-optimal strategies (Hu et al., 2020).
Other approaches for studying conventions include game theory (Lerer & Peysakhovich, 2019) and theory of mind (Simon, 1995; Rabinowitz et al., 2018). Pragmatic reasoning (Goodman & Frank, 2016) is another framework that aims to estimate beliefs over the partner’s states and policies, but all these frameworks that rely on modelling beliefs quickly become intractable. Instead of approximating these belief modelling frameworks (Monroe & Potts, 2015), we instead focus on learning representations for rules and conventions. Our work shares similarities with the paradigms of meta-learning (Schmidhuber, 1987; Finn et al., 2017) or multi-task learning (Caruana, 1997), if we view collaboration with different partners as new but related tasks. However, we are also interested in how conventions persist across tasks with similar symmetries.
More related to our work is perhaps modular policy networks (Devin et al., 2017), which learns policy modules and composes them together to transfer across robot arms with different degrees of freedom and across different tasks. However, their approach relies on input cleanly split between background observations of the task, and robot’s internal observations. We do not assume such a split in the input; our approach tries to learn the separation between rules and partners with one input channel.
3 PRELIMINARIES
We begin by formally defining our problem setting as a two-player Markov Decision Process (MDP).
Two-Player MDP We consider a two-agent MDP with identical payoffs, which is a tuple {S, {Ae, Ap}, P,R}. S is a set of states, Ae is a set of actions for agent e, Ap is a set of actions for agent p. In general, e represents the ego agent (that we control), and p represents agents who partner with e. P : S × Ae × Ap × S → [0, 1] is the probability of reaching a state given the current state and actions of all agents, and R : S × S → R is the real-valued reward function. Since we are interested in repeated interactions, we consider finite-horizon MDPs. A policy π is a stochastic mapping from a state to an action. For our setting, our policy π = (πe, πp) has two parts: πe : S → Ae and πp : S → Ap, mapping the state into actions for agent e and p respectively. We also consider collaborative tasks that are partially observable in our experiments. In addition, some tasks are turn-based, which can be handled by including the turn information as part of the state, and ignoring the other player’s actions. Here, we focus on tasks with discrete state and action spaces.
Running Example: Collaborative Contextual Bandit We describe a collaborative task to ground our discussion around adapting to new partners, and adapting to new tasks with old partners. Consider a contextual bandit setting with 2 contexts, 4 actions, and a prize for each action. The two players each independently pick an arm to pull, and they score prize(a) points if they both picked the same arm a; otherwise, they score 0 points. An example is depicted in Figure 2, where green boxes represent arms with prize 1, the rest have prize 0, and red and blue circles show the player’s actions.
In Task 1 of Figure 2, context A only has one good arm a1 while the context B has two good arms a2 and a4. After repeated rounds of playing, two agents can eventually converge to coordinating on a convention that chooses one of the two good arms in context B (e.g. selecting the leftmost good arm a2). When the task shifts slightly as shown in Task 2 but context B remains the same across the two
tasks, we can reasonably expect the partners to adhere to the same convention they developed for context B of Task 1 when playing context B of Task 2.
There are two axes of generalization at play. First, for a fixed task we would like to learn the underlying structure of the task (e.g. green arm locations) to quickly adapt to a new partner when playing the same task. Second, for a fixed partner we would like to keep track of developed conventions to quickly coordinate with this partner on a new task. In the next sections, we define the notion of a new task and a new partner, and propose a framework to effectively generalize across both dimensions.
4 MODEL OF PARTNERS AND TASKS
We consider a family of tasks that share the same state space, action space, and transition dynamics (all of which we refer to as the domain), but can differ in the reward function of the underlying MDP. We also consider a family of partners that our ego agent would like to coordinate with. The underlying model governing the actions taken by a partner at a state of a given task can be seen in Figure 3. For a new task, a new reward function is sampled from θR, while the domain of the MDP is fixed. For a new partner, we sample from θρ a new function ρ, which maps the Q-function at a state of the task to an action. We refer to ρ as the convention of a partner. In other words, the action aits of a partner i at state s of task t depends on the partner’s conventions and (indirectly through the Q-function at state s) the reward of the task. θρ is the distribution over conventions and can correspond to, for example, the distribution over emergent conventions from AI-agents trained from selfplay, or can be derived from human partners.
More formally, for a fixed domain let QRs (ap) = maxae Q
R(s, ae, ap), where QR is the optimal Q-function for the two player MDP with reward R (Boutilier, 1996; Oliehoek et al., 2008). In other words, QRs is the Q-function from the partner’s perspective at state s in the task with reward R assum-
ing best response by the ego agent. Then, the convention of a partner ρ : S×R|Ap| → Ap determines the partner’s action at state s, given by ρi(s,QRs ). For example, we might expect the distribution of actions across different partners at a state s to follow Boltzmann rationality (Ziebart et al., 2008): Eρ∼θρ [1[ρ(s,QRs ) = ap]] ∝ exp(QRs (ap)). Lastly, we assume that behavior at different states are uncorrelated: a choice of action at one state tells us nothing about actions at other states.
Given this formulation, we can hope to learn the underlying structure of a task by playing with a number of partners on the same task. For example, if many sampled partners all make the same action ap at a state s, it is likely that QRs (ap) is much higher than the second best action, i.e., the optimal action is dependent on the rules of the task as opposed to the arbitrariness of partner strategies. Therefore a new partner will likely take action ap as well. Additionally, when coordinating with the same partner across different tasks, we can expect to see developed conventions persist across states with similar Q-functions. In particular, if QR1s = Q R2 s for two different tasks with reward functions R1 and R2, then ρi(s,QR1s ) = ρ i(s,QR2s ), so a partner will take the same action at state s across the two tasks (e.g. context B in Figure 2 across Task 1 and 2).
We note that the roles of partners and tasks in our model are asymmetrical, assuming rational partners. A task can completely determine the actions of all partners at states with one good action (e.g. context A in Figure 2), whereas a partner cannot blindly pick the same action across all tasks. This asymmetry is reflected in our learning framework and our setup of adapting to new partners and new tasks.
5 LEARNING RULES AND CONVENTIONS
With the goal of adapting to new partners and tasks in mind, we propose a framework aimed at separating the rule representations associated with a task from the convention representations
associated with a partner. At training time, we assume access to a task and a set of partners sampled from the same partner distribution θρ. When adapting to a new partner sampled from θρ, we would like to learn a new convention representation but keep the learned rule representation. When adapting to a new task with the original partners, we would like the opposite: to learn a new rule representation but keep the learned convention representation. In our setup, the ego agent knows the identity of the tasks and partners, and the goal is to maximize the total reward with a new task and partner (the combination of which may be new to the ego agent). Naturally, this points us towards a modular architecture that enables integrating new combinations of partner and task representations.
We propose the architecture shown in Figure 4, where we learn to play with each of the training partners via reinforcement learning. Each rectangular module represents a multi-layer perceptron, and each box with grey bars represents a distribution over actions in the action space of the MDP. When training with multiple partners on the same task, the policy network uses one single shared task module gt, and uses one partner module gpi for each partner i. The task module takes the state observations s as input, and outputs latent variables z and an action distribution gt(a|s). Then, each partner module takes z as input, and outputs an action distribution gpi (a|z). Here, g p i (a|z) does not represent partner i’s action distribution, but rather represents our ego agent’s action distribution in response to partner i. The policy πi of the policy network for partner i is set to be proportional to the product of the action distributions.
πi(a|s) = 1
Zi gt(a|s) · gpi (a|z)
As it stands, many possible combinations of task and partner module outputs could lead to optimal action distributions πi(a|s)∀i. To encourage the task/partner modules to learn the rule/convention representations respectively, we include a regularization term that pushes gt(a|s) to be the marginal best-response strategy over partners at state s. In practice, this amounts to minimizing the Wasserstein distance between the task module’s output and the average best-response strategies πi.
D(s) = ∑ a∈A ∣∣∣∣∣gt(a|s)− 1n∑ i πi(a|s) ∣∣∣∣∣ By pushing the task module gt to learn the marginal best-response strategy, we can cleanly capture the rule-dependent representations of the task, to be used when adapting to a new partner. In fact, under some assumptions, we can show that gt is learning the optimal representation, as it is a sufficient statistic of the distribution over best-response strategies across possible partners.
In particular, for a fixed task let f(s, ae, ap) represent the probability of the ego agent taking action ae using its best-response strategy to partner action ap at state s. So, ∑ ae f(s, ae, ap) = 1. We say the ego agent is deterministic if it can only take on deterministic strategies: ∀s, ap ∃ ae : f(s, ae, ap) = 1. Lemma 1. For a deterministic ego agent, the marginal best-response strategy across partners is a sufficient statistic of the distribution of best-response strategies across partners.
Next, we lay out the learning procedure in more detail in Algorithm 1. We use the task module gt and the partner module gpi when playing with partner i. We push the task module to learn the marginal best-response action by adding the loss term D on top of a standard policy gradient (PG) loss term.
Algorithm 1: Learning Separate Representations for Partners and Tasks Input: A MDP M , n partners ρ1 . . . ρn ∼ θρ Output: Policy network modules for adapting to new partners from θρ and new tasks
1 Initialize the task module gt and n partner modules gp1 , . . . , g p n 2 G,T ← {gt, gp1 , . . . , gpn}, number of iterations 3 for j ← 1, T do 4 for i← 1, n do 5 Collect rollouts τ = (s,a, r) using {gt, gpi } with partner ρi on M 6 Update G with loss LPG(gt, gpi , τ) + λEs[D(s)]
Return: G
Adapting to new partner When adapting to a new partner, we reuse the task module gt along with a reinitialized partner module, and train both modules. At states with only one optimal action a? regardless of the partner’s convention, the marginal best-response strategy will concentrate on a?. As such, the task module gt will immediately push the ego agent to choose a?, improving adaptation time. In contrast, at states with many possible optimal actions depending on the partner’s conventions, the marginal best-response strategy will spread the action distribution among the possible optimal actions, allowing the ego agent to explore amongst the optimal options with the new partner.
Coordinating on a new task When faced with a new task, we would like to recognize the parts of the task where conventions with old partners on an old task carry over. In particular, in states for which the joint Q-function has not changed, our partners models will take the same actions as before, and our ego agent can employ the same response as before. With this in mind, suppose we have trained a task module gt and a set of 2n partner modules gp1 , . . . , g p 2n with a set of partners on an old task. We can learn to coordinate on a new task with partners 1 to n by fine-tuning the task module gt, paired with partner modules gp1 , . . . , g p n with frozen parameters. At states where the joint Q-function has not changed, the same latent variable outputs z will work well for all partners, so the task module will output the same z and the actions of the ego agent will not change. Then, we can coordinate with partners n+ 1 to 2n in a zero-shot manner by pairing gt with partner modules gpn+1, . . . , g p 2n.
5.1 UNDERSTANDING CONVENTIONS
When adapting to a new task with the same partner, we are relying on the hypothesis that our partners will carry over the same conventions developed on an earlier task, at states where the Q-function has not changed. Here, we would like to test this hypothesis and study if and when conventions persist across tasks, through a study of human-human interactions in the collaborative contextual bandit task.
User Study: Conventions in Human-Human Interactions. We conducted a study with 23 pairs of human partners (46 subjects), using the collaborative contextual bandit task from Section 3. The study was conducted via an online interface, with partners in separate rooms. The actions of partners are revealed to each other at the end of each try, and the partners had no other form of communication.
- Independent Variables: We vary the prize of the arms at different contexts, i.e., which arms are good to coordinate on. Pairs of partners were given 5 tries to coordinate on 3 different contexts (as shown
in Fig. 5). We then ask the same pair of partners to coordinate on the Test task with 3 new contexts. - Dependent Measures: We measure the score of each pair’s first try at each context. The maximum score in each run for each context is 1, corresponding to the two partners picking the same good arm. - Hypothesis: Our hypothesis is that partners can coordinate well on the test task by carrying over over conventions developed on the train task. - Results: On the left of Figure 5 we plot the average score of the first try (zero-shot) of our participants at solving each context in each task. In the Train task (grey bars), the partner pairs are coordinating for the first time. As expected, they generally had no trouble coordinating on Train-A (since there is only 1 good option), but scored lower on Train-B and Train-C (since there are 2 good options each).
Immediately following the 5 trials on the training task, the partner pairs coordinated on each context of the Test task (orange bars in Fig. 5), to see if they can leverage developed conventions on a new task. On Test-A, they did well since there is only one good option. However, they coordinated much better on Test-C compared to Test-B, even though both contexts have two good options. The main difference between the two contexts is that Test-C shares the same set of optimal actions (a2 and a4) as Train-C, whereas Test-B does not share the same set of optimal actions as Train-B.
Overall, this study suggests that human partners can successfully carry over conventions developed through repeated interactions with their partners, and further coordinate better when carrying over conventions across tasks at states where the set of optimal actions are similar (e.g. the higher gap for same context such as context C across the train and test tasks).
6 EXPERIMENTS
We experiment with our approach on three coordination tasks: collaborative contextual bandit, collaborative block placing, and Hanabi. We show results on adaptation to new partners for a fixed task, and zero-shot coordination with same partners on a new task. We compare with 1) a baseline (BaselineAgg) that learns a shared policy network to play with all training partners, using the average of the gradients with each partner and 2) a baseline (BaselineModular) that similarly uses a modular policy approach with separate modules for each partner, but does not explicitly represent the marginal best-response strategy, and 3) First-Order MAML (Nichol et al., 2018). For our method, we vary λ from 0.0 to 0.5: a higher value pushes the task module output to more closely match the marginal action distribution. We also test the use of a low dimensional z (the interface between the task and partner module). In general, we find that our method with regularization performs the best, and that it is unclear if using a low dimensional z is helpful. The partners in our experiments were either generated via self-play or were hand-designed to use varying conventions.
Contextual Bandit We study the collaborative contextual bandit task described in Section 3, using 4 contexts and 8 arms. We study variations of the task by altering the number of contexts with more than one good option (i.e. symmetries). We write “Arms m” to refer to a task having m contexts with symmetries – coordination should be easier for tasks having fewer contexts with symmetries.
In Figure 6a-c we show results on adaptation to new hand-designed partners (Figure 11 for self-play partners), varying the number of contexts with symmetries. We also plot the green oracle curve, by using our learning framework along with oracle best-response marginals (assuming uniform distribution over optimal actions across partners). To see if our method learns the correct marginals, we measure the Wasserstein distance between the learned marginals and the oracle marginals, in Figure 7. Using λ = 0.5 produces task module outputs that closely match the oracle marginals. To test zero-shot coordination with the same partners on new tasks, we tweak the task by altering contexts with only one good action, keeping the contexts with symmetries the same (similar to the change in Figure 2). We use hand-designed partners to make sure that partners use the same conventions at unchanged contexts. In Figure 10 we see that our method outperforms BaselineModular.
We include an additional plot in Figure 6d, where we instead use the Train task from the user study as shown in Figure 5. We use the data collected in our user study to create the partner policies, in lieu of the hand-designed partners. We observe similar trends – that the modular learning framework with marginal regularization performs the best.
Block Placing Next, we study a collaborative block placing task which involves a 2× 2 goal-grid, with a single red block and a single blue block on the grid. The goal is to reconstruct the goal-grid from a blank working-grid, with partner 1 (P1) only controlling a red block and partner 2 (P2) only controlling a blue block. The partners take turns moving their blocks on the working-grid, and can see each other’s actions on the working-grid. However, only P1 can see the goal-grid configuration; P2 only receives information from the working-grid. Each game lasts 6 turns, with P1 taking the first turn. Specifically, on P1’s turn, it can move the red block to one of the four cells, remove it from the grid, or pass the turn (similarly for P2 / blue block). Blocks cannot be placed on top of each other. The partners share rewards, and earn 10 points for each correctly placed block, for a maximum of 20 points per game. P1 needs to both place its own red block correctly, and communicate the position of the blue block to P2. As such, success in this task requires a mixture of rule-dependent and convention-dependent skills.
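To make the mechanics concrete, the minimal sketch below implements the working-grid update and the shared score; the cell encoding, the function names, and the treatment of illegal placements as no-ops are simplifying assumptions rather than the exact environment implementation.

```python
N_CELLS = 4          # the 2 x 2 working grid, cells indexed 0..3
REMOVE, PASS = 4, 5  # the two non-placement actions

def step(working, player, action):
    """Apply one turn to the working grid, where `working` maps 'red'/'blue' to a cell or None.

    Blocks cannot stack, so a placement onto the other block's cell is treated as a no-op here
    (an illustrative choice; the real environment may handle illegal moves differently).
    """
    color, other = ('red', 'blue') if player == 1 else ('blue', 'red')
    if action < N_CELLS and working[other] != action:
        working[color] = action
    elif action == REMOVE:
        working[color] = None
    return working       # PASS leaves the grid unchanged

def score(working, goal):
    """Shared reward: 10 points per correctly placed block, 20 maximum."""
    return sum(10 for c in ('red', 'blue') if working[c] is not None and working[c] == goal[c])

print(score({'red': 0, 'blue': 3}, {'red': 0, 'blue': 2}))   # 10: only the red block is placed correctly
```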
In Figures 9a and 9b we plot the results on adaptation to new hand-designed and self-play partners, where our method outperforms the baselines for non-zero values of λ. To verify that our task module is learning a good task representation, we compute the distance between the task module output and the oracle marginal (Figure 7). We see that our approach with λ = 0.5 leads to the smallest gap from the oracle marginals. Finally, similar to the contextual bandit task, we test zero-shot coordination with the same partners on new tasks in Figure 10. We tweak the task by changing the colors of target blocks, and use hand-designed partners to ensure the developed conventions persist. Again, our method performs better on the new task/partner combination than BaselineModular. We provide more details of the experimental setup in the Appendix.
Hanabi Finally, we experiment on a light version of the collaborative card game Hanabi, which has been studied in many related works in MARL. We focus on the two-player version of Hanabi, with 1 color and 5 ranks (for a maximum score of 5). Since it is not straightforward how to create a
variety of hand-designed partners, or tweak the task while maintaining the same symmetries, we use self-play partners only and omit experiments on coordinating on a new task with the same partners. The results on adaptation to new self-play partners are shown in Figure 9c.
7 CONCLUSION
We study the problem of adapting to new settings in multi-agent interactions. To develop AI agents that quickly adapt to new partners and tasks, we introduced a framework that learns rule-dependent and convention-dependent representations. Our AI agents can adapt to conventions from new partners without re-learning the full complexities of a task, and can carry over existing conventions with the same partners on new tasks. We ran a study of human-human interactions that suggests human conventions persist across tasks with similar symmetries. Our experiments show improvements in adapting to new settings on a contextual bandit task, a block placing task, and Hanabi.
ACKNOWLEDGMENTS
This work was supported by NSF (#1941722, #2006388), and the DARPA HICON-LEARN project.
A PROOF OF LEMMA 1
Proof. The marginal of the best-response strategy across partners at state s of a task with reward R can be written as

p(a_e) = E_{ρ∼θ_ρ}[ f(s, a_e, ρ(s, Q^R_s)) ].

When the ego agent is deterministic, its best-response strategy f(s, a_e, a_p) can be rewritten as a function f′, where f′(s, a_p) = a_e if f(s, a_e, a_p) = 1. Then, we can write the likelihood of a best-response strategy from a newly sampled partner as a function of the marginal:

P_{ρ∼θ_ρ}[ f′(s, ρ(s, Q^R_s)) = a_e ] = p(a_e).
Loosely speaking, when the best-response strategies are not deterministic, storing only their marginals throws away information. But in the case when the strategies are deterministic, each strategy can be represented by a single action, so we are simply showing that storing their marginals does not throw away information.
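As a tiny numerical illustration of this point, consider four hypothetical deterministic partners at a single state: the empirical marginal of their (single-action) best responses is exactly the probability that a freshly sampled partner best-responds with that action.

```python
from collections import Counter

# Best responses f'(s, rho_k(s, Q_s^R)) of four hypothetical deterministic partners at one state.
best_responses = ['a2', 'a4', 'a2', 'a2']

marginal = {a: c / len(best_responses) for a, c in Counter(best_responses).items()}
print(marginal)  # {'a2': 0.75, 'a4': 0.25}: also the likelihood that a new partner's best response is a2 or a4
```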
B ARCHITECTURE DETAILS AND HYPERPARAMETERS
The specific size of each module in Figure 4 is shown in Table 1. We train the policy network using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with the Stable Baselines software package (Raffin et al., 2019). For the state-value function, we use a network of the same size, and combine the value predictions from the task module and the partner module by adding them together.
C EXPERIMENTAL SETUP
Code for the experiments in our paper is available at https://github.com/Stanford-ILIAD/Conventions-ModularPolicy.
C.1 CONTEXTUAL BANDIT
Task setup We use a contextual bandit with 4 contexts and 8 arms. We label the contexts 1, . . . , 4 and the arms 1, . . . , 8. In the initial setup, at context i the optimal actions (green boxes) are arms i
and i + 4. So, all 4 contexts have symmetries, since there are multiple equally optimal actions. We experiment with different versions of the task by changing the number of contexts with symmetries, from 4 down to 1. To get m contexts with symmetries, we set the optimal action of context i to only arm i, if i > m. So, contexts [1, . . . ,m] have two equally optimal actions, and contexts [m+1, . . . , 4] have only one optimal action. Naturally, when there are fewer contexts with symmetries, coordination is easier.
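A minimal sketch of this prize table, assuming unit prizes for optimal arms and zero otherwise (the 0-indexing and the helper name are illustrative):

```python
import numpy as np

def make_prize_table(m, n_contexts=4, n_arms=8):
    """'Arms m' variant: contexts 1..m keep two optimal arms (i and i+4), the rest keep only arm i."""
    prizes = np.zeros((n_contexts, n_arms))
    for i in range(n_contexts):                 # contexts and arms are 0-indexed here
        prizes[i, i] = 1.0
        if i < m:
            prizes[i, i + n_arms // 2] = 1.0    # the second, equally optimal arm
    return prizes

print(make_prize_table(m=2))   # contexts 0 and 1 have symmetries; contexts 2 and 3 do not
```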
Partner setup We use both partners obtained by self-play and hand-designed partners. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. For the hand-designed partners, the partners were designed to pick any one of the equally optimal actions and to stick to the same choice. The distribution over choices across the hand-designed partners was made to be uniform over the optimal actions. We used a set of 10 self-play partners and 4 hand-designed partners for training, and a set of the same size for testing.
Tweaked task For coordinating on a new task (Figure 10), we change the optimal action for contexts where there are no symmetries. As such, this is only possible when the number of symmetries m is less than 4. For the new task, for a context i where i > m, we modify the optimal action from arm i to arm i + 4. The contexts with symmetries remain unchanged. For the hand-designed partners, we ensured that they stuck to the same choices (conventions) as the initial task in the contexts with symmetries.
C.2 BLOCK PLACING
Task setup We used the task setup described in Section 6. In our experiments, the ego agent is P1 and the partner agents are P2.
Partner setup We use both partners obtained by self-play and hand-designed partners. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. For the hand-designed partners, we design the partners to interpret the first action (turn 1) of the ego agent based on a fixed permutation of the block positions. For example, a possible partner may always place the blue block top-right whenever the ego agent places the red block top-left on turn 1. We used a set of 6 self-play partners and 3 hand-designed partners for training, and a set of 6 self-play partners and 6 hand-designed partners for testing.
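A minimal sketch of such a hand-designed P2, assuming it commits to a single fixed permutation of the four cells (the cell labels, function name, and seeding scheme are illustrative):

```python
import random

CELLS = ['top-left', 'top-right', 'bottom-left', 'bottom-right']

def make_permutation_partner(seed):
    """P2 that interprets P1's first red-block placement through a fixed permutation of cells."""
    perm = dict(zip(CELLS, random.Random(seed).sample(CELLS, len(CELLS))))
    def partner(first_red_cell):
        return perm[first_red_cell]      # where this P2 will place the blue block
    return partner

p2 = make_permutation_partner(seed=0)
print(p2('top-left'))                    # the convention this particular partner sticks to
```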
Tweaked task For coordinating on a new task (Figure 10), we change the reward function by reversing the interpretation of the red and white cells in the goal grid. In other words, P1 has to place the red block on the working grid in one of the two positions that is a white cell on the goal grid. P2’s goal is unchanged; P2 still has to place the blue block where the blue block appears in the goal grid, so the relevant symmetries remain unchanged. For the hand-designed partners, we ensured that they stuck to the same permutations (conventions) as the initial task when interpreting the ego agent’s action on turn 1.
C.3 HANABI
Task setup We used the Hanabi Learning Environment package (Bard et al., 2020), with the following configuration: 1 color, 5 ranks, 2 players, hand size 2, 3 information tokens, and 3 life tokens. The maximum score is 5 points.
Partner setup We use only partners obtained by self-play. For the self-play partners, we train using PPO with the hyperparameters shown in Table 2. We used a set of 4 self-play partners for training, and a set of the same size for testing.
C.4 ADDITIONAL PLOTS
C.5 USER STUDY INDIVIDUAL RESULTS
Here we provide more detailed results from our user study. For each context and task we report the joint actions made by each pair of users on their first try and on their fifth try. | 1. What are the strengths and weaknesses of the proposed approach in adaptive human-AI collaboration?
2. How does the work advance our understanding of the interplay between partner-specific and task-general behavior?
3. What are the goals of the study, and how do they contribute to the field?
4. How does the work differ from previous investigations, and what sets it apart?
5. Can the authors provide more detail on what they mean by "task" and "context"?
6. How does the assumption of uncorrelated behavior across states impact the results, and is it a realistic assumption?
7. How do the experiments vary across individuals, and what insights can be gained from analyzing individual-level variation?
8. How does the work address the issue of recognizing tasks and contexts?
9. Are there any limitations or open questions left unanswered by the study?
10. How might the findings of the study be applied in practical scenarios, such as in human-robot interaction or multi-agent systems? | Review | Review
==== On the Critical Role of Conventions in Adaptive Human-AI Collaboration ====
Summary
This paper studies how artificial agents can be endowed with the human-like capability to, on the one hand, retain behaviors that best fit the task environment(s) when no equally good alternatives exist and, on the other, transfer arbitrary partner-specific conventions to other tasks. Addressing this challenge is important. Chiefly, it promises to improve the number of iterations needed to converge on optimal behavior for cases where analogous strategies were already converged on in a different task / with the same partner. This work proposes to achieve this by combining two separate learned representations. One for partner-specific behaviors. The other for the task itself. The latter module, reused across partners, ends up regulating behavior for cases where only one optimal action exists. When multiple optimal actions exist there is some slack for players to explore and converge on an arbitrary optimal action. In this sense, agents are endowed with the ability to reuse optimal action policies that will work across agents as well as to reuse optimal partner-specific policies when faced with a context with multiple solutions.
Reasons for score
I ultimately decided for rejection. This work has many merits: the topic is very important; it is of relevance to many fields; the approach is sound and the experiments interesting. However, I fail to see how much this work advances our understanding of the interplay between partner-specific and task general behavior. The main reason is that, while sound and straightforward to follow, it leaves open a lot of crucial questions (e.g., How do we recognize what a task/context is? Is the separation of the modules motivated? If so, how? What do we learn from the human experiment that we didn't before? See "Cons" below for details). I'm very happy to be convinced otherwise but I don't think that these concerns can be addressed in the present submission.
Pros
Interesting and important topic that is relevant to many fields
Technically sound approach and clear exposition.
Comprehensive experimentation. I really liked that the approach was put to the test across agents, human and artificial, and tasks (Contextual Bandit, Block Placing, Hanabi).
Cons
Goals: My main issue is that the goals of this study are unclear. This work would be greatly enhanced by clarifying what they are and what is ultimately achieved. I do not think it's true or fair to state that, as the authors put it in the abstract, "current approaches have not attempted to distinguish a task and the conventions used by a partner, and [that] more generally there has been little focus on leveraging conventions for adapting to new settings". To name just one of many examples, in the linguistics literature this distinction is standardly made and has long been studied (e.g., by Clark & Wilkes-Gibbs, 1986; Clark 1996; Hawkins et al. 2017, all cited in this work). This is also how semantics vs. pragmatics is defined in game-theoretic approaches to language use & dialog (e.g., Franke 2009, "Signal to Act"; or Brochhagen 2017, "Signalling under Uncertainty: Interpretative Alignment without a Common Prior"), as well as how it is generally understood within Gricean pragmatics. In other words, the distinction has been made and rests on a long philosophical tradition. I therefore don't think that taking this separation seriously alone is enough to motivate this investigation.
Further motivations: In a similar vein to the point above, there's a lack of detail on what sets this work apart from previous investigations. For instance, the critique that "[...] all these frameworks [...] quickly become intractable" is not very strong in light of the existence of approximations and solutions for the models mentioned (e.g., Monroe 2018's "Learning in the Rational Speech Acts Model", which also uses neural networks to model rule-dependent behavior).
Clarification: In what sense are Train and Test different tasks? (Section 5)
Analysis: The experiment in Section 5.1 averages across participants. I'd suggest looking at / reporting individual-level variation. These kinds of experiments usually vary a lot from individual to individual (see, e.g., Kanwal et al. 2017, "Zipf’s Law of Abbreviation and the Principle of Least Effort"). Population-level averages can therefore inadvertently hide or misrepresent what subjects are actually doing.
On page 4, the authors state: "we assume that behavior at different states are uncorrelated: a choice of action at one state tells us nothing about actions at other states." I take it that this assumption is made for simplicity's sake. However, isn't this also an integral part of human-like abilities? If I realize my partner's behavior accords --or does not accord-- with some conventions I had already established, I might as well behave accordingly. For instance, if I'm playing a game of chess with someone I might match their level of expertise (e.g., avoid castling with a complete beginner); and if I'm speaking with someone at a conference, I might change my phrasing based on what I believe their background to be.
Questions during rebuttal period
See cons above
Updated review
The authors have addressed my main concerns satisfactorily, in a clear and concise manner. I have updated my recommendation to reflect this.
Minor comments
Figure 6 has a different y-axis across plots. This is confusing at a quick glance, and makes comparison of the subplots quite hard.
ICLR | Title
A Variance Reduction Method for Neural-based Divergence Estimation
Abstract
A central problem in machine learning is the computation of similarity or closeness between two (data) distributions. The applications span from generative modelling via adversarial training, representation learning, and robustness in out-ofdistribution settings, to name a few. A palette of divergences, mutual information, and integral probability metrics are indispensable tools for measuring the “distance” between distributions and these are made tractable in high dimensional settings through variational representation formulas. Indeed, such formulas transform an estimation problem into an optimization problem. Unfortunately, the approximation of expectations that are inherent in variational formulas by statistical averages can be problematic due to high statistical variance, e.g., exponential for the Kullback-Leibler divergence and certain estimators. In this paper, we propose a new variance penalty term that acts directly on the variance of each component of the statistical estimator. The power of the variance penalty is controlled by a penalty coefficient which trades off bias and variance. We tested the proposed approach on several variational formulas and synthetic examples and showed that the overall error is decreased about an order of magnitude relative to the baseline statistical estimator. Impressive results are obtained for Rényi divergence with large order values due to the improved stability of the proposed estimator. Furthermore, in real biological datasets we are able to detect very rare sub-populations with a moderate sample size. Finally, we obtain improved (in terms of objective measures) disentangled representation of speech signals into text, speaker, and style components via variance-penalized mutual information minimization.
1 INTRODUCTION
Divergences such as Kullback-Leibler (KL) divergence, f -divergences, Hellinger divergence, αdivergences and Rényi divergences, which were initially developed in the fields of information theory and statistical physics, are indispensable tools in a growing number of machine learning applications. They have been used in adversarial training of generative models (Goodfellow et al., 2014; Nowozin et al., 2016), in the estimation of generalization errors (Esposito et al., 2021) and hypothesis testing (Broniatowski & Keziou, 2009), to name a few. Mutual information (MI), in particular, which is defined as the KL divergence between the joint distribution of a pair of variables and their marginals (and can be generalized to divergences other than KL), plays a crucial role in Bayesian networks and (conditional) independence (Cheng et al., 2002), self-supervised learning via contrastive losses (van den Oord et al., 2018; Le-Khac et al., 2020) as well as in representation learning (Hjelm et al., 2019; Chen et al., 2016).
Classical divergence estimators perform reasonably well for low dimensional cases, however they scale poorly to large, high dimensional datasets which are typically encountered in modern machine learning. The most compelling estimation approach of a divergence is via the optimization of a lower variational bound parametrized by neural networks. These lower bounds, which are likelihood-free approximations, are maximized in order to compute the divergence value at the optimizer. Wellknown variational representations are the Legendre transformation of an f -divergence (Broniatowski & Keziou, 2006; Nguyen et al., 2010) as well as the Donsker-Varadhan (DV) variational formula (Donsker & Varadhan, 1983) for KL divergence and its extension to Rényi divergence (Birrell et al., 2020b). Their tractability stems from their objective functionals, which are computed from expected values and approximated using statistical averages from the available or generated samples.
Despite the scalability and tractability, the estimation of a divergence based on variational formulas is a notoriously difficult problem. One challenge stems from the potentially high bias, since any approximation for the worst case scenario requires an exponential number of samples in order to attain the true divergence value (McAllester & Stratos, 2020). Additionally, the statistical variance, which scales exponentially with respect to the divergence's value for certain variational estimators (Song & Ermon, 2019), is often prohibitively high. Focusing on MI in particular, there are several further lower bounds (Barber & Agakov, 2003; Belghazi et al., 2018; van den Oord et al., 2018; Poole et al., 2019; Guo et al., 2021) and a few upper bounds (Cheng et al., 2020; Poole et al., 2019) which aim to provide more reliable estimates of MI in the low sample size regime. However, the majority of these MI estimators are not transferable to the general estimation of divergences and frequently produce instabilities during training, which are further magnified by the small batch and/or sample size.
In this paper, we propose to reduce a divergence estimator’s variance via an explicit variance penalty (VP) which is added to the objective functional. Our contributions are summarized as follows:
• We present a novel variance reduction penalty for f -divergence and expand it via the delta method to the nonlinear setting, including the DV formula for KL divergence as well as the variational formula for the Rényi divergences. The proposed VP is able to flexibly trade off bias and variance.
• We present numerical evidence on synthetic datasets that the proposed approach improves both mean squared error (MSE) and median absolute error (MedAE) in a range of sample sizes and types of divergences. Furthermore, we implemented the proposed VP in several other lower and upper bounds of MI, showing that our variance reduction approach is not restricted to particular variational formulas but it is generic and applicable to the majority of existing variational representations.
• When applied to real datasets, we demonstrate the ability of the proposed approach to reduce the variance of the estimated Rényi divergence, thus enabling the detection of rare biological sub-populations which are otherwise difficult to identify. Interestingly, the baseline estimator is unstable when the order value is above one, but it becomes stable when the VP is added.
• We also applied the VP to the disentangled representation learning of speech into its text, speaker, and style components. Results on objective evaluation metrics showed that the addition of the VP generally improves the training performance, as much as 18% relative to the baseline systems.
1.1 RELATED WORK
There are several general-purpose variance reduction techniques in Monte Carlo stochastic sampling, with the most popular approaches being antithetic sampling or more broadly coupling methods, control of variates and importance sampling (Robert & Casella, 2005; Glasserman, 2004; Srinivasan, 2013). These methods have not been explicitly applied for the variational divergence estimation problem. We speculate that either they are not applicable due to the unavailability of analytical probability density formulas or they are inefficient (e.g., the control of variates approach requires a second estimator and potentially a second parametric model in order to be applied).
Another way to reduce the variance is to restrict the function space to smoother and/or more controlled test (or critic) functions, balancing again between bias and variance. For instance, the restriction to Lipschitz continuous functions has the potential to reduce the variance since there exist favorable concentration inequality results for the Lipschitz space (Wainwright, 2019). In the GAN literature, Wasserstein GAN (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018) impose Lipschitz continuity, which resulted in significant gains in terms of training stability. Similarly, the restriction of test functions to an appropriately designed reproducing kernel Hilbert space could reduce the variance (Sreekar et al., 2020). Such approaches can be combined with our proposed variance penalties, as our formulation allows for general test-function spaces. However, we do not focus on this point here.
Given the importance of MI, several estimators aim towards improved statistical properties. Lower bounds such as MINE (Belghazi et al., 2018), which uses the DV variational formula with an expo-
nential moving average, NWJ estimator (Nguyen et al., 2010) and BA estimator (Barber & Agakov, 2003) as well as upper bounds such as CLUB (Cheng et al., 2020) still have high variance. InfoNCE (van den Oord et al., 2018) is one of the few MI estimators that has low variance, but at the cost of either high bias or high computational cost due to the need for many negative samples and thus large batch size. Poole et al. (2019) and Guo et al. (2021) aim to clarify the relationships and trade-offs between those variational bounds. A different approach to reducing variance is by appropriately working on the gradients of the objective function (Wen et al., 2020; 2021).
Finally, we discuss the approach of truncating the test function inside a bounded region as proposed in (Song & Ermon, 2019). The determination of the truncation threshold is quite difficult since it requires an a priori understanding of the log-likelihood ratio. Moreover, a high truncation threshold will not affect the estimation since a high threshold implies no real benefit in terms of variance reduction. On the other hand, a low threshold will result in large bias. Overall, using a high truncation threshold in order to avoid extreme values is a good practice even though it will have a limited impact on variance reduction.
2 BACKGROUND ON VARIATIONAL FORMULAS FOR RÉNYI AND f -DIVERGENCES.
While our variance reduction method can be applied to any divergence that possesses a variational formula, here our focus will be on the Rényi and f -divergences, including the KL divergence. For Rényi divergences an appropriate objective functional can be constructed from a difference of cumulant generating functions (Birrell et al., 2020b)
R_α(Q‖P) = sup_{g∈M_b(Ω)} { (1/(α−1)) log E_Q[e^{(α−1)g}] − (1/α) log E_P[e^{αg}] },   α ≠ 0, 1.   (1)
Here Q and P are probability distributions on the set Ω, E_Q and E_P denote the expectations with respect to Q and P respectively, and M_b(Ω) is the space of bounded measurable real-valued functions on Ω. For f-divergences, f being a lower semicontinuous convex function with f(1) = 0, one has the well-known Legendre-transform variational formula (Broniatowski & Keziou, 2006; Nguyen et al., 2010)

D_f(Q‖P) = sup_{g∈M_b(Ω)} { E_Q[g] − E_P[f*(g)] },   (2)

where f*(y) = sup_{x∈ℝ} { yx − f(x) } is the Legendre transform of f. Here and in the following, the function of g that is being optimized will be called the objective functional. Equation (2) can be generalized to the (f, Γ)-divergences (Birrell et al., 2020a), where Γ ⊂ M_b(Ω) is a restricted test-function space:

D^Γ_f(Q‖P) = sup_{g∈Γ} { E_Q[g] − Λ^P_f[g] },   (3)

Λ^P_f[g] = inf_{ν∈ℝ} { ν + E_P[f*(g − ν)] }.   (4)
In particular, if f_{KL}(x) = x log(x) corresponds to the KL divergence, then

Λ^P_{f_{KL}}[g] = log(E_P[exp(g)]) ≡ Λ^P[g]   (5)

is the classical cumulant generating function, and equation (3) (with Γ = M_b(Ω)) becomes the Donsker-Varadhan variational formula (Dupuis & Ellis, 1997, Appendix C.2)

D_{KL}(Q‖P) = sup_{g∈M_b(Ω)} { E_Q[g] − log E_P[e^g] }.   (6)
For general f, we will often write equation (3) as

D^Γ_f(Q‖P) = sup_{g∈Γ, ν∈ℝ} { E_Q[g − ν] − E_P[f*(g − ν)] },   (7)

and if Γ is closed under the shifts g ↦ g − ν, ν ∈ ℝ, then we can write it simply as

D^Γ_f(Q‖P) = sup_{g∈Γ} { E_Q[g] − E_P[f*(g)] }.   (8)
In particular, if Γ = M_b(Ω) then D^Γ_f = D_f. The generalizations of Rényi and KL divergence obtained by using a restricted space Γ in place of M_b(Ω) in equation (1) or equation (6) will be denoted by R^Γ_α and D^Γ_{KL}, respectively.
3 STATISTICAL ESTIMATORS AND VARIANCE REDUCTION
Variational representations of divergences are especially useful for creating statistical estimators in a data-driven setting; a naive estimator is obtained by simply replacing expectations with the corresponding statistical averages in any of the equations (1), (2), (3), etc. More formally, the naive estimators can be written as D^Γ_f(Q_n‖P_n), R^Γ_α(Q_n‖P_n), etc., where Γ is some parameterized space of functions (e.g., a neural network), Q_n and P_n are the n-sample empirical measures from Q and P respectively (i.e., E_{P_n}[g] = (1/n) Σ_{j=1}^{n} g(X_j), where X_j are i.i.d. samples from P, and similarly for E_{Q_n}; we also assume that the samples from Q and P are independent of one another), and the divergences are expressed in terms of the variational formulas from Section 2. However, in practice these naive methods often suffer from high variance (Song & Ermon, 2019; Birrell et al., 2020b). We address this via variance-penalized divergences, which are constructed by introducing a variance penalty into the objective functional of the variational representation, e.g.,

D^λ_f(Q‖P) ≡ sup_{g∈M_b(Ω)} { E_Q[g] − E_P[f*(g)] − λ V[g; Q, P] },   (9)

where the variance penalty, λV, is proportional to the variance of E_{Q_n}[g] − E_{P_n}[f*(g)] with strength λ > 0. Using this, we construct the following divergence estimator

sup_η { E_{Q_n}[g_η] − E_{P_n}[f*(g_η)] − λ V[g_η; Q_n, P_n] },   (10)
where g_η is a neural network with parameters η. Similar variance penalties can be derived for other divergences with variational representations.
3.1 VARIANCE PENALTY
In this subsection we provide details on the variance penalty for (f, Γ)-divergences, the KL divergence, and Rényi divergences. The same framework can be applied to other divergences with a variational representation.
To introduce the variance penalty, first consider the (f, Γ)-divergence representation equation (8). Our goal is to penalize g's for which E_{Q_n}[g] or E_{P_n}[f*(g)] have large variance, hence we introduce a penalty term proportional to (Var_Q denotes variance with respect to Q, etc.)

Var[ E_{Q_n}[g] + E_{P_n}[f*(g)] ] = (1/n) ( Var_Q[g] + Var_P[f*(g)] ).   (11)
Specifically, for λ > 0 we define the variance-penalized (f, Γ)-divergence

D^{Γ,λ}_f(Q‖P) ≡ sup_{g∈Γ, ν∈ℝ} { E_Q[g − ν] − E_P[f*(g − ν)] − λ ( Var_Q[g − ν] + Var_P[f*(g − ν)] ) }.   (12)
As noted above, if Γ is invariant under constant shifts then the optimization over ν can be omitted. A similar result to equation (12) can be derived for any objective functional that is a linear combination of expectations, e.g., integral probability metrics (Müller, 1997; Sriperumbudur et al., 2012) such as the Wasserstein metric.
For nonlinear objective-functional terms of the generic form G(E_P[h(g)]), such as appear in equation (1) and equation (6), we cannot compute the variance of the corresponding statistical estimator at finite n, but we can use the delta method to obtain the asymptotic variance

lim_{n→∞} n Var[ G(E_{P_n}[h(g)]) ] = ( G′(E_P[h(g)]) )^2 Var_P[h(g)].   (13)
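As a quick numerical sanity check of this approximation, the sketch below compares both sides of equation (13) by simulation for the illustrative choices G = log, h(g) = e^g with g the identity function, and P a standard Gaussian; all of these choices and the sample sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 5000
x = rng.standard_normal((reps, n))                      # reps independent n-samples from P = N(0, 1)

G_of_mean = np.log(np.mean(np.exp(x), axis=1))          # G(E_{P_n}[h(g)]) for each replicate
lhs = n * np.var(G_of_mean)                             # n Var[G(E_{P_n}[h(g)])]
rhs = np.var(np.exp(x)) / np.mean(np.exp(x)) ** 2       # (G'(E_P[h]))^2 Var_P[h], with G'(x) = 1/x

print(lhs, rhs)   # the two estimates should roughly agree (both near e - 1 in this example)
```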
Thus, we propose for the nonlinear case to use the above asymptotic variance as a penalty and obtain the following variance-penalized KL and Rényi divergence variational formulas:
D^{Γ,λ}_{KL}(Q‖P) ≡ sup_{g∈Γ} { E_Q[g] − log E_P[e^g] − λ ( Var_Q[g] + Var_P[e^g]/(E_P[e^g])^2 ) },   (14)

R^{Γ,λ}_α(Q‖P) ≡ sup_{g∈Γ} { (1/(α−1)) log E_Q[e^{(α−1)g}] − (1/α) log E_P[e^{αg}]
    − λ ( (1/(α−1)^2) Var_Q[e^{(α−1)g}]/(E_Q[e^{(α−1)g}])^2 + (1/α^2) Var_P[e^{αg}]/(E_P[e^{αg}])^2 ) }.   (15)
Remark 1. Both equation (11) and equation (13) suggest that the statistical estimators for the above penalized divergences should use a variance penalty strength that decays with the sample size, λ = λ_0/n, though other forms of n-dependence may be useful in practice.
Though the variance penalty introduces bias, as λ → 0 the penalized divergence converges to the corresponding non-penalized divergence, as made precise by the following theorem.

Theorem 2. Let Γ ⊂ M_b(Ω). We have the following convergence results:

lim_{λ→0+} D^{Γ,λ}_{KL}(Q‖P) = D^Γ_{KL}(Q‖P),   (16)

lim_{λ→0+} R^{Γ,λ}_α(Q‖P) = R^Γ_α(Q‖P),   (17)

and if f*(y) < ∞ for all y ∈ ℝ then

lim_{λ→0+} D^{Γ,λ}_f(Q‖P) = D^Γ_f(Q‖P).   (18)

Moreover, under fairly general assumptions it holds that

lim_{λ→∞} D^{Γ,λ}_{KL}(Q‖P) = lim_{λ→∞} R^{Γ,λ}_α(Q‖P) = lim_{λ→∞} D^{Γ,λ}_f(Q‖P) = 0.   (19)
Remark 3. Note that the corresponding statistical estimators, D^{Γ,λ}_f(Q_n‖P_n), etc., have additional bias due to the supremum over g. We present partial results on bias bounds in Appendix D.
The proof of Theorem 2 is given in Appendix B for the zero limit and Theorem 9 for the infinity limit. The same proof techniques can be applied to other divergences with a variational characterization.
Finally, for non-zero λ the penalized divergences (12), (14), (15) retain the divergence property and are therefore appropriate for quantifying the "distance" between probability distributions:

Theorem 4. Under fairly general assumptions on f and Γ (see Appendix B for details), and letting D^{Γ,λ} denote any of D^{Γ,λ}_f, D^{Γ,λ}_{KL}, or R^{Γ,λ}_α, we have D^{Γ,λ}(Q‖P) ≥ 0, and D^{Γ,λ}(Q‖P) = 0 if and only if Q = P.
The proof of Theorem 4 can be found in Appendix B.
3.2 VARIANCE-REDUCED DIVERGENCE ESTIMATION ALGORITHM
We now propose the following divergence neural estimation (DNE) methods with variance penalty, generalizing equations (9)-(10):

(DNE-VPλ)   sup_η { H[g_η; Q_n, P_n] − λ V[g_η; Q_n, P_n] }.   (20)

We will compare the above method to the non-penalized estimator (i.e., with λ = 0):

(DNE)   sup_η H[g_η; Q_n, P_n].   (21)
In the above, the test-function space is a neural network Γ = {g_η, η ∈ E} with parameters η, and H denotes the objective functional of the divergence, e.g., for the Rényi divergences (1)

H_α[g; Q, P] = (1/(α−1)) log E_Q[e^{(α−1)g}] − (1/α) log E_P[e^{αg}],   α ≠ 0, 1,   (22)

and for f-divergences (2)

H_f[g; Q, P] = E_Q[g] − E_P[f*(g)].   (23)
Finally, V is the variance penalty corresponding to the chosen divergence (see Section 3.1), e.g., for Rényi divergences

V_α[g; Q_n, P_n] = (1/(α−1)^2) Var_{Q_n}[e^{(α−1)g}]/(E_{Q_n}[e^{(α−1)g}])^2 + (1/α^2) Var_{P_n}[e^{αg}]/(E_{P_n}[e^{αg}])^2,   α ≠ 0, 1,   (24)

and for f-divergences

V_f[g; Q_n, P_n] = Var_{Q_n}[g] + Var_{P_n}[f*(g)].   (25)
We solve equation (20) via the Adam algorithm (Kingma & Ba, 2014), a stochastic gradient descent method.
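A minimal PyTorch sketch of one DNE-VPλ training loop for the Rényi case is given below, combining the statistical estimates of H_α (equation (22)) and V_α (equation (24)). The network size, learning rate, batch size, and the values of α, λ, and the Gaussian data are placeholders rather than the settings used in our experiments.

```python
import torch
import torch.nn as nn

def renyi_objective_and_penalty(g_q, g_p, alpha):
    """Statistical estimates of H_alpha (eq. 22) and V_alpha (eq. 24) from network outputs on Q- and P-samples."""
    eq = torch.exp((alpha - 1.0) * g_q)
    ep = torch.exp(alpha * g_p)
    H = torch.log(eq.mean()) / (alpha - 1.0) - torch.log(ep.mean()) / alpha
    V = eq.var() / ((alpha - 1.0) ** 2 * eq.mean() ** 2) + ep.var() / (alpha ** 2 * ep.mean() ** 2)
    return H, V

# Illustrative test-function network and data; all hyper-parameters here are placeholders.
g_eta = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(g_eta.parameters(), lr=1e-3)
alpha, lam = 2.0, 0.1

for step in range(2000):
    x_q = torch.randn(512, 1)                 # samples from Q (standard Gaussian)
    x_p = 1.5 * torch.randn(512, 1)           # samples from P (wider Gaussian)
    H, V = renyi_objective_and_penalty(g_eta(x_q), g_eta(x_p), alpha)
    loss = -(H - lam * V)                     # maximize the penalized objective of eq. (20)
    opt.zero_grad()
    loss.backward()
    opt.step()
```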
4 RESULTS ON SYNTHETIC DATASETS
Figure 1 presents the statistical estimation of the Rényi divergence between two one-dimensional Gaussians which both have zero mean but different variance values. The order of the Rényi divergence, α, controls how much weight to put on the tails of the distributions; thus the estimator can become very sensitive to the few samples from the tails. The same conclusion can be deduced from the variational formula (i.e., equation (1), where α multiplies the exponentials' argument). Therefore, a larger α value implies larger statistical variance. Indeed, high estimation variance is observed with DNE (upper leftmost panel of Figure 1) despite the fact that we applied truncation as proposed by Song & Ermon (2019) with the truncation threshold set to 1. In contrast, the DNE-VPλ estimator with λ = 0.1 greatly reduces the statistical variance even when α is large (lower leftmost panel). For fairness, we imposed the same truncation operation on the output of DNE-VPλ. We report an 80% reduction of variance for α = 2, which becomes 99% for α = 10.
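For reference, the ground-truth values in such an experiment admit a closed form; the sketch below assumes the standard expression for the Rényi divergence between univariate Gaussians (finite only when the mixed variance var_a is positive), and the particular variances are placeholders rather than those of Figure 1.

```python
import numpy as np

def renyi_gaussians(alpha, var_q, var_p, mu_q=0.0, mu_p=0.0):
    """Closed-form R_alpha(Q || P) for Q = N(mu_q, var_q) and P = N(mu_p, var_p)."""
    var_a = (1.0 - alpha) * var_q + alpha * var_p
    assert var_a > 0, "the divergence of this order is not finite for these variances"
    mean_term = alpha * (mu_q - mu_p) ** 2 / (2.0 * var_a)
    log_term = np.log(var_a / (var_q ** (1.0 - alpha) * var_p ** alpha)) / (2.0 * (1.0 - alpha))
    return mean_term + log_term

for alpha in (0.5, 2.0, 10.0):
    print(alpha, renyi_gaussians(alpha, var_q=1.0, var_p=2.0))
```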
The proposed approach introduces an additional hyper-parameter, λ, which controls the strength of the VP. Our theory suggests that λ should depend on the sample size (and perhaps also on the other parameters); therefore, we perform two sets of experiments. In the first experiment, we explore the range of optimal values for λ in terms of MedAE (recall that MedAE, the median absolute error, is a more robust-to-outliers metric). As is evident from the middle panels of Figure 1, λ-values in the vicinity of 0.1 are a reasonable compromise between variance and bias. In the second experiment, we demonstrate the performance in terms of MedAE as a function of the sample size, N. As suggested in Remark 1, monotone performance is obtained when λ is inversely proportional to N (blue dashed line in the rightmost upper panel of Figure 1).
Our second synthetic example constitutes the estimation of MI using various approaches with and without VP. Here, we let Q be a zero-mean multivariate correlated Gaussian random vector of dimension d. We impose element-wise correlation, i.e., corr(x_i, x_{d/2+j}) = δ_{i,j} ρ for the samples x ∼ Q, where i, j = 1, . . . , d/2 and δ_{i,j} is Kronecker's delta. With P we denote the product of the marginals, which in this case is simply a zero-mean standardized multivariate Gaussian. Figure 2 presents the estimated MI per training step. We consider the Rényi-based MI with α = 0.5 as well as the standard MI using the DV variational formula. Notice that these two variants result in different true values (black lines in Figure 2). The plotted results demonstrate the successful reduction of variance when VP is added to the objective functional. Interestingly, the extension of VP to the InfoNCE and CLUB estimators (second row of panels in Figure 2) implies that our approach can be applied to any MI estimator, thus offering a general variance reduction framework. Bias, variance and MSE plots as well as several more experiments can be found in Appendix F.
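A small sketch of this setup is given below: a sampler for the correlated Gaussian Q and the closed-form KL-based MI between the two halves of the vector. The dimension and correlation are placeholders; as noted above, the Rényi-based variant has a different true value.

```python
import numpy as np

def sample_correlated(n, d, rho, rng):
    """Draw x ~ Q with corr(x_i, x_{d/2+i}) = rho and all other cross-half pairs uncorrelated."""
    half = d // 2
    x1 = rng.standard_normal((n, half))
    x2 = rho * x1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal((n, half))
    return np.concatenate([x1, x2], axis=1)

def true_mi(d, rho):
    """KL-based MI between the two halves: d/2 independent pairs, each contributing -0.5*log(1 - rho^2)."""
    return -(d // 2) * 0.5 * np.log(1.0 - rho ** 2)

rng = np.random.default_rng(0)
x = sample_correlated(n=4096, d=20, rho=0.8, rng=rng)   # d and rho are placeholders
print(true_mi(d=20, rho=0.8))
```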
5 REAL DATA APPLICATIONS
5.1 DETECTING RARE BIOLOGICAL SUB-POPULATIONS
Using the dataset from Levine et al. (2015), we test the efficacy of DNE-VPλ in discriminating cell populations which are contaminated with a rare sub-population with distinguishable statistical properties. Specifically, we consider single-cell mass cytometry measurements on 16 bone marrow protein markers² (i.e., d = 16) coming from healthy individuals and from individuals with acute myeloid leukemia. For each run we created three subsets of healthy samples with sample size N = 20K, which we denote by P, and one dataset as a mixture of 99% healthy and 1% diseased samples, which is denoted by Q. Notice that the actual number of diseased samples is only 200; thus it is considered a rare sub-population.
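A sketch of how such contaminated subsets could be assembled from the marker matrices; the function name, seeding, and uniform subsampling are illustrative choices rather than the exact preprocessing used.

```python
import numpy as np

def make_mixture(healthy, diseased, n=20000, frac=0.01, seed=0):
    """Build Q: a mixture of 99% healthy and 1% diseased cells (rows are cells, columns the 16 markers)."""
    rng = np.random.default_rng(seed)
    n_dis = int(frac * n)                                        # 200 diseased cells for n = 20K
    rows_h = rng.choice(len(healthy), n - n_dis, replace=False)
    rows_d = rng.choice(len(diseased), n_dis, replace=False)
    q = np.concatenate([healthy[rows_h], diseased[rows_d]], axis=0)
    return q[rng.permutation(len(q))]
```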
For the Rényi divergence with α = 0.5 (left panels in Figure 3), both DNE and DNE-VPλ are stable. Despite the improvement in the separation of the two histograms, the observed variance reduction of DNE-VPλ is minimal and not enough to discriminate between the healthy distribution and the one contaminated with 1% diseased samples. When considering the Rényi divergence with α = 1.1, we observe that DNE fails to produce stable estimates. In contrast, DNE-VPλ always computes stable estimates. Additionally, the two histograms are satisfactorily separated, implying that larger values of α are crucial, provided there is a way to handle the statistical variance. For completeness, Table 1 reports the first and second order statistics of the histograms shown in Figure 3.

² Data was accessed from https://community.cytobank.org/cytobank/experiments/46098/illustrations/121588
5.2 DISENTANGLED REPRESENTATION LEARNING IN SPEECH SYNTHESIS
An important application of MI is disentangled representation learning. In the context of representation disentanglement, the extraction of meaningful latent features for high-dimensional data is challenging, especially when explicit knowledge needs to be distilled into interpretable representations. One popular approach to enforce representation disentanglement is via MI minimization. Moreover, a superior disentanglement will allow a greater degree of interpretability and controllability, especially for generative models that maintain high production capacity. In this section, we employ the proposed DNE-VPλ estimator for MI estimation in order to learn disentangled representations, particularly in the context of speech synthesis and analysis.
A universal text-to-speech synthesizer can generate speech from text with a speaker factor and speaking style similar to a reference signal. Previous works aimed to encode the information from reference speech into fixed-length style and speaker embeddings using trainable encoders (Wang et al., 2018; Tjandra et al., 2020; Chien et al., 2021; Tan et al., 2021). The major challenges for such speech synthesizers are controllability and generalisability, especially when trying to generalize the models with multiple speakers and multiple styles. During training, content information is leaked into the style embeddings ("content leakage") and speaker information into the style embeddings ("style leakage"). Thus, at inference, when the reference speech has different content from the input text, the decoder expects the content from the style vector, ignoring some part of the content text. Moreover, speaker information could be expected from the style encoder, leading to a completely different speaker attribute. To alleviate that, Paul et al. (2021) suggested replacing the KL-based MI with Rényi-based MI and minimizing the Rényi divergence between the joint distribution and the product of marginals for the content-style and style-speaker pairs. However, reliable estimation of the Rényi divergence was problematic due to high statistical variance. Taking advantage of the proposed variance reduction technique, we employ a VP term in the loss function, which is denoted as DNE-VPλ (R_α). By doing so, the content, style, and speaker spaces become representative and (ideally) independent of each other. We introduce two variations of this framework: the sum of three Rényi divergences, DNE(R_{α=0} + R_{α=0.5} + R_{α=1}) (i.e., the sum of the corresponding objective functionals), and DNE(R_{α=0.5}). We tested several different λ values, aiming to reduce the statistical variance of the adversarial component. Notice that larger λ values were helpful in this application.
We evaluate the performance of the disentanglement strategies using three performance scores computed from 100 random samples, shown in Table 2. Mel-cepstral distortion (MCD) measures the spectral distance between the synthesized and reference mel-spectrum features. Root mean squared error (RMSE) evaluates the similarity in F0 modeling between reference and synthesized speech. Lastly, the content preservation criterion is evaluated by the word error rate (WER). During inference, we evaluate the performance under two conditions: 'no shuffle' and 'shuffle'. 'No shuffle' feeds the same reference speech into the style and speaker encoders together with its corresponding text to predict the speech features, whereas 'shuffle' feeds random speech. We observe that the proposed DNE-VPλ variants outperform the baseline approaches without VP in terms of all evaluation metrics. Our proposed systems greatly reduced content leakage, improving the word error rate by approximately 5-18% relative to the baseline systems. Furthermore, the RMSE-F0 and MCD scores show that the disentanglement module during training assists the TTS system in achieving a more accurate rendering of prosodic patterns, as well as in synthesizing the proper speech content for the corresponding text without any significant leakage issues. | 1. What is the focus of the paper regarding variance regularization?
2. What are the strengths and weaknesses of the proposed approach?
3. Do you have any concerns regarding the experimental setup and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific questions or aspects that the reviewer would like to know more about or clarify? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a novel variance regularization to reduce the statistical variance of variational-bound-based estimators for f-divergences and the Rényi divergence. The proposed regularization is based on the well-known delta method, which provides the asymptotic variance and depends on the specific form of the variational bound under consideration. The approach is tested on different synthetic data sets, for which the numerical results show that the variance is decreased relative to the baseline estimator. In particular, this effect appears to be more visible for high orders of the Rényi divergence. Finally, applications are provided for real biological datasets and for the disentanglement of speech signals into text, speaker and style components.
Review
The paper presents an effective method to reduce the variance of divergence estimators based on variational bounds. In summary, the strengths and weaknesses of the paper are listed below:
** strengths **
The investigated problem is relevant, the paper is well written, and the contributions are clearly presented.
For the considered synthetic scenarios, the proposed approach improves the mean square error, which seems to validate the underlying hypothesis of the regularization term introduced to correct the variance.
Although the main idea of exploiting the asymptotic expression of the variance (based on the delta method) to introduce a penalty term that controls the variance is not surprising, the application within the framework of estimating divergences based on variational bounds and the effectiveness of the method seem to be interesting.
** weaknesses **
The main issue with this work lies in the fact that the synthetic experiments provided in Section 4 are too simplistic to be able to validate the proposed method. More precisely, (i) the regimes under which the estimators are investigated are rather optimistic, since a large number of samples is considered (e.g., 512K); (ii) there are no experiments on multimodal (mixture) distributions or distributions other than Gaussians (e.g., there are many possible examples where the underlying information measure does not change but the distribution of the samples can be very different after introducing complex transformations); (iii) there are no studies analysing the effects of the batch size or the effects of the underlying optimisers used to obtain the variational lower bounds (e.g., what about the use of batch normalisation?); (iv) there is no investigation of the variance reduction on realistic high-dimensional datasets (e.g., SVHN, ImageNet, etc.).
The empirical results are compared with some previous works but not with state-of-the-art estimators. For example, CLUB has been improved in [A]; see the references therein for other missing methods and possible additional non-Gaussian scenarios to study. Indeed, this estimator is also applied to the problem of disentanglement, and thus it is relevant to compare with the results obtained in Section 5.2. A comparison with SOTA methods is fundamentally important to validate the effectiveness of the proposed method. Which optimiser is used for the experiments in Fig. 2?
The resulting estimator after the variance regularization should also be compared to the work in [B], which studies, for a particular case, the asymptotic variance of f-divergence estimators. What can be said about the variance of the proposed estimator within the framework of [B]?
It is claimed that "the majority of the existent estimators for MI are not transferable to the general estimation of the divergences and frequently produce instabilities during training". This claim is not valid, or at least not clear enough, since the estimation methods that are considered in this paper follow the same ideas (i.e., they are based on variational bounds).
Similar comments apply to the empirical results presented in Sections 5.1 and 5.2. For example, there is no comparison of the variance reduction with the reference [C], which is particularly relevant for the framework investigated in Section 5.2.
** References **
[A] A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations, by P. Colombo et al., ACL 2020.
[B] Practical and Consistent Estimation of f-Divergences, by Paul K. Rubenstein et al., NeurIPS 2019.
[C] Adversarial Disentanglement of Speaker Representation for Attribute-Driven Privacy Preservation, by Paul-Gauthier Noé et al., Interspeech 2021.
ICLR | Title
A Variance Reduction Method for Neural-based Divergence Estimation
Abstract
A central problem in machine learning is the computation of similarity or closeness between two (data) distributions. The applications span from generative modelling via adversarial training, representation learning, and robustness in out-ofdistribution settings, to name a few. A palette of divergences, mutual information, and integral probability metrics are indispensable tools for measuring the “distance” between distributions and these are made tractable in high dimensional settings through variational representation formulas. Indeed, such formulas transform an estimation problem into an optimization problem. Unfortunately, the approximation of expectations that are inherent in variational formulas by statistical averages can be problematic due to high statistical variance, e.g., exponential for the Kullback-Leibler divergence and certain estimators. In this paper, we propose a new variance penalty term that acts directly on the variance of each component of the statistical estimator. The power of the variance penalty is controlled by a penalty coefficient which trades off bias and variance. We tested the proposed approach on several variational formulas and synthetic examples and showed that the overall error is decreased about an order of magnitude relative to the baseline statistical estimator. Impressive results are obtained for Rényi divergence with large order values due to the improved stability of the proposed estimator. Furthermore, in real biological datasets we are able to detect very rare sub-populations with a moderate sample size. Finally, we obtain improved (in terms of objective measures) disentangled representation of speech signals into text, speaker, and style components via variance-penalized mutual information minimization.
1 INTRODUCTION
Divergences such as Kullback-Leibler (KL) divergence, f -divergences, Hellinger divergence, αdivergences and Rényi divergences, which were initially developed in the fields of information theory and statistical physics, are indispensable tools in a growing number of machine learning applications. They have been used in adversarial training of generative models (Goodfellow et al., 2014; Nowozin et al., 2016), in the estimation of generalization errors (Esposito et al., 2021) and hypothesis testing (Broniatowski & Keziou, 2009), to name a few. Mutual information (MI), in particular, which is defined as the KL divergence between the joint distribution of a pair of variables and their marginals (and can be generalized to divergences other than KL), plays a crucial role in Bayesian networks and (conditional) independence (Cheng et al., 2002), self-supervised learning via contrastive losses (van den Oord et al., 2018; Le-Khac et al., 2020) as well as in representation learning (Hjelm et al., 2019; Chen et al., 2016).
Classical divergence estimators perform reasonably well for low dimensional cases, however they scale poorly to large, high dimensional datasets which are typically encountered in modern machine learning. The most compelling estimation approach of a divergence is via the optimization of a lower variational bound parametrized by neural networks. These lower bounds, which are likelihood-free approximations, are maximized in order to compute the divergence value at the optimizer. Wellknown variational representations are the Legendre transformation of an f -divergence (Broniatowski & Keziou, 2006; Nguyen et al., 2010) as well as the Donsker-Varadhan (DV) variational formula (Donsker & Varadhan, 1983) for KL divergence and its extension to Rényi divergence (Birrell et al., 2020b). Their tractability stems from their objective functionals, which are computed from expected values and approximated using statistical averages from the available or generated samples.
Despite the scalability and tractability, the estimation of a divergence based on variational formulas is a notoriously difficult problem. One challenge stems from the potentially high bias, since any approximation for the worst case scenario requires an exponential number of samples in order to attain the true divergence value (McAllester & Stratos, 2020). Additionally, the statistical variance, which scales exponentially with respect to the divergence's value for certain variational estimators (Song & Ermon, 2019), is often prohibitively high. Focusing on MI in particular, there are several further lower bounds (Barber & Agakov, 2003; Belghazi et al., 2018; van den Oord et al., 2018; Poole et al., 2019; Guo et al., 2021) and a few upper bounds (Cheng et al., 2020; Poole et al., 2019) which aim to provide more reliable estimates of MI in the low sample size regime. However, the majority of these MI estimators are not transferable to the general estimation of divergences and frequently produce instabilities during training, which are further magnified by the small batch and/or sample size.
In this paper, we propose to reduce a divergence estimator’s variance via an explicit variance penalty (VP) which is added to the objective functional. Our contributions are summarized as follows:
• We present a novel variance reduction penalty for f -divergence and expand it via the delta method to the nonlinear setting, including the DV formula for KL divergence as well as the variational formula for the Rényi divergences. The proposed VP is able to flexibly trade off bias and variance.
• We present numerical evidence on synthetic datasets that the proposed approach improves both mean squared error (MSE) and median absolute error (MedAE) in a range of sample sizes and types of divergences. Furthermore, we implemented the proposed VP in several other lower and upper bounds of MI, showing that our variance reduction approach is not restricted to particular variational formulas but it is generic and applicable to the majority of existing variational representations.
• When applied to real datasets, we demonstrate the ability of the proposed approach to reduce the variance of the estimated Rényi divergence, thus enabling the detection of rare biological sub-populations which are otherwise difficult to identify. Interestingly, the baseline estimator is unstable when the order value is above one, but it becomes stable when the VP is added.
• We also applied the VP to the disentangled representation learning of speech into its text, speaker, and style components. Results on objective evaluation metrics showed that the addition of the VP generally improves the training performance, as much as 18% relative to the baseline systems.
1.1 RELATED WORK
There are several general-purpose variance reduction techniques in Monte Carlo stochastic sampling, with the most popular approaches being antithetic sampling or more broadly coupling methods, control of variates and importance sampling (Robert & Casella, 2005; Glasserman, 2004; Srinivasan, 2013). These methods have not been explicitly applied for the variational divergence estimation problem. We speculate that either they are not applicable due to the unavailability of analytical probability density formulas or they are inefficient (e.g., the control of variates approach requires a second estimator and potentially a second parametric model in order to be applied).
Another way to reduce the variance is to restrict the function space to smoother and/or more controlled test (or critic) functions, balancing again between bias and variance. For instance, the restriction to Lipschitz continuous functions has the potential to reduce the variance since there exist favorable concentration inequality results for the Lipschitz space (Wainwright, 2019). In the GAN literature, Wasserstein GAN (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018) impose Lipschitz continuity, which resulted in significant gains in terms of training stability. Similarly, the restriction of test functions to an appropriately designed reproducing kernel Hilbert space could reduce the variance (Sreekar et al., 2020). Such approaches can be combined with our proposed variance penalties, as our formulation allows for general test-function spaces. However, we do not focus on this point here.
Given the importance of MI, several estimators aim towards improved statistical properties. Lower bounds such as MINE (Belghazi et al., 2018), which uses the DV variational formula with an exponential moving average, the NWJ estimator (Nguyen et al., 2010) and the BA estimator (Barber & Agakov, 2003), as well as upper bounds such as CLUB (Cheng et al., 2020), still have high variance. InfoNCE (van den Oord et al., 2018) is one of the few MI estimators that has low variance, but at the cost of either high bias or high computational cost due to the need for many negative samples and thus a large batch size. Poole et al. (2019) and Guo et al. (2021) aim to clarify the relationships and trade-offs between those variational bounds. A different approach to reducing variance is to work directly on the gradients of the objective function (Wen et al., 2020; 2021).
Finally, we discuss the approach of truncating the test function inside a bounded region as proposed in (Song & Ermon, 2019). The determination of the truncation threshold is quite difficult since it requires an a priori understanding of the log-likelihood ratio. Moreover, a high truncation threshold will not affect the estimation since a high threshold implies no real benefit in terms of variance reduction. On the other hand, a low threshold will result in large bias. Overall, using a high truncation threshold in order to avoid extreme values is a good practice even though it will have a limited impact on variance reduction.
2 BACKGROUND ON VARIATIONAL FORMULAS FOR RÉNYI AND f-DIVERGENCES
While our variance reduction method can be applied to any divergence that possesses a variational formula, here our focus will be on the Rényi and f -divergences, including the KL divergence. For Rényi divergences an appropriate objective functional can be constructed from a difference of cumulant generating functions (Birrell et al., 2020b)
R_\alpha(Q\|P) = \sup_{g\in\mathcal{M}_b(\Omega)} \left\{ \frac{1}{\alpha-1}\log E_Q\big[e^{(\alpha-1)g}\big] - \frac{1}{\alpha}\log E_P\big[e^{\alpha g}\big] \right\}, \quad \alpha \neq 0,1. \quad (1)
Here Q and P are probability distributions on the set Ω, EQ and EP denote the expectations with respect to Q and P respectively, andMb(Ω) is the space of bounded measurable real-valued functions on Ω. For f divergences, f being a lower semicontinuous convex function with f(1) = 0, one has the well-known Legendre transform variational formula (Broniatowski & Keziou, 2006; Nguyen et al., 2010)
D_f(Q\|P) = \sup_{g\in\mathcal{M}_b(\Omega)} \left\{ E_Q[g] - E_P[f^*(g)] \right\}, \quad (2)
where f^*(y) = \sup_{x\in\mathbb{R}}\{yx - f(x)\} is the Legendre transform of f. Here and in the following, the function of g that is being optimized will be called the objective functional. Equation (2) can be generalized to the (f,Γ)-divergences (Birrell et al., 2020a), where Γ ⊂ M_b(Ω) is a restricted test-function space
D^{\Gamma}_f(Q\|P) = \sup_{g\in\Gamma} \left\{ E_Q[g] - \Lambda^P_f[g] \right\}, \quad (3)
\Lambda^P_f[g] = \inf_{\nu\in\mathbb{R}} \left\{ \nu + E_P[f^*(g-\nu)] \right\}. \quad (4)
In particular, if f_{KL}(x) = x log(x), corresponding to the KL divergence, then
\Lambda^P_{f_{KL}}[g] = \log(E_P[\exp(g)]) \equiv \Lambda^P[g] \quad (5)
is the classical cumulant generating function and equation (3) (with Γ = M_b(Ω)) becomes the Donsker-Varadhan variational formula (Dupuis & Ellis, 1997, Appendix C.2)
D_{KL}(Q\|P) = \sup_{g\in\mathcal{M}_b(\Omega)} \left\{ E_Q[g] - \log E_P[e^{g}] \right\}. \quad (6)
For general f, we will often write equation (3) as
D^{\Gamma}_f(Q\|P) = \sup_{g\in\Gamma,\,\nu\in\mathbb{R}} \left\{ E_Q[g-\nu] - E_P[f^*(g-\nu)] \right\} \quad (7)
and if Γ is closed under the shifts g ↦ g − ν, ν ∈ R, then we can write it simply as
D^{\Gamma}_f(Q\|P) = \sup_{g\in\Gamma} \left\{ E_Q[g] - E_P[f^*(g)] \right\}. \quad (8)
In particular, if Γ = M_b(Ω) then D^Γ_f = D_f. The generalizations of Rényi and KL divergence obtained by using a restricted space Γ in place of M_b(Ω) in equation (1) or equation (6) will be denoted by R^Γ_α and D^Γ_{KL}, respectively.
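To make these variational objectives concrete, the following small NumPy sketch (our own illustration, not code from the paper) evaluates the Donsker-Varadhan objective (6) between two zero-mean Gaussians. With the optimal test function g* = log(dQ/dP) the objective equals the true KL divergence, while any other choice of g gives a smaller value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sig_q, sig_p = 1.0, 2.0                      # Q = N(0, sig_q^2), P = N(0, sig_p^2)
x_q = rng.normal(0.0, sig_q, size=n)         # samples from Q
x_p = rng.normal(0.0, sig_p, size=n)         # samples from P

def log_density_ratio(x):
    # the optimal test function g*(x) = log dQ/dP(x) for the two zero-mean Gaussians above
    return np.log(sig_p / sig_q) + 0.5 * x**2 * (1.0 / sig_p**2 - 1.0 / sig_q**2)

def dv_objective(g, xq, xp):
    # Donsker-Varadhan objective of equation (6): E_Q[g] - log E_P[e^g]
    return g(xq).mean() - np.log(np.exp(g(xp)).mean())

kl_true = np.log(sig_p / sig_q) + sig_q**2 / (2 * sig_p**2) - 0.5
print("true KL(Q||P):          ", kl_true)
print("DV objective, g = g*:   ", dv_objective(log_density_ratio, x_q, x_p))
print("DV objective, g = 0.1g*:", dv_objective(lambda x: 0.1 * log_density_ratio(x), x_q, x_p))
```

The suboptimal test function illustrates why the supremum over a rich function class is needed in practice.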
3 STATISTICAL ESTIMATORS AND VARIANCE REDUCTION
Variational representations of divergences are especially useful for creating statistical estimators in a data-driven setting; a naive estimator is obtained by simply replacing expectations with the corresponding statistical averages in any of the equations (1), (2), (3), etc. More formally, the naive estimators can be written as D^Γ_f(Q_n\|P_n), R^Γ_α(Q_n\|P_n), etc., where Γ is some parameterized space of functions (e.g., a neural network), Q_n and P_n are the n-sample empirical measures from Q and P respectively (i.e., E_{P_n}[g] = \frac{1}{n}\sum_{j=1}^{n} g(X_j), where X_j are i.i.d. samples from P, and similarly for E_{Q_n}; we also assume that the samples from Q and P are independent of one another), and the divergences are expressed in terms of the variational formulas from Section 2. However, in practice these naive methods often suffer from high variance (Song & Ermon, 2019; Birrell et al., 2020b). We address this via variance-penalized divergences, which are constructed by introducing a variance penalty into the objective functional of the variational representation, e.g.,
D^{\lambda}_f(Q\|P) \equiv \sup_{g\in\mathcal{M}_b(\Omega)} \left\{ E_Q[g] - E_P[f^*(g)] - \lambda V[g;Q,P] \right\}, \quad (9)
where the variance penalty, λV, is proportional to the variance of E_{Q_n}[g] − E_{P_n}[f^*(g)] and has strength λ > 0. Using this, we construct the following divergence estimator
\sup_{\eta} \left\{ E_{Q_n}[g_\eta] - E_{P_n}[f^*(g_\eta)] - \lambda V[g_\eta;Q_n,P_n] \right\}, \quad (10)
where g_η is a neural network with parameters η. Similar variance penalties can be derived for other divergences with variational representations.
3.1 VARIANCE PENALTY
In this subsection we provide details on the variance penalty for (f,Γ)-divergences, the KL divergence, and Rényi divergences. The same framework can be applied to other divergences with a variational representation.
To introduce the variance penalty, first consider the (f,Γ)-divergence representation in equation (8). Our goal is to penalize g's for which E_{Q_n}[g] or E_{P_n}[f^*(g)] have large variance, hence we introduce a penalty term proportional to (Var_Q denotes variance with respect to Q, etc.)
\mathrm{Var}\big[E_{Q_n}[g] + E_{P_n}[f^*(g)]\big] = \frac{1}{n}\big(\mathrm{Var}_Q[g] + \mathrm{Var}_P[f^*(g)]\big). \quad (11)
Specifically, for λ > 0 we define the variance-penalized (f,Γ)-divergence
D^{\Gamma,\lambda}_f(Q\|P) \equiv \sup_{g\in\Gamma,\,\nu\in\mathbb{R}} \left\{ E_Q[g-\nu] - E_P[f^*(g-\nu)] - \lambda\big(\mathrm{Var}_Q[g-\nu] + \mathrm{Var}_P[f^*(g-\nu)]\big) \right\}. \quad (12)
As noted above, if Γ is invariant under constant shifts then the optimization over ν can be omitted. A similar result to equation (12) can be derived for any objective functional that is a linear combination of expectations, e.g., integral probability metrics (Müller, 1997; Sriperumbudur et al., 2012) such as the Wasserstein metric.
For nonlinear objective functional terms of the generic form G(E_P[h(g)]), such as appear in equation (1) and equation (6), we cannot compute the variance of the corresponding statistical estimator at finite n, but we can use the delta method to obtain the asymptotic variance
\lim_{n\to\infty} n\,\mathrm{Var}\big[G(E_{P_n}[h(g)])\big] = \big(G'(E_P[h(g)])\big)^2 \mathrm{Var}_P[h(g)]. \quad (13)
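As a quick numerical sanity check of equation (13) (our own illustration, for the case G = log and h = exp that is used for the cumulant generating function below), one can compare n times the empirical variance of log E_{P_n}[e^g] over many independent resamples with Var_P[e^g]/(E_P[e^g])^2:

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda x: 0.5 * x                                 # an arbitrary fixed test function
x_pool = rng.normal(0.0, 1.0, size=(5000, 500))       # 5000 resamples of size n=500 from P = N(0,1)

n = x_pool.shape[1]
stat = np.log(np.exp(g(x_pool)).mean(axis=1))         # log E_{P_n}[e^g] for each resample
lhs = n * stat.var()                                  # n * Var[G(E_{P_n}[h(g)])]

big = rng.normal(0.0, 1.0, size=2_000_000)            # near-population quantities
eg = np.exp(g(big))
rhs = eg.var() / eg.mean() ** 2                       # (G'(E_P[h(g)]))^2 Var_P[h(g)] with G = log, h = exp

print(lhs, rhs)   # the two numbers should roughly agree for moderately large n
```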
Thus, we propose for the nonlinear case to use the above asymptotic variance as a penalty and obtain the following variance-penalized KL and Rényi divergence variational formulas:
D^{\Gamma,\lambda}_{KL}(Q\|P) \equiv \sup_{g\in\Gamma} \left\{ E_Q[g] - \log E_P[e^{g}] - \lambda\big(\mathrm{Var}_Q[g] + \mathrm{Var}_P[e^{g}]/(E_P[e^{g}])^2\big) \right\}, \quad (14)
R^{\Gamma,\lambda}_{\alpha}(Q\|P) \equiv \sup_{g\in\Gamma} \left\{ \frac{1}{\alpha-1}\log E_Q\big[e^{(\alpha-1)g}\big] - \frac{1}{\alpha}\log E_P\big[e^{\alpha g}\big] - \lambda\left( \frac{1}{(\alpha-1)^2}\frac{\mathrm{Var}_Q[e^{(\alpha-1)g}]}{(E_Q[e^{(\alpha-1)g}])^2} + \frac{1}{\alpha^2}\frac{\mathrm{Var}_P[e^{\alpha g}]}{(E_P[e^{\alpha g}])^2} \right) \right\}. \quad (15)
Remark 1. Both equation (11) and equation (13) suggest that the statistical estimators for the above penalized divergences should use a variance penalty strength that decays with the sample size λ = λ0/n, though other forms of n-dependence may be useful in practice.
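For concreteness, the penalized objectives (14) and (15) can be evaluated on samples as in the following NumPy sketch (ours, not the authors' code); in the estimators of Section 3.2 these quantities are maximized over a neural network family g_η, with λ on the order of λ0/n as suggested by Remark 1:

```python
import numpy as np

def penalized_dv_kl(gq, gp, lam):
    """Objective of eq. (14): g evaluated on Q-samples (gq) and on P-samples (gp)."""
    eg = np.exp(gp)
    obj = gq.mean() - np.log(eg.mean())
    vp = gq.var() + eg.var() / eg.mean() ** 2
    return obj - lam * vp

def penalized_renyi(gq, gp, alpha, lam):
    """Objective of eq. (15) for Renyi order alpha (alpha != 0, 1)."""
    eq_ = np.exp((alpha - 1.0) * gq)
    ep_ = np.exp(alpha * gp)
    obj = np.log(eq_.mean()) / (alpha - 1.0) - np.log(ep_.mean()) / alpha
    vp = (eq_.var() / eq_.mean() ** 2) / (alpha - 1.0) ** 2 \
         + (ep_.var() / ep_.mean() ** 2) / alpha ** 2
    return obj - lam * vp
```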
Though the variance penalty introduces bias, as λ → 0 the penalized divergence converges to the corresponding non-penalized divergence, as made precise by the following theorem.
Theorem 2. Let Γ ⊂ M_b(Ω). We have the following convergence results:
\lim_{\lambda\to 0^+} D^{\Gamma,\lambda}_{KL}(Q\|P) = D^{\Gamma}_{KL}(Q\|P), \quad (16)
\lim_{\lambda\to 0^+} R^{\Gamma,\lambda}_{\alpha}(Q\|P) = R^{\Gamma}_{\alpha}(Q\|P), \quad (17)
and if f^*(y) < ∞ for all y ∈ R then
\lim_{\lambda\to 0^+} D^{\Gamma,\lambda}_{f}(Q\|P) = D^{\Gamma}_{f}(Q\|P). \quad (18)
Moreover, under fairly general assumptions it holds that
\lim_{\lambda\to\infty} D^{\Gamma,\lambda}_{KL}(Q\|P) = \lim_{\lambda\to\infty} R^{\Gamma,\lambda}_{\alpha}(Q\|P) = \lim_{\lambda\to\infty} D^{\Gamma,\lambda}_{f}(Q\|P) = 0. \quad (19)
Remark 3. Note that the corresponding statistical estimators, D^{\Gamma,\lambda}_f(Q_n\|P_n), etc., have additional bias due to the supremum over g. We present partial results on bias bounds in Appendix D.
The proof of Theorem 2 is given in Appendix B for the λ → 0 limit and in Theorem 9 for the λ → ∞ limit. The same proof techniques can be applied to other divergences with a variational characterization.
Finally, for non-zero λ the penalized divergences (12), (14), (15) retain the divergence property and are therefore appropriate for quantifying the “distance” between probability distributions:
Theorem 4. Under fairly general assumptions on f and Γ (see Appendix B for details), and letting D^{\Gamma,\lambda} denote any of D^{\Gamma,\lambda}_f, D^{\Gamma,\lambda}_{KL}, or R^{\Gamma,\lambda}_{\alpha}, we have D^{\Gamma,\lambda}(Q\|P) \ge 0, and D^{\Gamma,\lambda}(Q\|P) = 0 if and only if Q = P.
The proof of Theorem 4 can be found in Appendix B.
3.2 VARIANCE-REDUCED DIVERGENCE ESTIMATION ALGORITHM
We now propose the following divergence neural estimation (DNE) methods with variance penalty, generalizing equations (9)-(10).
(DNE-VPλ) \quad \sup_{\eta} \left\{ H[g_\eta;Q_n,P_n] - \lambda V[g_\eta;Q_n,P_n] \right\}. \quad (20)
We will compare the above method to the non-penalized estimator (i.e., with λ = 0)
(DNE) \quad \sup_{\eta} H[g_\eta;Q_n,P_n]. \quad (21)
In the above, the test function space is a neural network Γ = {g_η, η ∈ E} with parameters η, and H denotes the objective functional of the divergence, e.g., for the Rényi divergences (1)
H_\alpha[g;Q,P] = \frac{1}{\alpha-1}\log E_Q\big[e^{(\alpha-1)g}\big] - \frac{1}{\alpha}\log E_P\big[e^{\alpha g}\big], \quad \alpha \neq 0,1, \quad (22)
and for f-divergences (2)
H_f[g;Q,P] = E_Q[g] - E_P[f^*(g)]. \quad (23)
Finally, V is the variance penalty corresponding to the chosen divergence (see Section 3.1), e.g., for Rényi divergences
V_\alpha[g;Q_n,P_n] = \frac{1}{(\alpha-1)^2}\frac{\mathrm{Var}_{Q_n}[e^{(\alpha-1)g}]}{(E_{Q_n}[e^{(\alpha-1)g}])^2} + \frac{1}{\alpha^2}\frac{\mathrm{Var}_{P_n}[e^{\alpha g}]}{(E_{P_n}[e^{\alpha g}])^2}, \quad \alpha \neq 0,1, \quad (24)
and for f-divergences
V_f[g;Q_n,P_n] = \mathrm{Var}_{Q_n}[g] + \mathrm{Var}_{P_n}[f^*(g)]. \quad (25)
We solve equation (20) with the Adam algorithm (Kingma & Ba, 2014), a stochastic gradient-based optimization method.
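A minimal PyTorch sketch of the resulting DNE-VPλ training loop for the Rényi objective (22) with penalty (24) is given below; the network width, learning rate, full-batch updates, and clamping range are our own illustrative choices rather than the paper's configuration:

```python
import torch
import torch.nn as nn

def renyi_vp_objective(gq, gp, alpha, lam):
    # H_alpha of eq. (22) minus lam * V_alpha of eq. (24), evaluated on batches from Q and P
    eq_ = torch.exp((alpha - 1.0) * gq)
    ep_ = torch.exp(alpha * gp)
    obj = torch.log(eq_.mean()) / (alpha - 1.0) - torch.log(ep_.mean()) / alpha
    vp = (eq_.var() / eq_.mean() ** 2) / (alpha - 1.0) ** 2 \
         + (ep_.var() / ep_.mean() ** 2) / alpha ** 2
    return obj - lam * vp

g_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(g_net.parameters(), lr=1e-3)

x_q = torch.randn(5000, 1) * 1.0   # samples from Q = N(0, 1)
x_p = torch.randn(5000, 1) * 2.0   # samples from P = N(0, 4)
alpha, lam = 2.0, 0.1

def critic(x):
    # clamp the test function to a bounded range (cf. the truncation of Song & Ermon, 2019)
    return torch.clamp(g_net(x).squeeze(-1), -1.0, 1.0)

for step in range(2000):
    opt.zero_grad()
    loss = -renyi_vp_objective(critic(x_q), critic(x_p), alpha, lam)  # maximize by minimizing the negative
    loss.backward()
    opt.step()

with torch.no_grad():  # report the un-penalized objective at the learned g as the divergence estimate
    print("estimated R_2(Q||P):", renyi_vp_objective(critic(x_q), critic(x_p), alpha, 0.0).item())
```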
4 RESULTS ON SYNTHETIC DATASETS
Figure 1 presents the statistical estimation of the Rényi divergence between two one-dimensional Gaussians, both of which have zero mean but different variance values. The order of the Rényi divergence, α, controls how much weight is put on the tails of the distributions, so the estimate can become very sensitive to the few samples in the tails. The same conclusion can be deduced from the variational formula (i.e., equation (1), where α multiplies the exponentials’ argument). Therefore, a larger α value implies larger statistical variance. Indeed, high estimation variance is observed with DNE (upper leftmost panel of Figure 1) despite the fact that we applied truncation as proposed by Song & Ermon (2019) with the truncation threshold set to 1. In contrast, the DNE-VPλ estimator with λ = 0.1 greatly reduces the statistical variance even when α is large (lower leftmost panel). For fairness, we imposed the same truncation operation on the output of DNE-VPλ. We report an 80% reduction of variance for α = 2, which becomes 99% for α = 10.
The proposed approach introduces an additional hyper-parameter, λ, which controls the strength of the VP. Our theory suggests that λ should depend on the sample size (and perhaps also on the other parameters), therefore we perform two sets of experiments. In the first experiment, we explore the range of optimal values for λ in terms of MedAE1. As is evident from the middle panels of Figure 1, λ-values in the vicinity of 0.1 are a reasonable compromise between variance and bias. In the second experiment, we demonstrate the performance in terms of MedAE as a function of the sample size, N . As suggested in Remark 1, monotone performance is obtained when λ is inversely proportional to N (blue dashed line in rightmost upper panel of Figure 1).
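For reference, the ground-truth values in such one-dimensional Gaussian experiments are available in closed form. The sketch below is our own derivation from the definition of the Rényi divergence; the specific variance values used in Figure 1 are not stated in this excerpt, so σ_q and σ_p here are illustrative:

```python
import numpy as np

def renyi_gaussian_1d(sig_q, sig_p, alpha):
    """Closed-form R_alpha(Q||P) for Q = N(0, sig_q^2), P = N(0, sig_p^2), alpha != 0, 1.
    Requires (1 - alpha) * sig_q**2 + alpha * sig_p**2 > 0, otherwise the divergence is infinite."""
    s2 = (1 - alpha) * sig_q ** 2 + alpha * sig_p ** 2
    assert s2 > 0, "Renyi divergence of this order is not finite for these variances"
    return ((1 - alpha) * np.log(sig_q) + alpha * np.log(sig_p) - 0.5 * np.log(s2)) / (alpha - 1)

for a in (0.5, 2.0, 10.0):
    print(a, renyi_gaussian_1d(1.0, 2.0, a))
```

Such a closed form provides the reference against which MedAE and MSE of the estimators can be computed.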
Our second synthetic example constitutes the estimation of MI using various approaches with and without VP. Here, we let Q be a zero-mean multivariate correlated Gaussian random vector of dimension d. We impose element-wise correlation, i.e., corr(x_i, x_{d/2+j}) = δ_{i,j}ρ on the samples x ∼ Q, where i, j = 1, . . . , d/2 and δ_{i,j} is Kronecker’s delta. With P we denote the product of the marginals, which in this case is simply a zero-mean standardized multivariate Gaussian. Figure 2 presents the estimated MI per training step. We consider the Rényi-based MI with α = 0.5 as well as the standard MI using the DV variational formula. Notice that these two variants result in different true values (black lines in Figure 2). The plotted results demonstrate the successful reduction of variance when VP is added to the objective functional. Interestingly, the extension of VP to the InfoNCE and CLUB estimators (second row of panels in Figure 2) implies that our approach can be applied to any MI estimator, thus offering a general variance reduction framework. Bias, variance and MSE plots as well as several more experiments can be found in Appendix F.
1 Recall that MedAE stands for median absolute error; it is a more robust-to-outliers metric.
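To make the setup explicit, the following sketch builds the joint samples and the ground-truth MI; the dimension, correlation, and batch size are illustrative, and, as stated in the text, P is simply a standard normal (shuffling one half of the joint samples would be an equivalent way to obtain product-of-marginals samples):

```python
import numpy as np

rng = np.random.default_rng(3)
d, rho, n = 20, 0.8, 64                      # illustrative dimension, correlation, batch size

cov = np.eye(d)                              # corr(x_i, x_{d/2+i}) = rho, all other pairs uncorrelated
for i in range(d // 2):
    cov[i, d // 2 + i] = cov[d // 2 + i, i] = rho

x_joint = rng.multivariate_normal(np.zeros(d), cov, size=n)   # samples from Q (the joint)
x_prod = rng.standard_normal((n, d))                          # samples from P (product of marginals)

# ground-truth MI between the two halves of x: d/2 independent pairs, each contributing -0.5*log(1-rho^2)
mi_true = -(d / 4) * np.log(1 - rho ** 2)
print("true MI:", mi_true)
```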
5 REAL DATA APPLICATIONS
5.1 DETECTING RARE BIOLOGICAL SUB-POPULATIONS
Using the dataset from Levine et al. (2015), we test the efficacy of DNE-VPλ in discriminating cell populations which are contaminated with a rare sub-population with distinguishable statistical properties. Specifically, we consider single-cell mass cytometry measurements on 16 bone marrow protein markers2 (i.e., d = 16) coming from healthy individuals and from individuals with acute myeloid leukemia. For each run we created three subsets of healthy samples with sample size N = 20K, which we denote by P, and one dataset as a mixture of 99% healthy and 1% diseased samples, which is denoted by Q. Notice that the actual number of diseased samples is only 200, so it constitutes a rare sub-population.
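A sketch of how the P and Q sample sets described above can be assembled is given below; the arrays standing in for the actual cytometry measurements are random placeholders, and loading the marker data from the files referenced in the footnote is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
# healthy, diseased: arrays of shape (num_cells, 16) with the marker measurements
# (random stand-ins here; in practice these come from the cytometry export files)
healthy = rng.standard_normal((200_000, 16))
diseased = rng.standard_normal((5_000, 16)) + 1.0

N = 20_000
P_sets = [healthy[rng.choice(len(healthy), N, replace=False)] for _ in range(3)]   # three healthy subsets

idx_h = rng.choice(len(healthy), int(0.99 * N), replace=False)      # 19,800 healthy cells
idx_d = rng.choice(len(diseased), N - int(0.99 * N), replace=False) # 200 diseased cells
Q = np.vstack([healthy[idx_h], diseased[idx_d]])                    # the 99% / 1% contaminated mixture
```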
For Rényi divergence with α = 0.5 (left panels in Figure 3), both DNE and DNE-VPλ are stable. Despite the improvement in the separation of the two histograms, the observed variance reduction of DNE-VPλ is minimal and not enough to discriminate between the healthy distribution and the distribution contaminated with 1% diseased samples. When considering Rényi divergence with α = 1.1, we observe that DNE fails to produce stable estimates. In contrast, DNE-VPλ always computes stable estimates. Additionally, the two histograms are satisfactorily separated, implying that larger values of α are crucial, provided there is a way to handle the statistical variance. For completeness, Table 1 reports the first and second order statistics of the histograms shown in Figure 3.
2 Data was accessed from https://community.cytobank.org/cytobank/experiments/46098/illustrations/121588
5.2 DISENTANGLED REPRESENTATION LEARNING IN SPEECH SYNTHESIS
An important application of MI is disentangled representation learning. In the context of representation disentanglement, the extraction of meaningful latent features for high-dimensional data is challenging, especially when explicit knowledge needs to be distilled into interpretable representations. One popular approach to enforce representation disentanglement is via MI minimization. Moreover, a superior disentanglement will allow a greater degree of interpretability and controllability, especially for generative models maintaining high production capacity. In this section, we employ the proposed DNE-VPλ estimator for MI estimation in order to learn disentangled representation, and, particularly, in the context of speech synthesis and analysis.
A universal text-to-speech synthesizer can generate speech from text with a speaker factor and speaking style similar to a reference signal. Previous works aimed to encode the information from reference speech into a fixed-length style and speaker embedding using trainable encoders (Wang et al., 2018; Tjandra et al., 2020; Chien et al., 2021; Tan et al., 2021). The major challenges for such speech synthesizers are controllability and generalisability, especially when the models must generalize across multiple speakers and multiple styles. During training, content information leaks into the style embeddings (“content leakage”) and speaker information leaks into the style embeddings (“style leakage”). Thus, at inference, when the reference speech has different content from the input text, the decoder draws content from the style vector and ignores part of the input text. Moreover, speaker information may be drawn from the style encoder, leading to a completely different speaker attribute. To alleviate this, Paul et al. (2021) suggested replacing the KL-based MI with Rényi-based MI and minimizing the Rényi divergence between the joint distribution and the product of marginals for the content-style and style-speaker pairs. However, reliable estimation of the Rényi divergence was problematic due to high statistical variance. Taking advantage of the proposed variance reduction technique, we employ a VP term in the loss function, denoted as DNE-VPλ (Rα). By doing so, the content, style, and speaker spaces become representative and (ideally) independent of each other. We introduce two variations of this framework: the sum of three Rényi divergences, DNE(Rα=0 + Rα=0.5 + Rα=1) (i.e., the sum of the corresponding objective functionals), and DNE(Rα=0.5). We tested several different λ values, aiming to reduce the statistical variance of the adversarial component. Notice that larger λ values were helpful in this application.
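Schematically, and with the synthesizer architecture, embedding sizes, and update schedule left unspecified (they are not detailed in this excerpt, so everything below is an illustrative assumption), the VP-penalized Rényi MI term can be wired into training roughly as follows: a critic network maximizes the penalized objective on (content, style) pairs, while the encoders receive the resulting MI estimate as an additional loss to minimize:

```python
import torch
import torch.nn as nn

def renyi_vp(t_joint, t_prod, alpha, lam):
    # penalized Renyi objective (eqs. (22) and (24)) on critic outputs for joint / shuffled pairs
    a = torch.exp((alpha - 1.0) * t_joint)
    b = torch.exp(alpha * t_prod)
    obj = torch.log(a.mean()) / (alpha - 1.0) - torch.log(b.mean()) / alpha
    vp = (a.var() / a.mean() ** 2) / (alpha - 1.0) ** 2 + (b.var() / b.mean() ** 2) / alpha ** 2
    return obj, vp

emb_dim = 64                                   # assumed embedding size (illustrative)
critic = nn.Sequential(nn.Linear(2 * emb_dim, 128), nn.ReLU(), nn.Linear(128, 1))

def mi_terms(content, style, alpha=0.5, lam=1.0):
    joint = torch.cat([content, style], dim=-1)
    prod = torch.cat([content, style[torch.randperm(style.size(0))]], dim=-1)
    obj, vp = renyi_vp(critic(joint).squeeze(-1), critic(prod).squeeze(-1), alpha, lam)
    critic_loss = -(obj - lam * vp)            # the critic maximizes the penalized bound
    encoder_penalty = obj                      # the encoders minimize the MI estimate (added to the TTS loss)
    return critic_loss, encoder_penalty        # alternating-update and detaching details omitted

content, style = torch.randn(32, emb_dim), torch.randn(32, emb_dim)   # stand-ins for encoder outputs
c_loss, enc_pen = mi_terms(content, style)
```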
We evaluate the performance of the disentanglement strategies using three performance scores computed from 100 random samples, shown in Table 2. Mel-cepstral distortion (MCD) measures the spectral distance between the synthesized and reference mel-spectrum features. Root mean squared error (RMSE) evaluates the similarity in F0 modeling between reference and synthesized speech. Lastly, the content preservation criterion is evaluated by word error rate (WER). During inference, we evaluate the performance under two conditions: ‘no shuffle’ and ‘shuffle’. The ‘no shuffle’ condition feeds the same reference speech into the style and speaker encoders and its corresponding text to predict the speech features, whereas ‘shuffle’ feeds random reference speech. We observe that the proposed DNE-VPλ variants outperform the baseline approaches without VP in terms of all evaluation metrics. Our proposed systems greatly reduced content leakage, improving the word error rate by approximately 5-18% relative to the baseline systems. Furthermore, the RMSE-F0 and MCD scores show that the disentanglement module during training helps the TTS achieve more accurate rendering of prosodic patterns as well as synthesizing the proper speech content for its corresponding text without any significant leakage issues. | 1. How does the proposed approach manipulate divergence estimators?
2. What are the strengths and weaknesses of the proposed approach regarding reducing variance?
3. Are there any concerns or questions regarding the notations and derivations used in the paper?
4. Do you think the proof for Theorem 4 is sufficient? Why or why not?
5. How would you address the issues with Algorithm 1 and its simplification?
6. Can you explain the trade-off between bias and variance in the proposed approach?
7. What are your thoughts on the second synthetic experiment and its problematic aspects?
8. Were the model architectures and datasets used in the speech synthesis experiments properly introduced?
9. How can we ensure that the estimators used in the experiments are consistent?
10. What impact do the variances penalty terms have on the overall performance of the system? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes to adopt a variance penalty term to manipulate the divergence estimators. Experiments first investigated the correlations of the variance penalty and the bias and variance trade-offs. Then, the authors tested the proposed approaches on two real-world application (biological and speech synthesis) datasets. The results show that the proposed approach can effectively reduce the variance as compared to systems without including the variance penalty.
Review
(1) In Section 3.1, G in Eq. (13) seems to override f* in previous equations, and h is defined in the context. It is vague to understand how to derive Eq. (14) and Eq. (15) from Eq. (13). The authors should carefully take care of notations. For example, from (13) to (14), the author should at least denote G = f* and h = exp.
(2) Theorem 1 is missing. Theorem 2 could be inappropriate and insufficient to be considered as a theorem.
(3) The assumption for Theorem 4 is not well described. The authors defer the details to Appendix B, but the proof which the author indicated only proves the divergence property of Renyi divergences instead of other divergences.
(4) In Section 3.2, the set E (used for the parameters η ∈ E) is not defined and is ambiguous with E, which represents the expectation. The author should consider using other notations.
(5) Algorithm 1 is overly simplified and incorrect. Line 3 in Algorithm 1 implies that the proposed variance penalty is not included during optimization; therefore the variances would not be suppressed. Also, since α is used by the Renyi divergence, the authors should denote the parameter of Adam by another symbol other than α.
(6) Figure 1 did not comprehensively analyze the trade-off of bias and variance. In addition, it is necessary to explain why the authors choose MedAE instead of MSE.
(7) The trade-off between bias and variance is unsatisfactory. For instance, VP does not affect InfoNCE. Moreover, to greatly reduce variance, DNE-VP exhibits large biases.
(8) The second synthetic experiment seems problematic. First, as we know, InfoNCE is bounded above by log N, where N is 64 according to the caption in Figure 2. Therefore, InfoNCE should saturate to about 4.159, but the authors' experiment shows that the estimated MI saturates below 4. Second, the author examined estimators under different scales of the ground-truths.
(9) The authors might have overlooked that, as they adopted the Delta method, the estimators should be guaranteed to be consistent. The authors did not prove that all estimators included in their experiments are consistent.
(10) The authors did not introduce the model architectures and datasets used in the speech synthesis experiments (for both speech synthesizer and automatic speech recognition systems). Without such information, readers cannot reproduce the results easily, and thus the results reported in the paper are not convincing.
ICLR | Title
A Variance Reduction Method for Neural-based Divergence Estimation
1. What is the focus of the paper, and what problem does it aim to solve?
2. What is the proposed solution to the problem, and how does it involve variance penalty?
3. How effective is the proposed method in reducing variance, and how does it compare to other methods?
4. Why was the choice made not to include other variance reduction techniques in the comparison?
5. Is there a specific reason for the absence of a conclusions section in the paper? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors propose a new neural-based divergence estimator that uses a variance penalty.
Review
In this paper a highly topical problem is tackled: how to obtain a low-variance divergence estimator. The authors decide to use a variance penalty. The idea is quite good, and the synthetic experiments do seem to validate that the reduction works.
I have only the following points:
About Fig. 2, can you comment on why, for CLUB, VP does not seem to help as much as for DNE? The variance reduction, at least visually, seems to be quite minimal.
It appears that no other variance reduction techniques were compared; why not compare against, for example, control variates?
Why is there no Conclusions section?
ICLR | Title
A Variance Reduction Method for Neural-based Divergence Estimation
Abstract
A central problem in machine learning is the computation of similarity or closeness between two (data) distributions. The applications span from generative modelling via adversarial training, representation learning, and robustness in out-ofdistribution settings, to name a few. A palette of divergences, mutual information, and integral probability metrics are indispensable tools for measuring the “distance” between distributions and these are made tractable in high dimensional settings through variational representation formulas. Indeed, such formulas transform an estimation problem into an optimization problem. Unfortunately, the approximation of expectations that are inherent in variational formulas by statistical averages can be problematic due to high statistical variance, e.g., exponential for the Kullback-Leibler divergence and certain estimators. In this paper, we propose a new variance penalty term that acts directly on the variance of each component of the statistical estimator. The power of the variance penalty is controlled by a penalty coefficient which trades off bias and variance. We tested the proposed approach on several variational formulas and synthetic examples and showed that the overall error is decreased about an order of magnitude relative to the baseline statistical estimator. Impressive results are obtained for Rényi divergence with large order values due to the improved stability of the proposed estimator. Furthermore, in real biological datasets we are able to detect very rare sub-populations with a moderate sample size. Finally, we obtain improved (in terms of objective measures) disentangled representation of speech signals into text, speaker, and style components via variance-penalized mutual information minimization.
1 INTRODUCTION
Divergences such as Kullback-Leibler (KL) divergence, f -divergences, Hellinger divergence, αdivergences and Rényi divergences, which were initially developed in the fields of information theory and statistical physics, are indispensable tools in a growing number of machine learning applications. They have been used in adversarial training of generative models (Goodfellow et al., 2014; Nowozin et al., 2016), in the estimation of generalization errors (Esposito et al., 2021) and hypothesis testing (Broniatowski & Keziou, 2009), to name a few. Mutual information (MI), in particular, which is defined as the KL divergence between the joint distribution of a pair of variables and their marginals (and can be generalized to divergences other than KL), plays a crucial role in Bayesian networks and (conditional) independence (Cheng et al., 2002), self-supervised learning via contrastive losses (van den Oord et al., 2018; Le-Khac et al., 2020) as well as in representation learning (Hjelm et al., 2019; Chen et al., 2016).
Classical divergence estimators perform reasonably well for low dimensional cases, however they scale poorly to large, high dimensional datasets which are typically encountered in modern machine learning. The most compelling estimation approach of a divergence is via the optimization of a lower variational bound parametrized by neural networks. These lower bounds, which are likelihood-free approximations, are maximized in order to compute the divergence value at the optimizer. Wellknown variational representations are the Legendre transformation of an f -divergence (Broniatowski & Keziou, 2006; Nguyen et al., 2010) as well as the Donsker-Varadhan (DV) variational formula (Donsker & Varadhan, 1983) for KL divergence and its extension to Rényi divergence (Birrell et al., 2020b). Their tractability stems from their objective functionals, which are computed from expected values and approximated using statistical averages from the available or generated samples.
Despite the scalability and tractability, the estimation of a divergence based on variational formulas is a notoriously difficult problem. One challenge stems from the potentially high bias, since any approximation for the worst case scenario requires an exponential number of samples in order to attain the true divergence value (McAllester & Stratos, 2020). Additionally, the statistical variance, which scales exponentially with respect to the divergence’s value for certain variational estimators (Song & Ermon, 2019), is often prohibitively high. Focusing on the elevated MI, there are several further lower bounds (Barber & Agakov, 2003; Belghazi et al., 2018; van den Oord et al., 2018; Poole et al., 2019; Guo et al., 2021) and a few upper bounds (Cheng et al., 2020; Poole et al., 2019) which aim to provide more reliable estimates of MI in the low sample size regime. However, the majority of these MI estimators are not transferable to the general estimation of divergences and frequently produce instabilities during training which are further magnified by the small batch and/or sample size.
In this paper, we propose to reduce a divergence estimator’s variance via an explicit variance penalty (VP) which is added to the objective functional. Our contributions are summarized as follows:
• We present a novel variance reduction penalty for f -divergence and expand it via the delta method to the nonlinear setting, including the DV formula for KL divergence as well as the variational formula for the Rényi divergences. The proposed VP is able to flexibly trade off bias and variance.
• We present numerical evidence on synthetic datasets that the proposed approach improves both mean squared error (MSE) and median absolute error (MedAE) in a range of sample sizes and types of divergences. Furthermore, we implemented the proposed VP in several other lower and upper bounds of MI, showing that our variance reduction approach is not restricted to particular variational formulas but it is generic and applicable to the majority of existing variational representations.
• When applied to real datasets, we demonstrate the ability of the proposed approach to reduce the variance of the estimated Rényi divergence, thus enabling the detection of rare biological sub-populations which are otherwise difficult to identify. Interestingly, the baseline estimator is unstable when the order value is above one, but it becomes stable when the VP is added.
• We also applied the VP to the disentangled representation learning of speech into its text, speaker, and style components. Results on objective evaluation metrics showed that the addition of the VP generally improves the training performance, as much as 18% relative to the baseline systems.
1.1 RELATED WORK
There are several general-purpose variance reduction techniques in Monte Carlo stochastic sampling, with the most popular approaches being antithetic sampling or more broadly coupling methods, control of variates and importance sampling (Robert & Casella, 2005; Glasserman, 2004; Srinivasan, 2013). These methods have not been explicitly applied for the variational divergence estimation problem. We speculate that either they are not applicable due to the unavailability of analytical probability density formulas or they are inefficient (e.g., the control of variates approach requires a second estimator and potentially a second parametric model in order to be applied).
Another way to reduce the variance is to restrict the function space to smoother and/or more controlled test (or critic) functions, again balancing between bias and variance. For instance, the restriction to Lipschitz continuous functions has the potential to reduce the variance since there exist favorable concentration inequality results for the Lipschitz space (Wainwright, 2019). In the GAN literature, Wasserstein GAN (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018) impose Lipschitz continuity, which resulted in significant gains in terms of training stability. Similarly, the restriction of test functions to an appropriately designed reproducing kernel Hilbert space could reduce the variance (Sreekar et al., 2020). Such approaches can be combined with our proposed variance penalties, as our formulation allows for general test-function spaces. However, we do not focus on this point here.
Given the importance of MI, several estimators aim towards improved statistical properties. Lower bounds such as MINE (Belghazi et al., 2018), which uses the DV variational formula with an exponential moving average, the NWJ estimator (Nguyen et al., 2010) and the BA estimator (Barber & Agakov, 2003), as well as upper bounds such as CLUB (Cheng et al., 2020), still have high variance. InfoNCE (van den Oord et al., 2018) is one of the few MI estimators that has low variance, but at the cost of either high bias or high computational cost due to the need for many negative samples and thus a large batch size. Poole et al. (2019) and Guo et al. (2021) aim to clarify the relationships and trade-offs between those variational bounds. A different approach to reducing variance is to work directly on the gradients of the objective function (Wen et al., 2020; 2021).
Finally, we discuss the approach of truncating the test function inside a bounded region, as proposed by Song & Ermon (2019). Choosing the truncation threshold is difficult since it requires a priori knowledge of the log-likelihood ratio. Moreover, a high truncation threshold barely changes the estimate and therefore offers little variance reduction, while a low threshold results in large bias. Overall, using a high truncation threshold in order to avoid extreme values is good practice, even though it has a limited impact on variance reduction.
2 BACKGROUND ON VARIATIONAL FORMULAS FOR RÉNYI AND f-DIVERGENCES
While our variance reduction method can be applied to any divergence that possesses a variational formula, here our focus will be on the Rényi and f -divergences, including the KL divergence. For Rényi divergences an appropriate objective functional can be constructed from a difference of cumulant generating functions (Birrell et al., 2020b)
$$R_\alpha(Q\|P) = \sup_{g\in\mathcal{M}_b(\Omega)}\left\{\frac{1}{\alpha-1}\log E_Q\left[e^{(\alpha-1)g}\right] - \frac{1}{\alpha}\log E_P\left[e^{\alpha g}\right]\right\}, \quad \alpha\neq 0,1. \tag{1}$$
Here Q and P are probability distributions on the set Ω, E_Q and E_P denote the expectations with respect to Q and P respectively, and M_b(Ω) is the space of bounded measurable real-valued functions on Ω. For f-divergences, with f a lower semicontinuous convex function satisfying f(1) = 0, one has the well-known Legendre transform variational formula (Broniatowski & Keziou, 2006; Nguyen et al., 2010)
$$D_f(Q\|P) = \sup_{g\in\mathcal{M}_b(\Omega)}\left\{E_Q[g] - E_P[f^*(g)]\right\}, \tag{2}$$
where f*(y) = sup_{x∈R}{yx − f(x)} is the Legendre transform of f. Here and in the following, the function of g that is being optimized will be called the objective functional. Equation (2) can be generalized to the (f,Γ)-divergences (Birrell et al., 2020a), where Γ ⊂ M_b(Ω) is a restricted test-function space
$$D_f^{\Gamma}(Q\|P) = \sup_{g\in\Gamma}\left\{E_Q[g] - \Lambda_f^P[g]\right\}, \tag{3}$$
$$\Lambda_f^P[g] = \inf_{\nu\in\mathbb{R}}\left\{\nu + E_P[f^*(g-\nu)]\right\}. \tag{4}$$
In particular, if f_KL(x) = x log(x), corresponding to the KL divergence, then
$$\Lambda_{f_{KL}}^P[g] = \log\left(E_P[\exp(g)]\right) \equiv \Lambda^P[g] \tag{5}$$
is the classical cumulant generating function, and equation (3) (with Γ = M_b(Ω)) becomes the Donsker-Varadhan variational formula (Dupuis & Ellis, 1997, Appendix C.2)
$$D_{KL}(Q\|P) = \sup_{g\in\mathcal{M}_b(\Omega)}\left\{E_Q[g] - \log E_P[e^{g}]\right\}. \tag{6}$$
For general f, we will often write equation (3) as
$$D_f^{\Gamma}(Q\|P) = \sup_{g\in\Gamma,\,\nu\in\mathbb{R}}\left\{E_Q[g-\nu] - E_P[f^*(g-\nu)]\right\} \tag{7}$$
and if Γ is closed under the shifts g ↦ g − ν, ν ∈ R, then we can write it simply as
$$D_f^{\Gamma}(Q\|P) = \sup_{g\in\Gamma}\left\{E_Q[g] - E_P[f^*(g)]\right\}. \tag{8}$$
In particular, if Γ = M_b(Ω) then D_f^Γ = D_f. The generalizations of the Rényi and KL divergences obtained by using a restricted space Γ in place of M_b(Ω) in equation (1) or equation (6) will be denoted by R_α^Γ and D_KL^Γ, respectively.
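To make the variational formulas concrete, the sketch below (our own illustration, not code from the paper) shows how the Donsker-Varadhan objective (6) and the Legendre-transform objective (2) are evaluated on samples; for the KL case, f_KL(x) = x log x gives f*(y) = exp(y − 1).

```python
# Minimal sketch (illustrative, not the paper's code): sample-based objective
# functionals for the variational formulas of Section 2, using PyTorch tensors.
import torch

def log_mean_exp(x):
    # log((1/n) * sum_i exp(x_i)), computed stably
    return torch.logsumexp(x, dim=0) - torch.log(torch.tensor(float(x.numel())))

def dv_objective(g_q, g_p):
    """Donsker-Varadhan objective (6): E_Q[g] - log E_P[exp(g)].
    g_q, g_p hold the test function g evaluated on samples from Q and P."""
    return g_q.mean() - log_mean_exp(g_p)

def kl_conjugate(y):
    """Legendre transform of f_KL(x) = x*log(x): f*(y) = exp(y - 1)."""
    return torch.exp(y - 1.0)

def legendre_objective(g_q, g_p, f_star=kl_conjugate):
    """Legendre-transform objective (2): E_Q[g] - E_P[f*(g)]."""
    return g_q.mean() - f_star(g_p).mean()
```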
3 STATISTICAL ESTIMATORS AND VARIANCE REDUCTION
Variational representations of divergences are especially useful for creating statistical estimators in a data-driven setting; a naive estimator is obtained by simply replacing expectations with the corresponding statistical averages in any of the equations (1), (2), (3), etc. More formally, the naive estimators can be written as D_f^Γ(Q_n‖P_n), R_α^Γ(Q_n‖P_n), etc., where Γ is some parameterized space of functions (e.g., a neural network), Q_n and P_n are the n-sample empirical measures from Q and P respectively (i.e., E_{P_n}[g] = (1/n) Σ_{j=1}^n g(X_j), where X_j are i.i.d. samples from P, and similarly for E_{Q_n}; we also assume that the samples from Q and P are independent of one another), and the divergences are expressed in terms of the variational formulas from Section 2. However, in practice these naive methods often suffer from high variance (Song & Ermon, 2019; Birrell et al., 2020b). We address this via variance-penalized divergences, which are constructed by introducing a variance penalty into the objective functional of the variational representation, e.g.,
$$D_f^{\lambda}(Q\|P) \equiv \sup_{g\in\mathcal{M}_b(\Omega)}\left\{E_Q[g] - E_P[f^*(g)] - \lambda V[g;Q,P]\right\}, \tag{9}$$
where the variance penalty, λV, is proportional to the variance of E_{Q_n}[g] − E_{P_n}[f*(g)], with strength λ > 0. Using this, we construct the following divergence estimator
$$\sup_{\eta}\left\{E_{Q_n}[g_\eta] - E_{P_n}[f^*(g_\eta)] - \lambda V[g_\eta;Q_n,P_n]\right\}, \tag{10}$$
where g_η is a neural network with parameters η. Similar variance penalties can be derived for other divergences with variational representations.
3.1 VARIANCE PENALTY
In this subsection we provide details on the variance penalty for (f,Γ)-divergences, the KL divergence, and Rényi divergences. The same framework can be applied to other divergences with a variational representation.
To introduce the variance penalty, first consider the (f,Γ)-divergence representation in equation (8). Our goal is to penalize g's for which E_{Q_n}[g] or E_{P_n}[f*(g)] have large variance, hence we introduce a penalty term proportional to (Var_Q denotes variance with respect to Q, etc.)
$$\mathrm{Var}\left[E_{Q_n}[g] + E_{P_n}[f^*(g)]\right] = \frac{1}{n}\left(\mathrm{Var}_Q[g] + \mathrm{Var}_P[f^*(g)]\right). \tag{11}$$
Specifically, for λ > 0 we define the variance-penalized (f,Γ)-divergence
$$D_f^{\Gamma,\lambda}(Q\|P) \equiv \sup_{g\in\Gamma,\,\nu\in\mathbb{R}}\left\{E_Q[g-\nu] - E_P[f^*(g-\nu)] - \lambda\left(\mathrm{Var}_Q[g-\nu] + \mathrm{Var}_P[f^*(g-\nu)]\right)\right\}. \tag{12}$$
As noted above, if Γ is invariant under constant shifts then the optimization over ν can be omitted. A similar result to equation (12) can be derived for any objective functional that is a linear combination of expectations, e.g., integral probability metrics (Müller, 1997; Sriperumbudur et al., 2012) such as the Wasserstein metric.
For nonlinear objective functional terms of the generic form G(E_P[h(g)]), such as those appearing in equation (1) and equation (6), we cannot compute the variance of the corresponding statistical estimator at finite n, but we can use the delta method to obtain the asymptotic variance
$$\lim_{n\to\infty} n\,\mathrm{Var}\left[G(E_{P_n}[h(g)])\right] = \left(G'(E_P[h(g)])\right)^2 \mathrm{Var}_P[h(g)]. \tag{13}$$
Thus, we propose for the nonlinear case to use the above asymptotic variance as a penalty and obtain the following variance-penalized KL and Rényi divergence variational formulas:
$$D_{KL}^{\Gamma,\lambda}(Q\|P) \equiv \sup_{g\in\Gamma}\left\{E_Q[g] - \log E_P[e^{g}] - \lambda\left(\mathrm{Var}_Q[g] + \mathrm{Var}_P[e^{g}]/(E_P[e^{g}])^2\right)\right\}, \tag{14}$$
$$R_\alpha^{\Gamma,\lambda}(Q\|P) \equiv \sup_{g\in\Gamma}\left\{\frac{1}{\alpha-1}\log E_Q[e^{(\alpha-1)g}] - \frac{1}{\alpha}\log E_P[e^{\alpha g}] - \lambda\left(\frac{1}{(\alpha-1)^2}\frac{\mathrm{Var}_Q[e^{(\alpha-1)g}]}{(E_Q[e^{(\alpha-1)g}])^2} + \frac{1}{\alpha^2}\frac{\mathrm{Var}_P[e^{\alpha g}]}{(E_P[e^{\alpha g}])^2}\right)\right\}. \tag{15}$$
Remark 1. Both equation (11) and equation (13) suggest that the statistical estimators for the above penalized divergences should use a variance penalty strength that decays with the sample size λ = λ0/n, though other forms of n-dependence may be useful in practice.
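As a concrete illustration of equation (14), the sketch below (ours, not the authors' implementation) evaluates the penalized DV objective on samples of the test function, with λ scaled as λ0/n following Remark 1.

```python
# Sketch of the variance-penalized DV/KL objective (14) evaluated on samples.
import torch

def penalized_kl_objective(g_q, g_p, lam0):
    """g_q, g_p: test function evaluated on samples from Q and P; lam0: base penalty strength."""
    lam = lam0 / g_q.numel()                      # lambda = lambda_0 / n, as in Remark 1
    exp_gp = torch.exp(g_p)
    dv_term = g_q.mean() - torch.log(exp_gp.mean())
    penalty = g_q.var() + exp_gp.var() / exp_gp.mean() ** 2
    return dv_term - lam * penalty
```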
Though the variance penalty introduces bias, as λ → 0 the penalized divergence converges to the corresponding non-penalized divergence, as made precise by the following theorem.
Theorem 2. Let Γ ⊂ M_b(Ω). We have the following convergence results:
$$\lim_{\lambda\to 0^+} D_{KL}^{\Gamma,\lambda}(Q\|P) = D_{KL}^{\Gamma}(Q\|P), \tag{16}$$
$$\lim_{\lambda\to 0^+} R_\alpha^{\Gamma,\lambda}(Q\|P) = R_\alpha^{\Gamma}(Q\|P), \tag{17}$$
and if f*(y) < ∞ for all y ∈ R then
$$\lim_{\lambda\to 0^+} D_f^{\Gamma,\lambda}(Q\|P) = D_f^{\Gamma}(Q\|P). \tag{18}$$
Moreover, under fairly general assumptions it holds that
$$\lim_{\lambda\to\infty} D_{KL}^{\Gamma,\lambda}(Q\|P) = \lim_{\lambda\to\infty} R_\alpha^{\Gamma,\lambda}(Q\|P) = \lim_{\lambda\to\infty} D_f^{\Gamma,\lambda}(Q\|P) = 0. \tag{19}$$
Remark 3. Note that the corresponding statistical estimators, D_f^{Γ,λ}(Q_n‖P_n), etc., have additional bias due to the supremum over g. We present partial results on bias bounds in Appendix D.
The proof of Theorem 2 is given in Appendix B for the zero limit and Theorem 9 for the infinity limit. The same proof techniques can be applied to other divergences with a variational characterization.
Finally, for non-zero λ the penalized divergences (12), (14), (15) retain the divergence property and are therefore appropriate for quantifying the “distance” between probability distributions:
Theorem 4. Under fairly general assumptions on f and Γ (see Appendix B for details), and letting D^{Γ,λ} denote any of D_f^{Γ,λ}, D_{KL}^{Γ,λ}, or R_α^{Γ,λ}, we have D^{Γ,λ}(Q‖P) ≥ 0, with D^{Γ,λ}(Q‖P) = 0 if and only if Q = P.
The proof of Theorem 4 can be found in Appendix B.
3.2 VARIANCE-REDUCED DIVERGENCE ESTIMATION ALGORITHM
We now propose the following divergence neural estimation (DNE) methods with variance penalty, generalizing equations (9)-(10):
$$(\text{DNE-VP}_\lambda)\qquad \sup_{\eta}\left\{H[g_\eta;Q_n,P_n] - \lambda V[g_\eta;Q_n,P_n]\right\}. \tag{20}$$
We will compare the above method to the non-penalized estimator (i.e., with λ = 0)
$$(\text{DNE})\qquad \sup_{\eta} H[g_\eta;Q_n,P_n]. \tag{21}$$
In the above, the test function space is a neural network Γ = {gη, η ∈ E} with parameters η and H denotes the objective functional of the divergence, e.g., for the Rényi divergences (1)
$$H_\alpha[g;Q,P] = \frac{1}{\alpha-1}\log E_Q[e^{(\alpha-1)g}] - \frac{1}{\alpha}\log E_P[e^{\alpha g}], \quad \alpha\neq 0,1 \tag{22}$$
and for f-divergences (2)
$$H_f[g;Q,P] = E_Q[g] - E_P[f^*(g)]. \tag{23}$$
Finally, V is the variance penalty corresponding to the chosen divergence (see Section 3.1), e.g., for Rényi divergences
$$V_\alpha[g;Q_n,P_n] = \frac{1}{(\alpha-1)^2}\frac{\mathrm{Var}_{Q_n}[e^{(\alpha-1)g}]}{(E_{Q_n}[e^{(\alpha-1)g}])^2} + \frac{1}{\alpha^2}\frac{\mathrm{Var}_{P_n}[e^{\alpha g}]}{(E_{P_n}[e^{\alpha g}])^2}, \quad \alpha\neq 0,1 \tag{24}$$
and for f-divergences
$$V_f[g;Q_n,P_n] = \mathrm{Var}_{Q_n}[g] + \mathrm{Var}_{P_n}[f^*(g)]. \tag{25}$$
We solve equation (20) via the Adam algorithm (Kingma & Ba, 2014), a stochastic gradient descent method.
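For illustration, a minimal DNE-VPλ training loop for the Rényi case (equations (20), (22) and (24)) could look as follows; the network architecture and hyper-parameters here are our own arbitrary choices, not those used in the paper.

```python
# Illustrative DNE-VP_lambda training loop for the Renyi objective, eqs. (20), (22), (24).
import torch
import torch.nn as nn

def renyi_objective_and_penalty(g_q, g_p, alpha):
    eq = torch.exp((alpha - 1.0) * g_q)
    ep = torch.exp(alpha * g_p)
    H = torch.log(eq.mean()) / (alpha - 1.0) - torch.log(ep.mean()) / alpha   # eq. (22)
    V = (eq.var() / eq.mean() ** 2) / (alpha - 1.0) ** 2 \
        + (ep.var() / ep.mean() ** 2) / alpha ** 2                            # eq. (24)
    return H, V

def train_dne_vp(x_q, x_p, alpha=0.5, lam=0.1, steps=2000, lr=1e-3):
    # x_q, x_p: float tensors of shape (n, d) with samples from Q and P.
    g = nn.Sequential(nn.Linear(x_q.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(g.parameters(), lr=lr)
    for _ in range(steps):
        H, V = renyi_objective_and_penalty(g(x_q).squeeze(-1), g(x_p).squeeze(-1), alpha)
        loss = -(H - lam * V)        # gradient ascent on the penalized objective (20)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        H, _ = renyi_objective_and_penalty(g(x_q).squeeze(-1), g(x_p).squeeze(-1), alpha)
    return H.item()                  # divergence estimate at the optimizer
```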
4 RESULTS ON SYNTHETIC DATASETS
Figure 1 presents the statistical estimation of the Rényi divergence between two one-dimensional Gaussians which both have zero mean but different variance values. The order of the Rényi divergence, α, controls how much weight is put on the tails of the distributions, so the estimate can become very sensitive to the few samples from the tails. The same conclusion can be deduced from the variational formula (i.e., equation (1), where α multiplies the exponentials' argument). Therefore, a larger α value implies larger statistical variance. Indeed, high estimation variance is observed with DNE (upper leftmost panel of Figure 1) despite the fact that we applied truncation as proposed by Song & Ermon (2019) with the truncation threshold set to 1. In contrast, the DNE-VPλ estimator with λ = 0.1 greatly reduces the statistical variance even when α is large (lower leftmost panel). For fairness, we imposed the same truncation operation on the output of DNE-VPλ. We report an 80% reduction of variance for α = 2, which becomes 99% for α = 10.
The proposed approach introduces an additional hyper-parameter, λ, which controls the strength of the VP. Our theory suggests that λ should depend on the sample size (and perhaps also on the other parameters); therefore we perform two sets of experiments. In the first experiment, we explore the range of optimal values for λ in terms of MedAE.¹ As is evident from the middle panels of Figure 1, λ-values in the vicinity of 0.1 are a reasonable compromise between variance and bias. In the second experiment, we demonstrate the performance in terms of MedAE as a function of the sample size, N. As suggested in Remark 1, monotone performance is obtained when λ is inversely proportional to N (blue dashed line in the rightmost upper panel of Figure 1).
Our second synthetic example constitutes the estimation of MI using various approaches with and without VP. Here, we let Q be a zero-mean multivariate correlated Gaussian random vector of dimension d. We impose element-wise correlation, i.e., corr(x_i, x_{d/2+j}) = δ_{i,j}ρ, on the samples x ∼ Q
¹Recall that MedAE stands for median absolute error, a metric that is more robust to outliers.
where i, j = 1, . . . , d/2 and δ_{i,j} is Kronecker's delta. With P we denote the product of the marginals, which in this case is simply a zero-mean standardized multivariate Gaussian. Figure 2 presents the estimated MI per training step. We consider the Rényi-based MI with α = 0.5 as well as the standard MI using the DV variational formula. Notice that these two variants result in different true values (black lines in Figure 2). The plotted results demonstrate the successful reduction of variance when VP is added to the objective functional. Interestingly, the extension of VP to the InfoNCE and CLUB estimators (second row of panels in Figure 2) implies that our approach can be applied to any MI estimator, thus offering a general variance reduction framework. Bias, variance and MSE plots as well as several more experiments can be found in Appendix F.
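A data-generation sketch for this experiment (ours; the values of d, ρ and the sample size are arbitrary choices) is shown below; the closed-form KL-based MI of the d/2 correlated Gaussian pairs serves as the ground truth for the DV-based variant.

```python
# Sketch (illustrative): correlated-Gaussian data for the MI experiment of Section 4.
import numpy as np

def make_mi_data(n=10000, d=20, rho=0.7, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.eye(d)
    for i in range(d // 2):
        cov[i, d // 2 + i] = cov[d // 2 + i, i] = rho        # corr(x_i, x_{d/2+i}) = rho
    x_q = rng.multivariate_normal(np.zeros(d), cov, size=n)  # samples from the joint Q
    x_p = rng.standard_normal((n, d))                        # product of marginals P
    true_mi = -(d // 2) * 0.5 * np.log(1.0 - rho ** 2)       # closed-form KL-based MI
    return x_q, x_p, true_mi
```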
5 REAL DATA APPLICATIONS
5.1 DETECTING RARE BIOLOGICAL SUB-POPULATIONS
Using the dataset from Levine et al. (2015), we test the efficacy of DNE-VPλ in discriminating cell populations which are contaminated with a rare sub-population with distinguishable statistical properties. Specifically, we consider single-cell mass cytometry measurements on 16 bone marrow protein markers² (i.e., d = 16) coming from healthy and diseased individuals with acute myeloid leukemia. For each run we created three subsets of healthy samples with sample size N = 20K, which we denote by P, and one dataset as a mixture of 99% healthy and 1% diseased samples, which is denoted by Q. Notice that the actual number of diseased samples is only 200; thus it is considered a rare sub-population.
For Rényi divergence with α = 0.5 (left panels in Figure 3), both DNE and DNE-VPλ are stable. Despite the improvement in the separation of the two histograms, the observed variance reduction
²Data was accessed from https://community.cytobank.org/cytobank/experiments/46098/illustrations/121588
of DNE-VPλ is minimal and not enough to discriminate between the healthy distribution and the distribution contaminated with 1% diseased samples. When considering the Rényi divergence with α = 1.1, we observe that DNE fails to produce stable estimates. In contrast, DNE-VPλ always computes stable estimates. Additionally, the two histograms are satisfactorily separated, implying that larger values of α are crucial, provided there is a way to handle the statistical variance. For completeness, Table 1 reports the first and second order statistics of the histograms shown in Figure 3.
5.2 DISENTANGLED REPRESENTATION LEARNING IN SPEECH SYNTHESIS
An important application of MI is disentangled representation learning. In the context of representation disentanglement, the extraction of meaningful latent features for high-dimensional data is challenging, especially when explicit knowledge needs to be distilled into interpretable representations. One popular approach to enforce representation disentanglement is via MI minimization. Moreover, a superior disentanglement allows a greater degree of interpretability and controllability, especially for generative models that maintain high production capacity. In this section, we employ the proposed DNE-VPλ estimator for MI estimation in order to learn disentangled representations, particularly in the context of speech synthesis and analysis.
A universal text-to-speech synthesizer can generate speech from text with speaker characteristics and speaking style similar to a reference signal. Previous works aimed to encode the information from reference speech into fixed-length style and speaker embeddings using trainable encoders (Wang et al., 2018; Tjandra et al., 2020; Chien et al., 2021; Tan et al., 2021). The major challenges for such speech synthesizers are controllability and generalisability, especially when trying to generalize the models to multiple speakers and multiple styles. During training, content information is leaked into the style embeddings (“content leakage”) and speaker information into the style embeddings (“style leakage”). Thus, at inference, when the reference speech has content that differs from the input text, the decoder expects the content from the style vector and ignores part of the input text. Moreover, speaker information may be drawn from the style encoder, leading to completely different speaker attributes. To alleviate this, Paul et al. (2021) suggested replacing the KL-based MI with a Rényi-based MI and minimizing the Rényi divergence between the joint distribution and the product of marginals for the content-style and style-speaker pairs. However, reliable estimation of the Rényi divergence was problematic due to high statistical variance. Taking advantage of the proposed variance reduction technique, we employ a VP term in the loss function, which is denoted as DNE-VPλ (Rα). By doing so, the content, style, and speaker spaces become representative and (ideally) independent of each other. We introduce two variations of this framework: the sum of three Rényi divergences DNE(Rα=0 + Rα=0.5 + Rα=1) (i.e., the sum of the corresponding objective functionals) and DNE(Rα=0.5). We tested several different λ values, aiming to reduce the statistical variance of the adversarial component. Notice that larger λ values were helpful in this application.
We evaluate the performance of the disentanglement strategies using three performance scores from 100 random samples, shown in Table 2. Mel-cepstral distortion (MCD) measures the spectral distance between the synthesized and reference mel-spectrum features. Root mean squared error (RMSE) evaluates the similarity in F0 modeling between reference and synthesized speech. Lastly, the content preservation criterion is evaluated by word error rate (WER). During inference, we evaluate the performance on two conditions: ‘no shuffle’ and ‘shuffle’. ‘No shuffle’ feeds the same reference speech into the style and speaker encoders together with its corresponding text to predict the speech features, whereas ‘shuffle’ feeds random speech. We observe that the proposed DNE-VPλ variants outperform the baseline approaches without VP in terms of all evaluation metrics. Our proposed systems greatly reduced content leakage, improving the word error rate by approximately 5-18% relative to the baseline systems. Furthermore, the RMSE-F0 and MCD scores show that the disentanglement module during training assists the TTS to achieve more accurate rendering of prosodic patterns as well as synthesizing proper speech content for its corresponding text without any significant leakage issues. | 1. What is the focus and contribution of the paper regarding statistical divergence estimation?
2. What are the strengths of the proposed approach, particularly in terms of variance reduction?
3. What are the weaknesses of the paper, especially regarding comparisons with other methods?
4. Do you have any concerns about the proposed method, such as alleviating the bias issue?
5. What are some minor points or linguistic errors in the review? | Summary Of The Paper
Review | Summary Of The Paper
This paper takes the variational view of the statistical estimation of statistical divergences. Standard variational formulas suffer from high variance when estimated with finite samples; this paper proposes to address the problem by adding a variance penalty regularisation term to the objective functional, thus trading variance for bias. The bias-variance trade-off is controlled via the regularisation coefficient. From this new formulation, the paper derives a new low-variance Neural Divergence Estimation Algorithm with the variance regularisation technique.
Review
Strengths:
The paper clearly motivates the problem of high variance in the estimation of statistical divergences. Despite being heavy on notation, the manuscript is well-written and easy to follow, and the proposed solution is clearly laid out.
The description of the method seemed mostly correct to me, although I did not verify the mathematics in detail. The theoretical results are also a useful addition.
The experimental section assesses some of the properties of the proposed estimator sufficiently.
Weaknesses:
One of my concerns is that this paper does not compare to other variance reduction methods. The paper mentions that there are some other methods in the literature that apply techniques such as antithetic sampling to reduce the variance, but I did not see a direct comparison to such methods. How would they fare against the variance penalty method? Can the VP be used to supplement them?
Regarding the bias issue of the proposed estimator, would decaying the regularisation coefficient during the optimisation process help with alleviating this issue? If yes, I think this point can be explored in the experiments as a “best-of-both-worlds” approach.
I spotted some linguistic errors, e.g. "such as appear in equation (1) and equation (8)” and “there we also prove that”.
Minor point: Missing error bars on MedAE in Figure (1). |
ICLR | Title
Learning Python Code Suggestion with a Sparse Pointer Network
Abstract
To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.
1 INTRODUCTION
Integrated development environments (IDEs) are essential tools for programmers. Especially when a developer is new to a codebase, one of their most useful features is code suggestion: given a piece of code as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier or a function call, including API calls. While extensive support exists for statically-typed languages such as Java, code suggestion for dynamic languages like Python is harder and less well supported because of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code.
Recently, methods from statistical natural language processing (NLP) have been used to train code suggestion systems from code usage in large code repositories (Hindle et al., 2012; Allamanis & Sutton, 2013; Tu et al., 2014). To this end, usually an n-gram language model is trained to score possible completions. Neural language models for code suggestion (White et al., 2015; Das & Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, these standard neural language models are limited by the so-called hidden state bottleneck, i.e., all context information has to be stored in a fixed-dimensional internal vector representation. This limitation restricts such models to local phenomena and does not capture very long-range semantic relationships like suggesting calling a function that has been defined many tokens before.
To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic for crawling high-quality code repositories from GitHub. We investigate, for the first time, the use of attention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement
in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-range Python dependencies by selectively attending over the introduction of identifiers as determined by examining the Abstract Syntax Tree. The model is a form of pointer network (Vinyals et al., 2015a), and learns to dynamically choose between syntax-aware pointing for modeling long-range dependencies and free form generation to deal with local phenomena, based on the current context.
Our contributions are threefold: (i) We release a code suggestion corpus of 41M lines of Python code crawled from GitHub, (ii) We introduce a sparse attention mechanism that captures very long-range dependencies for code suggestion of this dynamic programming language efficiently, and (iii) We provide a qualitative analysis demonstrating that this model is indeed able to learn such long-range dependencies.
2 METHODS
We first revisit neural language models, before briefly describing how to extend such a language model with an attention mechanism. Then we introduce a sparse attention mechanism for a pointer network that can exploit the Python abstract syntax tree of the current context for code suggestion.
2.1 NEURAL LANGUAGE MODEL
Code suggestion can be approached by a language model that measures the probability of observing a sequence of tokens in a Python program. For example, for the sequence S = a1, . . . , aN , the joint probability of S factorizes according to
$$P_\theta(S) = P_\theta(a_1)\cdot\prod_{t=2}^{N} P_\theta(a_t \mid a_{t-1},\ldots,a_1) \tag{1}$$
where the parameters θ are estimated from a training corpus. Given a sequence of Python tokens, we seek to predict the next M tokens at+1, . . . , at+M that maximize Equation 1
$$\operatorname*{arg\,max}_{a_{t+1},\ldots,a_{t+M}} P_\theta(a_1,\ldots,a_t,a_{t+1},\ldots,a_{t+M}). \tag{2}$$
In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber, 1997). This neural language model estimates the probabilities in Equation 1 using the output vector of an LSTM at time step t (denoted ht here) according to
$$P_\theta(a_t = \tau \mid a_{t-1},\ldots,a_1) = \frac{\exp(v_\tau^T h_t + b_\tau)}{\sum_{\tau'}\exp(v_{\tau'}^T h_t + b_{\tau'})} \tag{3}$$
where vτ is a parameter vector associated with token τ in the vocabulary.
Neural language models can, in theory, capture long-term dependencies in token sequences through their internal memory. However, as this internal memory has fixed dimension and can be updated at every time step, such models often only capture local phenomena. In contrast, we are interested in very long-range dependencies like referring to a function identifier introduced many tokens in the past. For example, a function identifier may be introduced at the top of a file and only used near the bottom. In the following, we investigate various external memory architectures for neural code suggestion.
2.2 ATTENTION
A straight-forward approach to capturing long-range dependencies is to use a neural attention mechanism (Bahdanau et al., 2014) on the previous K output vectors of the language model. Attention mechanisms have been successfully applied to sequence-to-sequence tasks such as machine translation (Bahdanau et al., 2014), question-answering (Hermann et al., 2015), syntactic parsing (Vinyals et al., 2015b), as well as dual-sequence modeling like recognizing textual entailment (Rocktäschel et al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previous output vectors. Recently, these mechanisms were applied to language modelling by Cheng et al. (2016) and Tran et al. (2016).
Formally, an attention mechanism with a fixed memory M_t ∈ R^{k×K} of K vectors m_i ∈ R^k for i ∈ [1,K] produces an attention distribution α_t ∈ R^K and a context vector c_t ∈ R^k at each time step t according to Equations 4 to 7. Furthermore, W^M, W^h ∈ R^{k×k} and w ∈ R^k are trainable parameters. Finally, note that 1_K represents a K-dimensional vector of ones.
$$M_t = [m_1 \;\ldots\; m_K] \in \mathbb{R}^{k\times K} \tag{4}$$
$$G_t = \tanh\!\left(W^M M_t + \mathbf{1}_K^{T}(W^h h_t)\right) \in \mathbb{R}^{k\times K} \tag{5}$$
$$\alpha_t = \mathrm{softmax}(w^T G_t) \in \mathbb{R}^{1\times K} \tag{6}$$
$$c_t = M_t\,\alpha_t^{T} \in \mathbb{R}^{k} \tag{7}$$
For language modeling, we populate M_t with a fixed window of the previous K LSTM output vectors. To obtain a distribution over the next token, we combine the context vector c_t of the attention mechanism with the output vector h_t of the LSTM using a trainable projection matrix W^A ∈ R^{k×2k}. The resulting final output vector n_t ∈ R^k encodes the next-word distribution and is projected to the size of the vocabulary |V|. Subsequently, we apply a softmax to arrive at a probability distribution y_t ∈ R^{|V|} over the next token. This process is presented in Equation 9, where W^V ∈ R^{|V|×k} and b^V ∈ R^{|V|} are trainable parameters.
$$n_t = \tanh\left(W^A \begin{bmatrix} h_t \\ c_t \end{bmatrix}\right) \in \mathbb{R}^{k} \tag{8}$$
$$y_t = \mathrm{softmax}(W^V n_t + b^V) \in \mathbb{R}^{|V|} \tag{9}$$
The problem with the attention mechanism above is that it quickly becomes computationally expensive for large K. Moreover, attending over many memories can make training hard, as a lot of noise is introduced in the early stages of optimization when the LSTM outputs (and thus the memory M_t) are more or less random. To alleviate these problems we now turn to pointer networks and a simple heuristic for populating M_t that permits the efficient retrieval of identifiers in a large history of Python code.
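To make Equations 4 to 9 concrete, the following sketch (our illustration with assumed shapes, not the paper's TensorFlow code) computes one attention step over a window of K previous output vectors:

```python
# Sketch of the window-attention step of Eqs. (4)-(9) for a single time step.
import torch

def attention_step(M_t, h_t, W_M, W_h, w, W_A):
    # M_t: (k, K) memory of previous LSTM outputs; h_t: (k,) current LSTM output.
    G_t = torch.tanh(W_M @ M_t + (W_h @ h_t).unsqueeze(1))  # (k, K), eq. (5)
    alpha_t = torch.softmax(w @ G_t, dim=-1)                # (K,),   eq. (6)
    c_t = M_t @ alpha_t                                     # (k,),   eq. (7)
    n_t = torch.tanh(W_A @ torch.cat([h_t, c_t]))           # (k,),   eq. (8)
    return n_t, alpha_t

# The next-token distribution of eq. (9) is then softmax(W_V @ n_t + b_V).
```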
2.3 SPARSE POINTER NETWORK
We develop an attention mechanism that provides a filtered view of a large history of Python tokens. At any given time step, the memory consists of context representations of the previous K identifiers introduced in the history. This allows us to model long-range dependencies found in identifier usage. For instance, a class identifier may be declared hundreds of lines of code before it is used. Given a history of Python tokens, we obtain a next-word distribution from a weighted average of the sparse pointer network for identifier reference and a standard neural language model. The weighting of the two is determined by a controller.
Formally, at time-step t, the sparse pointer network operates on a memory M_t ∈ R^{k×K} of only the K previous identifier representations (e.g. function identifiers, class identifiers and so on). In addition, we maintain a vector m_t = [id_1, . . . , id_K] ∈ N^K of symbol ids for these identifier representations (i.e. pointers into the large global vocabulary).
As before, we calculate a context vector c_t using the attention mechanism (Equation 7), but on a memory M_t only containing representations of identifiers that were declared in the history. Next, we obtain a pseudo-sparse distribution over the global vocabulary from
$$s_t[i] = \begin{cases} \alpha_t[j] & \text{if } m_t[j] = i \\ -C & \text{otherwise} \end{cases} \tag{10}$$
$$i_t = \mathrm{softmax}(s_t) \in \mathbb{R}^{|V|} \tag{11}$$
where −C is a large negative constant (e.g. −1000). In addition, we calculate a next-word distribution from a standard neural language model
$$y_t = \mathrm{softmax}(W^V h_t + b^V) \in \mathbb{R}^{|V|} \tag{12}$$
and we use a controller to calculate a distribution λt ∈ R2 over the language model and pointer network for the final weighted next-word distribution y∗t via
$$h_t^\lambda = \begin{bmatrix} h_t \\ x_t \\ c_t \end{bmatrix} \in \mathbb{R}^{3k} \tag{13}$$
$$\lambda_t = \mathrm{softmax}(W^\lambda h_t^\lambda + b^\lambda) \in \mathbb{R}^{2} \tag{14}$$
$$y_t^* = [y_t \;\; i_t]\,\lambda_t \in \mathbb{R}^{|V|} \tag{15}$$
Here, x_t is the representation of the input token, and W^λ ∈ R^{2×3k} and b^λ ∈ R^2 are a trainable weight matrix and bias, respectively. This controller is conditioned on the input, output and context representations. This means that, when deciding whether to refer to an identifier or generate from the global vocabulary, the controller has access to information from the encoded next-word distribution h_t of the standard neural language model, as well as the attention-weighted identifier representations c_t from the current history.
Figure 1 overviews this process. In it, the identifier base_path appears twice, once as an argument to a function and once as a member of a class (denoted by *). Each appearance has a different id in the vocabulary and obtains a different probability from the model. In the example, the model correctly chooses to refer to the member of the class instead of the out-of-scope function argument, although, from a user point-of-view, the suggestion would be the same in both cases.
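The pointer/language-model mixture of Equations 10 to 15 can be sketched as follows (our illustration; the variable names, shapes and the scatter-based construction of s_t are assumptions):

```python
# Sketch of the sparse pointer / language-model mixture of Eqs. (10)-(15).
import torch

def mix_pointer_and_lm(alpha_t, m_t, lm_logits, h_t, x_t, c_t, W_lam, b_lam, C=1000.0):
    V = lm_logits.shape[0]
    # Eqs. (10)-(11): place the K attention weights at their vocabulary ids.
    s_t = torch.full((V,), -C)
    s_t[m_t] = alpha_t                        # m_t: LongTensor of K identifier ids
    i_t = torch.softmax(s_t, dim=0)           # pointer distribution over the vocabulary
    y_t = torch.softmax(lm_logits, dim=0)     # eq. (12): language-model distribution
    # Eqs. (13)-(15): the controller weights the two distributions.
    lam = torch.softmax(W_lam @ torch.cat([h_t, x_t, c_t]) + b_lam, dim=0)
    return lam[0] * y_t + lam[1] * i_t        # eq. (15)
```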
3 LARGE-SCALE PYTHON CORPUS
Previous work on code suggestion either focused on statically-typed languages (particularly Java) or trained on very small corpora. Thus, we decided to collect a new large-scale corpus of the dynamic programming language Python. According to the programming language popularity website Pypl (Carbonnelle, 2016), Python is the second most popular language after Java. It is also the 3rd most common language in terms of number of repositories on the open-source code repository GitHub, after JavaScript and Java (Zapponi, 2016).
We collected a corpus of 41M lines of Python code from GitHub projects. Ideally, we would like this corpus to only contain high-quality Python code, as our language model learns to suggest code from how users write code. However, it is difficult to automatically assess what constitutes high-quality code. Thus, we resort to the heuristic that popular code projects tend to be of good quality. There are
two metrics on GitHub that we can use for this purpose, namely stars (similar to bookmarks) and forks (copies of a repository that allow users to freely experiment with changes without affecting the original repository). Similar to Allamanis & Sutton (2013) and Allamanis et al. (2014), we select Python projects with more than 100 stars, sort by the number of forks descending, and take the top 1000 projects. We then removed projects that did not compile with Python3, leaving us with 949 projects. We split the corpus on the project level into train, dev, and test. Table 1 presents the corpus statistics.
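The selection heuristic amounts to a simple filter-and-sort; the sketch below is illustrative only and assumes a pre-collected metadata table with name, stars and forks columns (the paper does not specify its crawling code).

```python
# Illustrative repository selection (Section 3) from a hypothetical metadata CSV
# with columns: name, stars, forks.
import pandas as pd

repos = pd.read_csv("python_repo_metadata.csv")      # assumed input file
selected = (repos[repos["stars"] > 100]
            .sort_values("forks", ascending=False)
            .head(1000))
selected["name"].to_csv("selected_repos.txt", index=False, header=False)
```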
3.1 NORMALIZATION OF IDENTIFIERS
Unsurprisingly, the long tail of words in the vocabulary consists of rare identifiers. To improve generalization, we normalize identifiers before feeding the resulting token stream to our models. That is, we replace every identifier name with an anonymous identifier indicating the identifier group (class, variable, argument, attribute or function) concatenated with a random number that makes the identifier unique in its scope. Note that we only replace novel identifiers defined within a file. Identifier references to external APIs and libraries are left untouched. Consistent with previous corpus creation for code suggestion (e.g. Khanh Dam et al., 2016; White et al., 2015), we replace numerical constant tokens with $NUM$, remove comments, reformat the code, and replace tokens appearing less than five times with an $OOV$ (out of vocabulary) token.
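A rough sketch of this normalization step is given below (ours, not the released preprocessing code); it tags names defined in the file with their group and an index (the paper uses a random number unique within the scope), replaces numeric literals with $NUM$, and drops comments.

```python
# Illustrative identifier normalization in the spirit of Section 3.1.
import ast
import io
import tokenize

def collect_defined_names(source):
    """Map names defined in this file to a group: function, argument, class or variable."""
    groups = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            groups.setdefault(node.name, "function")
            for arg in node.args.args:
                groups.setdefault(arg.arg, "argument")
        elif isinstance(node, ast.ClassDef):
            groups.setdefault(node.name, "class")
        elif isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    groups.setdefault(target.id, "variable")
    return groups

def normalize(source):
    groups = collect_defined_names(source)
    renamed = {}  # original name -> anonymized name
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NUMBER:
            out.append("$NUM$")
        elif tok.type == tokenize.NAME and tok.string in groups:
            if tok.string not in renamed:
                renamed[tok.string] = groups[tok.string] + str(len(renamed) + 1)
            out.append(renamed[tok.string])
        elif tok.type == tokenize.COMMENT:
            continue  # comments are removed
        else:
            out.append(tok.string)
    return " ".join(out)
```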
4 EXPERIMENTS
Although previous work by White et al. (2015) already established that a simple neural language model outperforms an n-gram model for code suggestion, we include a number of n-gram baselines to confirm this observation. Specifically, we use n-gram models for n ∈ {3, 4, 5, 6} with Modified Kneser-Ney smoothing (Kneser & Ney, 1995) from the Kyoto Language Modelling Toolkit (Neubig, 2012).
We train the sparse pointer network using mini-batch SGD with a batch size of 30 and truncated backpropagation through time (Werbos, 1990) with a history of 20 identifier representations. We use
an initial learning rate of 0.7 and decay it by 0.9 after every epoch. As additional baselines, we test a neural language model with LSTM units with and without attention. For the attention language models, we experiment with a fixed-window attention memory of the previous 20 and 50 tokens respectively, and a batch size of 75. We found during testing that the baseline models performed worse with a batch size of 30 (the same as the sparse pointer network). We therefore chose to report the stronger results obtained with a batch size of 75.
All neural language models were developed in TensorFlow (Abadi et al., 2016) and trained using cross-entropy loss. While processing a Python source code file, the last recurrent state of the RNN is fed as the initial state of the subsequent sequence of the same file and reset between files. All models use an input and hidden size of 200, an LSTM forget gate bias of 1 (Jozefowicz et al., 2015), gradient norm clipping of 5 (Pascanu et al., 2013), and randomly initialized parameters in the interval (−0.05, 0.05). As regularizer, we use a dropout of 0.1 on the input representations. Furthermore, we use a sampled softmax (Jean et al., 2015) with a log-uniform sampling distribution and a sample size of 1000.
5 RESULTS
We evaluate all models using perplexity (PP), as well as accuracy of the top prediction (Acc) and the top five predictions (Acc@5). The results are summarized in Table 2.
We can confirm that for code suggestion neural models outperform n-gram language models by a large margin. Furthermore, adding attention improves the results substantially (2.3 lower perplexity and 3.4 percentage points increased accuracy). Interestingly, this increase can be attributed to a superior prediction of identifiers, which increased from an accuracy of 2.1% to 21.4%. An LSTM with an attention window of 50 gives us the best accuracy for the top prediction. We achieve further improvements for perplexity and accuracy of the top five predictions by using a sparse pointer network that uses a smaller memory of the past 20 identifier representations.
5.1 QUALITATIVE ANALYSIS
Figures 3a-d show a code suggestion example involving an identifier usage. While the LSTM baseline is uncertain about the next token, we get a sensible prediction by using attention or the sparse pointer network. The sparse pointer network provides more reasonable alternative suggestions beyond the correct top suggestion.
Figures 3e-h show the use-case referring to a class attribute declared 67 tokens in the past. Only the Sparse Pointer Network makes a good suggestion. Furthermore, the attention weights in 3i demonstrate that this model distinguished attributes from other groups of identifiers. We give a full example of a token-by-token suggestion of the Sparse Pointer Network in Figure 4 in the Appendix.
(a) Code snippet for referencing a variable. (b) LSTM Model. (c) LSTM w/ Attention 50. (d) Sparse Pointer Network.
6 RELATED WORK
Previous code suggestion work using methods from statistical NLP has mostly focused on n-gram models. Much of this work is inspired by Hindle et al. (2012) who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. Subsequently, Tu et al. (2014) improved upon Hindle et al.’s work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. Tu et al.’s idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper.
While the majority of preceding work trained on small corpora, Allamanis & Sutton (2013) created a corpus of 352M lines of Java code which they analysed with n-gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. White et al. (2015) later demonstrated that neural language models outperform n-gram models for code suggestion. They compared various n-gram models (up to nine grams), including Tu et al.’s cache model, with a basic RNN neural language model. Khanh Dam et al. (2016) compared White et al.’s basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code. Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies.
The combination of lagged attention mechanisms with language modelling is inspired by Cheng et al. (2016), who equipped LSTM cells with a fixed-length memory tape rather than a single memory cell. They achieved promising results on the standard Penn Treebank benchmark corpus (Marcus et al., 1993). Similarly, Tran et al. (2016) added a memory block to LSTMs for language modelling of English, German and Italian and outperformed both n-gram and neural language models. Their memory encompasses representations of all possible words in the vocabulary rather than providing a sparse view as we do. Attention mechanisms were previously applied to the study of source code by Allamanis et al., who used a convolutional neural network combined with an attention mechanism to generate method names from bodies.
An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs) which exploit the formal grammar specifications and well-defined, deterministic parsers available for source code. These were used by Allamanis & Sutton (2014) to extract idiomatic patterns from source code. A weakness of PCFGs is their inability to model context-dependent rules of programming languages such as that variables need to be declared before being used. Maddison & Tarlow (2014) added context-aware variables to their PCFG model in order to capture such rules.
Ling et al. (2016) recently used a pointer network to generate code from natural language descriptions. Our use of a controller for deciding whether to generate from a language model or copy an identifier using a sparse pointer network is inspired by their latent code predictor. However, their inputs (textual descriptions) are short whereas code suggestion requires capturing very long-range dependencies that we addressed by a filtered view on the memory of previous identifier representations.
7 CONCLUSIONS AND FUTURE WORK
In this paper, we investigated neural language models for code suggestion of the dynamically-typed programming language Python. We released a corpus of 41M lines of Python crawled from GitHub and compared n-gram, standard neural language models, and attention. By using attention, we observed an order of magnitude more accurate prediction of identifiers. Furthermore, we proposed a sparse pointer network that can efficiently capture long-range dependencies by only operating on a filtered view of a memory of previous identifier representations. This model achieves the lowest perplexity and best accuracy among the top five predictions. The Python corpus and the code for our models is released at https://github.com/uclmr/pycodesuggest.
The presented methods were only tested for code suggestion within the same Python file. We are interested in scaling the approach to the level of entire code projects and collections thereof, as well as integrating a trained code suggestion model into an existing IDE. Furthermore, we plan to work on code completion, i.e., models that provide a likely continuation of a partial token, using character language models (Graves, 2013).
ACKNOWLEDGMENTS
This work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award. | 1. What are the key contributions of the paper in improving neural language models?
2. How does the proposed approach differ from traditional attention mechanisms?
3. What is the significance of the Python codebase experiments in evaluating the model's performance?
4. Are there any concerns or limitations regarding the model's efficiency and scalability?
5. What further investigations or ablation studies could help better understand the effectiveness of the proposed approach? | Review | Review
This paper presents an improved neural language model designed to capture selected long-term dependencies, i.e., to predict more accurately the next identifier for dynamic programming languages such as Python. The improvements are obtained by:
1) replacing the fixed-window attention with a pointer network, in which the memory only consists of context representations of the previous K identifiers introduced over the entire history.
2) a conventional neural LSTM-based language model is combined with such a sparse pointer network via a controller, which linearly combines the predictions of both components using dynamic weights, decided by the input, hidden state, and the context representations at the current time step.
Such a model avoids the need for a large attention window to predict the next identifier, which usually requires long-term dependencies in the programming language. This is partly validated by the Python codebase experiments in the paper (the codebase being another contribution of this paper).
While the paper still misses some critical information that I would like to see, including how the sparse pointer network performance changes with different sizes of K, how computationally efficient it is at both training and inference time compared to an LSTM w/ attention of various window sizes, and ablation experiments about how much (1) and (2) contribute respectively, it might be of interest to the ICLR community to see it accepted.
ICLR | Title
Learning Python Code Suggestion with a Sparse Pointer Network
Abstract
To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.
1 INTRODUCTION
Integrated development environments (IDEs) are essential tools for programmers. Especially when a developer is new to a codebase, one of their most useful features is code suggestion: given a piece of code as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier or a function call, including API calls. While extensive support exists for statically-typed languages such as Java, code suggestion for dynamic languages like Python is harder and less well supported because of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code.
Recently, methods from statistical natural language processing (NLP) have been used to train code suggestion systems from code usage in large code repositories (Hindle et al., 2012; Allamanis & Sutton, 2013; Tu et al., 2014). To this end, usually an n-gram language model is trained to score possible completions. Neural language models for code suggestion (White et al., 2015; Das & Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, these standard neural language models are limited by the so-called hidden state bottleneck, i.e., all context information has to be stored in a fixed-dimensional internal vector representation. This limitation restricts such models to local phenomena and does not capture very long-range semantic relationships like suggesting calling a function that has been defined many tokens before.
To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic for crawling high-quality code repositories from GitHub. We investigate, for the first time, the use of attention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement
in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-range Python dependencies by selectively attending over the introduction of identifiers as determined by examining the Abstract Syntax Tree. The model is a form of pointer network (Vinyals et al., 2015a), and learns to dynamically choose between syntax-aware pointing for modeling long-range dependencies and free form generation to deal with local phenomena, based on the current context.
Our contributions are threefold: (i) We release a code suggestion corpus of 41M lines of Python code crawled from GitHub, (ii) We introduce a sparse attention mechanism that captures very long-range dependencies for code suggestion of this dynamic programming language efficiently, and (iii) We provide a qualitative analysis demonstrating that this model is indeed able to learn such long-range dependencies.
2 METHODS
We first revisit neural language models, before briefly describing how to extend such a language model with an attention mechanism. Then we introduce a sparse attention mechanism for a pointer network that can exploit the Python abstract syntax tree of the current context for code suggestion.
2.1 NEURAL LANGUAGE MODEL
Code suggestion can be approached by a language model that measures the probability of observing a sequence of tokens in a Python program. For example, for the sequence S = a1, . . . , aN , the joint probability of S factorizes according to
$$P_\theta(S) = P_\theta(a_1)\cdot\prod_{t=2}^{N} P_\theta(a_t \mid a_{t-1},\ldots,a_1) \tag{1}$$
where the parameters θ are estimated from a training corpus. Given a sequence of Python tokens, we seek to predict the next M tokens at+1, . . . , at+M that maximize Equation 1
$$\operatorname*{arg\,max}_{a_{t+1},\ldots,a_{t+M}} P_\theta(a_1,\ldots,a_t,a_{t+1},\ldots,a_{t+M}). \tag{2}$$
In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber, 1997). This neural language model estimates the probabilities in Equation 1 using the output vector of an LSTM at time step t (denoted ht here) according to
$$P_\theta(a_t = \tau \mid a_{t-1},\ldots,a_1) = \frac{\exp(v_\tau^T h_t + b_\tau)}{\sum_{\tau'}\exp(v_{\tau'}^T h_t + b_{\tau'})} \tag{3}$$
where vτ is a parameter vector associated with token τ in the vocabulary.
Neural language models can, in theory, capture long-term dependencies in token sequences through their internal memory. However, as this internal memory has fixed dimension and can be updated at every time step, such models often only capture local phenomena. In contrast, we are interested in very long-range dependencies like referring to a function identifier introduced many tokens in the past. For example, a function identifier may be introduced at the top of a file and only used near the bottom. In the following, we investigate various external memory architectures for neural code suggestion.
2.2 ATTENTION
A straight-forward approach to capturing long-range dependencies is to use a neural attention mechanism (Bahdanau et al., 2014) on the previous K output vectors of the language model. Attention mechanisms have been successfully applied to sequence-to-sequence tasks such as machine translation (Bahdanau et al., 2014), question-answering (Hermann et al., 2015), syntactic parsing (Vinyals et al., 2015b), as well as dual-sequence modeling like recognizing textual entailment (Rocktäschel et al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previous output vectors. Recently, these mechanisms were applied to language modelling by Cheng et al. (2016) and Tran et al. (2016).
Formally, an attention mechanism with a fixed memory M_t ∈ R^{k×K} of K vectors m_i ∈ R^k for i ∈ [1,K] produces an attention distribution α_t ∈ R^K and a context vector c_t ∈ R^k at each time step t according to Equations 4 to 7. Furthermore, W^M, W^h ∈ R^{k×k} and w ∈ R^k are trainable parameters. Finally, note that 1_K represents a K-dimensional vector of ones.
$$M_t = [m_1 \;\ldots\; m_K] \in \mathbb{R}^{k\times K} \tag{4}$$
$$G_t = \tanh\!\left(W^M M_t + \mathbf{1}_K^{T}(W^h h_t)\right) \in \mathbb{R}^{k\times K} \tag{5}$$
$$\alpha_t = \mathrm{softmax}(w^T G_t) \in \mathbb{R}^{1\times K} \tag{6}$$
$$c_t = M_t\,\alpha_t^{T} \in \mathbb{R}^{k} \tag{7}$$
For language modeling, we populate M_t with a fixed window of the previous K LSTM output vectors. To obtain a distribution over the next token, we combine the context vector c_t of the attention mechanism with the output vector h_t of the LSTM using a trainable projection matrix W^A ∈ R^{k×2k}. The resulting final output vector n_t ∈ R^k encodes the next-word distribution and is projected to the size of the vocabulary |V|. Subsequently, we apply a softmax to arrive at a probability distribution y_t ∈ R^{|V|} over the next token. This process is presented in Equation 9, where W^V ∈ R^{|V|×k} and b^V ∈ R^{|V|} are trainable parameters.
$$n_t = \tanh\left(W^A \begin{bmatrix} h_t \\ c_t \end{bmatrix}\right) \in \mathbb{R}^{k} \tag{8}$$
$$y_t = \mathrm{softmax}(W^V n_t + b^V) \in \mathbb{R}^{|V|} \tag{9}$$
The problem with the attention mechanism above is that it quickly becomes computationally expensive for large K. Moreover, attending over many memories can make training hard, as a lot of noise is introduced in the early stages of optimization when the LSTM outputs (and thus the memory M_t) are more or less random. To alleviate these problems we now turn to pointer networks and a simple heuristic for populating M_t that permits the efficient retrieval of identifiers in a large history of Python code.
2.3 SPARSE POINTER NETWORK
We develop an attention mechanism that provides a filtered view of a large history of Python tokens. At any given time step, the memory consists of context representations of the previous K identifiers introduced in the history. This allows us to model long-range dependencies found in identifier usage. For instance, a class identifier may be declared hundreds of lines of code before it is used. Given a history of Python tokens, we obtain a next-word distribution from a weighted average of the sparse pointer network for identifier reference and a standard neural language model. The weighting of the two is determined by a controller.
Formally, at time-step t, the sparse pointer network operates on a memory M_t ∈ R^{k×K} of only the K previous identifier representations (e.g. function identifiers, class identifiers and so on). In addition, we maintain a vector m_t = [id_1, . . . , id_K] ∈ N^K of symbol ids for these identifier representations (i.e. pointers into the large global vocabulary).
As before, we calculate a context vector c_t using the attention mechanism (Equation 7), but on a memory M_t only containing representations of identifiers that were declared in the history. Next, we obtain a pseudo-sparse distribution over the global vocabulary from
$$s_t[i] = \begin{cases} \alpha_t[j] & \text{if } m_t[j] = i \\ -C & \text{otherwise} \end{cases} \tag{10}$$
$$i_t = \mathrm{softmax}(s_t) \in \mathbb{R}^{|V|} \tag{11}$$
where −C is a large negative constant (e.g. −1000). In addition, we calculate a next-word distribution from a standard neural language model
$$y_t = \mathrm{softmax}(W^V h_t + b^V) \in \mathbb{R}^{|V|} \tag{12}$$
and we use a controller to calculate a distribution λt ∈ R2 over the language model and pointer network for the final weighted next-word distribution y∗t via
$$h_t^\lambda = \begin{bmatrix} h_t \\ x_t \\ c_t \end{bmatrix} \in \mathbb{R}^{3k} \tag{13}$$
$$\lambda_t = \mathrm{softmax}(W^\lambda h_t^\lambda + b^\lambda) \in \mathbb{R}^{2} \tag{14}$$
$$y_t^* = [y_t \;\; i_t]\,\lambda_t \in \mathbb{R}^{|V|} \tag{15}$$
Here, x_t is the representation of the input token, and W^λ ∈ R^{2×3k} and b^λ ∈ R^2 are a trainable weight matrix and bias, respectively. This controller is conditioned on the input, output and context representations. This means that, when deciding whether to refer to an identifier or generate from the global vocabulary, the controller has access to information from the encoded next-word distribution h_t of the standard neural language model, as well as the attention-weighted identifier representations c_t from the current history.
Figure 1 overviews this process. In it, the identifier base_path appears twice, once as an argument to a function and once as a member of a class (denoted by *). Each appearance has a different id in the vocabulary and obtains a different probability from the model. In the example, the model correctly chooses to refer to the member of the class instead of the out-of-scope function argument, although, from a user point-of-view, the suggestion would be the same in both cases.
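A NumPy sketch of one decoding step of the sparse pointer network (Equations 10-15) is shown below. Variable names follow the text, the constant C = 1000 matches the example value above, and all weights are placeholders.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sparse_pointer_step(alpha_t, m_t, h_t, x_t, c_t,
                        W_V, b_V, W_lam, b_lam, vocab_size, C=1000.0):
    # Eqs. 10-11: scatter the K attention weights onto the global vocabulary.
    s_t = np.full(vocab_size, -C)
    s_t[m_t] = alpha_t                     # m_t holds the K identifier ids (integers)
    i_t = softmax(s_t)
    # Eq. 12: standard language-model distribution from the LSTM output.
    y_t = softmax(W_V @ h_t + b_V)
    # Eqs. 13-15: the controller mixes the two distributions.
    lam = softmax(W_lam @ np.concatenate([h_t, x_t, c_t]) + b_lam)
    return lam[0] * y_t + lam[1] * i_t

The returned vector sums to one because it is a convex combination of two probability distributions.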
3 LARGE-SCALE PYTHON CORPUS
Previous work on code suggestion either focused on statically-typed languages (particularly Java) or trained on very small corpora. Thus, we decided to collect a new large-scale corpus of the dynamic programming language Python. According to the programming language popularity website Pypl (Carbonnelle, 2016), Python is the second most popular language after Java. It is also the 3rd most common language in terms of number of repositories on the open-source code repository GitHub, after JavaScript and Java (Zapponi, 2016).
We collected a corpus of 41M lines of Python code from GitHub projects. Ideally, we would like this corpus to only contain high-quality Python code, as our language model learns to suggest code from how users write code. However, it is difficult to automatically assess what constitutes high-quality code. Thus, we resort to the heuristic that popular code projects tend to be of good quality. There are
two metrics on GitHub that we can use for this purpose, namely stars (similar to bookmarks) and forks (copies of a repository that allow users to freely experiment with changes without affecting the original repository). Similar to Allamanis & Sutton (2013) and Allamanis et al. (2014), we select Python projects with more than 100 stars, sort by the number of forks descending, and take the top 1000 projects. We then removed projects that did not compile with Python3, leaving us with 949 projects. We split the corpus on the project level into train, dev, and test. Table 1 presents the corpus statistics.
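A sketch of this selection heuristic is given below; it assumes a hypothetical JSON-lines file of repository metadata with name, stars, and forks fields, which is not a format the authors describe.

import json

def select_projects(metadata_path, min_stars=100, top_n=1000):
    # Keep repositories above the star threshold, then rank by forks.
    with open(metadata_path) as f:
        repos = [json.loads(line) for line in f]
    popular = [r for r in repos if r["stars"] > min_stars]
    popular.sort(key=lambda r: r["forks"], reverse=True)
    return [r["name"] for r in popular[:top_n]]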
3.1 NORMALIZATION OF IDENTIFIERS
Unsurprisingly, the long tail of words in the vocabulary consists of rare identifiers. To improve generalization, we normalize identifiers before feeding the resulting token stream to our models. That is, we replace every identifier name with an anonymous identifier indicating the identifier group (class, variable, argument, attribute or function) concatenated with a random number that makes the identifier unique in its scope. Note that we only replace novel identifiers defined within a file. Identifier references to external APIs and libraries are left untouched. Consistent with previous corpus creation for code suggestion (e.g. Khanh Dam et al., 2016; White et al., 2015), we replace numerical constant tokens with $NUM$, remove comments, reformat the code, and replace tokens appearing less than five times with an $OOV$ (out of vocabulary) token.
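The snippet below sketches how such a mapping could be built with Python's ast module; it is a simplification that covers only function, class, and argument definitions and ignores scoping, unlike the full normalization described above.

import ast

def normalization_map(source):
    mapping = {}
    counters = {"function": 0, "class": 0, "arg": 0}
    def fresh(group):
        counters[group] += 1
        return f"{group}{counters[group]}"
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            mapping.setdefault(node.name, fresh("function"))
            for a in node.args.args:
                mapping.setdefault(a.arg, fresh("arg"))
        elif isinstance(node, ast.ClassDef):
            mapping.setdefault(node.name, fresh("class"))
    return mapping

print(normalization_map("def load(path):\n    return open(path).read()\n"))
# {'load': 'function1', 'path': 'arg1'}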
4 EXPERIMENTS
Although previous work by White et al. (2015) already established that a simple neural language model outperforms an n-gram model for code suggestion, we include a number of n-gram baselines to confirm this observation. Specifically, we use n-gram models for n ∈ {3, 4, 5, 6} with Modified Kneser-Ney smoothing (Kneser & Ney, 1995) from the Kyoto Language Modelling Toolkit (Neubig, 2012).
We train the sparse pointer network using mini-batch SGD with a batch size of 30 and truncated backpropagation through time (Werbos, 1990) with a history of 20 identifier representations. We use
an initial learning rate of 0.7 and decay it by 0.9 after every epoch. As additional baselines, we test a neural language model with LSTM units with and without attention. For the attention language models, we experiment with a fixed-window attention memory of the previous 20 and 50 tokens respectively, and a batch size of 75. We found during testing that the baseline models performed worse with the same batch size as the sparse pointer network of 30. We therefore chose to report the stronger results obtained with a batch size of 75.
All neural language models were developed in TensorFlow (Abadi et al., 2016) and trained using cross-entropy loss. While processing a Python source code file, the last recurrent state of the RNN is fed as the initial state of the subsequent sequence of the same file and reset between files. All models use an input and hidden size of 200, an LSTM forget gate bias of 1 (Jozefowicz et al., 2015), gradient norm clipping of 5 (Pascanu et al., 2013), and randomly initialized parameters in the interval (−0.05, 0.05). As regularizer, we use a dropout of 0.1 on the input representations. Furthermore, we use a sampled softmax (Jean et al., 2015) with a log-uniform sampling distribution and a sample size of 1000.
5 RESULTS
We evaluate all models using perplexity (PP), as well as accuracy of the top prediction (Acc) and the top five predictions (Acc@5). The results are summarized in Table 2.
We can confirm that for code suggestion neural models outperform n-gram language models by a large margin. Furthermore, adding attention improves the results substantially (2.3 lower perplexity and 3.4 percentage points increased accuracy). Interestingly, this increase can be attributed to a superior prediction of identifiers, which increased from an accuracy of 2.1% to 21.4%. An LSTM with an attention window of 50 gives us the best accuracy for the top prediction. We achieve further improvements for perplexity and accuracy of the top five predictions by using a sparse pointer network that uses a smaller memory of the past 20 identifier representations.
5.1 QUALITATIVE ANALYSIS
Figures 3a-d show a code suggestion example involving an identifier usage. While the LSTM baseline is uncertain about the next token, we get a sensible prediction by using attention or the sparse pointer network. The sparse pointer network provides more reasonable alternative suggestions beyond the correct top suggestion.
Figures 3e-h show the use-case referring to a class attribute declared 67 tokens in the past. Only the Sparse Pointer Network makes a good suggestion. Furthermore, the attention weights in 3i demonstrate that this model distinguished attributes from other groups of identifiers. We give a full example of a token-by-token suggestion of the Sparse Pointer Network in Figure 4 in the Appendix.
Figure 3: (a) Code snippet for referencing variable. (b) LSTM Model. (c) LSTM w/ Attention 50. (d) Sparse Pointer Network.
6 RELATED WORK
Previous code suggestion work using methods from statistical NLP has mostly focused on n-gram models. Much of this work is inspired by Hindle et al. (2012) who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. Subsequently, Tu et al. (2014) improved upon Hindle et al.’s work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. Tu et al.’s idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper.
While the majority of preceding work trained on small corpora, Allamanis & Sutton (2013) created a corpus of 352M lines of Java code which they analysed with n-gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. White et al. (2015) later demonstrated that neural language models outperform n-gram models for code suggestion. They compared various n-gram models (up to nine grams), including Tu et al.’s cache model, with a basic RNN neural language model. Khanh Dam et al. (2016) compared White et al.’s basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code. Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies.
The combination of lagged attention mechanisms with language modelling is inspired by Cheng et al. (2016) who equipped LSTM cells with a fixed-length memory tape rather than a single memory cell. They achieved promising results on the standard Penn Treebank benchmark corpus (Marcus et al., 1993). Similarly, Tran et al. (2016) added a memory block to LSTMs for language modelling of English, German and Italian and outperformed both n-gram and neural language models. Their memory encompasses representations of all possible words in the vocabulary rather than providing a sparse view as we do. Attention mechanisms were previously applied to the study of source code by Allamanis et al. who used a convolutional neural network combined with an attention mechanism to generate method names from bodies.
An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs) which exploit the formal grammar specifications and well-defined, deterministic parsers available for source code. These were used by Allamanis & Sutton (2014) to extract idiomatic patterns from source code. A weakness of PCFGs is their inability to model context-dependent rules of programming languages such as that variables need to be declared before being used. Maddison & Tarlow (2014) added context-aware variables to their PCFG model in order to capture such rules.
Ling et al. (2016) recently used a pointer network to generate code from natural language descriptions. Our use of a controller for deciding whether to generate from a language model or copy an identifier using a sparse pointer network is inspired by their latent code predictor. However, their inputs (textual descriptions) are short whereas code suggestion requires capturing very long-range dependencies that we addressed by a filtered view on the memory of previous identifier representations.
7 CONCLUSIONS AND FUTURE WORK
In this paper, we investigated neural language models for code suggestion of the dynamically-typed programming language Python. We released a corpus of 41M lines of Python crawled from GitHub and compared n-gram, standard neural language models, and attention. By using attention, we observed an order of magnitude more accurate prediction of identifiers. Furthermore, we proposed a sparse pointer network that can efficiently capture long-range dependencies by only operating on a filtered view of a memory of previous identifier representations. This model achieves the lowest perplexity and best accuracy among the top five predictions. The Python corpus and the code for our models is released at https://github.com/uclmr/pycodesuggest.
The presented methods were only tested for code suggestion within the same Python file. We are interested in scaling the approach to the level of entire code projects and collections thereof, as well as integrating a trained code suggestion model into an existing IDE. Furthermore, we plan to work on code completion, i.e., models that provide a likely continuation of a partial token, using character language models (Graves, 2013).
ACKNOWLEDGMENTS
This work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award. | 1. What is the focus of the paper regarding code suggestion improvement?
2. How does the proposed approach utilize pointer networks and sparse windows?
3. What are the advantages of using a sparse pointer method compared to attention mechanisms?
4. Why is the construction and filtering of the Python corpus important for future research?
5. Are there any limitations or potential issues with the proposed approach or experimental design? | Review | Review
This paper uses a pointer network over a sparse window of identifiers to improve code suggestion for dynamically-typed languages. Code suggestion seems an area where attention and/or pointers truly show an advantage in capturing long term dependencies.
The sparse pointer method does seem to provide better results than attention for similar window sizes - specifically, comparing a window size of 20 for the attention and sparse pointer method shows the sparse pointer winning fairly definitively across the board. Given a major advantage of the pointer method is being able to use a large window size well thanks to the supervision the pointer provides, it was unfortunate (though understandable due to potential memory issues) not to see larger window sizes. Having a different batch size for the sparse pointer and attention models is unfortunate given it complicates an otherwise straight comparison between the two models.
The construction and filtering of the Python corpus sounds promising but as of now it is still inaccessible (listed in the paper as TODO). Given that code suggestion seems an interesting area for future long term dependency work, it may be promising as an avenue for future task exploration.
Overall this paper and the dataset are likely an interesting contribution even though there are a few potential issues. |
ICLR | Title
Learning Python Code Suggestion with a Sparse Pointer Network
Abstract
To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage point increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.
1 INTRODUCTION
Integrated development environments (IDEs) are essential tools for programmers. Especially when a developer is new to a codebase, one of their most useful features is code suggestion: given a piece of code as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier or a function call, including API calls. While extensive support exists for statically-typed languages such as Java, code suggestion for dynamic languages like Python is harder and less well supported because of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code.
Recently, methods from statistical natural language processing (NLP) have been used to train code suggestion systems from code usage in large code repositories (Hindle et al., 2012; Allamanis & Sutton, 2013; Tu et al., 2014). To this end, usually an n-gram language model is trained to score possible completions. Neural language models for code suggestion (White et al., 2015; Das & Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, these standard neural language models are limited by the so-called hidden state bottleneck, i.e., all context information has to be stored in a fixed-dimensional internal vector representation. This limitation restricts such models to local phenomena and does not capture very long-range semantic relationships like suggesting calling a function that has been defined many tokens before.
To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic for crawling high-quality code repositories from GitHub. We investigate, for the first time, the use of attention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement
in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-range Python dependencies by selectively attending over the introduction of identifiers as determined by examining the Abstract Syntax Tree. The model is a form of pointer network (Vinyals et al., 2015a), and learns to dynamically choose between syntax-aware pointing for modeling long-range dependencies and free form generation to deal with local phenomena, based on the current context.
Our contributions are threefold: (i) We release a code suggestion corpus of 41M lines of Python code crawled from GitHub, (ii) We introduce a sparse attention mechanism that captures very long-range dependencies for code suggestion of this dynamic programming language efficiently, and (iii) We provide a qualitative analysis demonstrating that this model is indeed able to learn such long-range dependencies.
2 METHODS
We first revisit neural language models, before briefly describing how to extend such a language model with an attention mechanism. Then we introduce a sparse attention mechanism for a pointer network that can exploit the Python abstract syntax tree of the current context for code suggestion.
2.1 NEURAL LANGUAGE MODEL
Code suggestion can be approached by a language model that measures the probability of observing a sequence of tokens in a Python program. For example, for the sequence S = a1, . . . , aN , the joint probability of S factorizes according to
Pθ(S) = Pθ(a1) · ∏_{t=2}^{N} Pθ(at | at−1, . . . , a1)   (1)
where the parameters θ are estimated from a training corpus. Given a sequence of Python tokens, we seek to predict the next M tokens at+1, . . . , at+M that maximize Equation 1
argmax_{at+1, . . . , at+M} Pθ(a1, . . . , at, at+1, . . . , at+M).   (2)
In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber, 1997). This neural language model estimates the probabilities in Equation 1 using the output vector of an LSTM at time step t (denoted ht here) according to
Pθ(at = τ | at−1, . . . , a1) = exp(vτ^T ht + bτ) / Σ_{τ′} exp(vτ′^T ht + bτ′)   (3)
where vτ is a parameter vector associated with token τ in the vocabulary.
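For illustration, the sketch below evaluates Equation 1 in log space given per-step logits vτ^T ht + bτ as in Equation 3; the toy logits are random placeholders.

import numpy as np

def log_softmax(logits):
    shifted = logits - logits.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

def sequence_log_prob(logits, token_ids):
    # Sum of per-step log-probabilities, i.e. the log of Eq. 1.
    lp = log_softmax(np.asarray(logits, dtype=float))
    return float(sum(lp[t, tok] for t, tok in enumerate(token_ids)))

rng = np.random.default_rng(0)
print(sequence_log_prob(rng.normal(size=(3, 5)), [2, 0, 4]))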
Neural language models can, in theory, capture long-term dependencies in token sequences through their internal memory. However, as this internal memory has fixed dimension and can be updated at every time step, such models often only capture local phenomena. In contrast, we are interested in very long-range dependencies like referring to a function identifier introduced many tokens in the past. For example, a function identifier may be introduced at the top of a file and only used near the bottom. In the following, we investigate various external memory architectures for neural code suggestion.
2.2 ATTENTION
A straight-forward approach to capturing long-range dependencies is to use a neural attention mechanism (Bahdanau et al., 2014) on the previous K output vectors of the language model. Attention mechanisms have been successfully applied to sequence-to-sequence tasks such as machine translation (Bahdanau et al., 2014), question-answering (Hermann et al., 2015), syntactic parsing (Vinyals et al., 2015b), as well as dual-sequence modeling like recognizing textual entailment (Rocktäschel et al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previous output vectors. Recently, these mechanisms were applied to language modelling by Cheng et al. (2016) and Tran et al. (2016).
Formally, an attention mechanism with a fixed memory Mt ∈ Rk×K of K vectors mi ∈ Rk for i ∈ [1,K], produces an attention distribution αt ∈ RK and context vector ct ∈ Rk at each time step t according to Equations 4 to 7. Furthermore, WM, Wh ∈ Rk×k and w ∈ Rk are trainable parameters. Finally, note that 1K represents a K-dimensional vector of ones.
Mt = [m1 . . . mK] ∈ Rk×K   (4)
Gt = tanh(WM Mt + 1K^T (Wh ht)) ∈ Rk×K   (5)
αt = softmax(w^T Gt) ∈ R1×K   (6)
ct = Mt αt^T ∈ Rk   (7)
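The following NumPy sketch implements Equations 4 to 7; the broadcast over columns plays the role of the 1K term, and the toy dimensions k = 4, K = 6 are placeholders.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention(M_t, h_t, W_M, W_h, w):
    G_t = np.tanh(W_M @ M_t + (W_h @ h_t)[:, None])   # Eq. 5, shape (k, K)
    alpha_t = softmax(w @ G_t)                         # Eq. 6, shape (K,)
    c_t = M_t @ alpha_t                                # Eq. 7, shape (k,)
    return alpha_t, c_t

k, K = 4, 6
rng = np.random.default_rng(0)
alpha, c = attention(rng.normal(size=(k, K)), rng.normal(size=k),
                     rng.normal(size=(k, k)), rng.normal(size=(k, k)),
                     rng.normal(size=k))
assert np.isclose(alpha.sum(), 1.0) and c.shape == (k,)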
For language modeling, we populate Mt with a fixed window of the previous K LSTM output vectors. To obtain a distribution over the next token we combine the context vector ct of the attention mechanism with the output vector ht of the LSTM using a trainable projection matrix WA ∈ Rk×2k. The resulting final output vector nt ∈ Rk encodes the next-word distribution and is projected to the size of the vocabulary |V|. Subsequently, we apply a softmax to arrive at a probability distribution yt ∈ R|V| over the next token. This process is presented in Equation 9 where WV ∈ R|V|×k and bV ∈ R|V| are trainable parameters.
nt = tanh(WA [ht ; ct]) ∈ Rk   (8)
yt = softmax(WV nt + bV) ∈ R|V|   (9)
The problem of the attention mechanism above is that it quickly becomes computationally expensive for large K. Moreover, attending over many memories can make training hard as a lot of noise is introduced in early stages of optimization where the LSTM outputs (and thus the memory Mt) are more or less random. To alleviate these problems we now turn to pointer networks and a simple heuristic for populating Mt that permits the efficient retrieval of identifiers in a large history of Python code.
2.3 SPARSE POINTER NETWORK
We develop an attention mechanism that provides a filtered view of a large history of Python tokens. At any given time step, the memory consists of context representations of the previous K identifiers introduced in the history. This allows us to model long-range dependencies found in identifier usage. For instance, a class identifier may be declared hundreds of lines of code before it is used. Given a history of Python tokens, we obtain a next-word distribution from a weighted average of the sparse pointer network for identifier reference and a standard neural language model. The weighting of the two is determined by a controller.
Formally, at time-step t, the sparse pointer network operates on a memory Mt ∈ Rk×K of only the K previous identifier representations (e.g. function identifiers, class identifiers and so on). In addition, we maintain a vector mt = [id1, . . . , idK] ∈ NK of symbol ids for these identifier representations (i.e. pointers into the large global vocabulary).
As before, we calculate a context vector ct using the attention mechanism (Equation 7), but on a memory Mt only containing representations of identifiers that were declared in the history. Next, we obtain a pseudo-sparse distribution over the global vocabulary from
st[i] = { αt[j]  if mt[j] = i;  −C  otherwise }   (10)
it = softmax(st) ∈ R|V|   (11)
where −C is a large negative constant (e.g. −1000). In addition, we calculate a next-word distribution from a standard neural language model
yt = softmax(WV ht + bV) ∈ R|V|   (12)
and we use a controller to calculate a distribution λt ∈ R2 over the language model and pointer network for the final weighted next-word distribution y*t via
hλt = [ht ; xt ; ct] ∈ R3k   (13)
λt = softmax(Wλ hλt + bλ) ∈ R2   (14)
y*t = [yt it] λt ∈ R|V|   (15)
Here, xt is the representation of the input token, and Wλ ∈ R2×3k and bλ ∈ R2 are a trainable weight matrix and bias, respectively. This controller is conditioned on the input, output and context representations. This means that, for deciding whether to refer to an identifier or generate from the global vocabulary, the controller has access to information from the encoded next-word distribution ht of the standard neural language model, as well as the attention-weighted identifier representations ct from the current history.
Figure 1 overviews this process. In it, the identifier base_path appears twice, once as an argument to a function and once as a member of a class (denoted by *). Each appearance has a different id in the vocabulary and obtains a different probability from the model. In the example, the model correctly chooses to refer to the member of the class instead of the out-of-scope function argument, although, from a user point-of-view, the suggestion would be the same in both cases.
3 LARGE-SCALE PYTHON CORPUS
Previous work on code suggestion either focused on statically-typed languages (particularly Java) or trained on very small corpora. Thus, we decided to collect a new large-scale corpus of the dynamic programming language Python. According to the programming language popularity website Pypl (Carbonnelle, 2016), Python is the second most popular language after Java. It is also the 3rd most common language in terms of number of repositories on the open-source code repository GitHub, after JavaScript and Java (Zapponi, 2016).
We collected a corpus of 41M lines of Python code from GitHub projects. Ideally, we would like this corpus to only contain high-quality Python code, as our language model learns to suggest code from how users write code. However, it is difficult to automatically assess what constitutes high-quality code. Thus, we resort to the heuristic that popular code projects tend to be of good quality. There are
two metrics on GitHub that we can use for this purpose, namely stars (similar to bookmarks) and forks (copies of a repository that allow users to freely experiment with changes without affecting the original repository). Similar to Allamanis & Sutton (2013) and Allamanis et al. (2014), we select Python projects with more than 100 stars, sort by the number of forks descending, and take the top 1000 projects. We then removed projects that did not compile with Python3, leaving us with 949 projects. We split the corpus on the project level into train, dev, and test. Table 1 presents the corpus statistics.
3.1 NORMALIZATION OF IDENTIFIERS
Unsurprisingly, the long tail of words in the vocabulary consists of rare identifiers. To improve generalization, we normalize identifiers before feeding the resulting token stream to our models. That is, we replace every identifier name with an anonymous identifier indicating the identifier group (class, variable, argument, attribute or function) concatenated with a random number that makes the identifier unique in its scope. Note that we only replace novel identifiers defined within a file. Identifier references to external APIs and libraries are left untouched. Consistent with previous corpus creation for code suggestion (e.g. Khanh Dam et al., 2016; White et al., 2015), we replace numerical constant tokens with $NUM$, remove comments, reformat the code, and replace tokens appearing less than five times with an $OOV$ (out of vocabulary) token.
4 EXPERIMENTS
Although previous work by White et al. (2015) already established that a simple neural language model outperforms an n-gram model for code suggestion, we include a number of n-gram baselines to confirm this observation. Specifically, we use n-gram models for n ∈ {3, 4, 5, 6} with Modified Kneser-Ney smoothing (Kneser & Ney, 1995) from the Kyoto Language Modelling Toolkit (Neubig, 2012).
We train the sparse pointer network using mini-batch SGD with a batch size of 30 and truncated backpropagation through time (Werbos, 1990) with a history of 20 identifier representations. We use
an initial learning rate of 0.7 and decay it by 0.9 after every epoch. As additional baselines, we test a neural language model with LSTM units with and without attention. For the attention language models, we experiment with a fixed-window attention memory of the previous 20 and 50 tokens respectively, and a batch size of 75. We found during testing that the baseline models performed worse with the same batch size as the sparse pointer network of 30. We therefore chose to report the stronger results obtained with a batch size of 75.
All neural language models were developed in TensorFlow (Abadi et al., 2016) and trained using cross-entropy loss. While processing a Python source code file, the last recurrent state of the RNN is fed as the initial state of the subsequent sequence of the same file and reset between files. All models use an input and hidden size of 200, an LSTM forget gate bias of 1 (Jozefowicz et al., 2015), gradient norm clipping of 5 (Pascanu et al., 2013), and randomly initialized parameters in the interval (−0.05, 0.05). As regularizer, we use a dropout of 0.1 on the input representations. Furthermore, we use a sampled softmax (Jean et al., 2015) with a log-uniform sampling distribution and a sample size of 1000.
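A plain-Python sketch of the learning-rate schedule and global gradient-norm clipping described above is shown below; it is framework-agnostic rather than the TensorFlow code the authors used.

import numpy as np

def learning_rate(epoch, initial=0.7, decay=0.9):
    # Initial rate 0.7, decayed by 0.9 after every epoch.
    return initial * decay ** epoch

def clip_by_global_norm(grads, max_norm=5.0):
    # Rescale all gradients so their joint L2 norm is at most max_norm.
    total = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads]

print([round(learning_rate(e), 3) for e in range(3)])   # [0.7, 0.63, 0.567]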
5 RESULTS
We evaluate all models using perplexity (PP), as well as accuracy of the top prediction (Acc) and the top five predictions (Acc@5). The results are summarized in Table 2.
We can confirm that for code suggestion neural models outperform n-gram language models by a large margin. Furthermore, adding attention improves the results substantially (2.3 lower perplexity and 3.4 percentage points increased accuracy). Interestingly, this increase can be attributed to a superior prediction of identifiers, which increased from an accuracy of 2.1% to 21.4%. An LSTM with an attention window of 50 gives us the best accuracy for the top prediction. We achieve further improvements for perplexity and accuracy of the top five predictions by using a sparse pointer network that uses a smaller memory of the past 20 identifier representations.
5.1 QUALITATIVE ANALYSIS
Figures 3a-d show a code suggestion example involving an identifier usage. While the LSTM baseline is uncertain about the next token, we get a sensible prediction by using attention or the sparse pointer network. The sparse pointer network provides more reasonable alternative suggestions beyond the correct top suggestion.
Figures 3e-h show the use-case referring to a class attribute declared 67 tokens in the past. Only the Sparse Pointer Network makes a good suggestion. Furthermore, the attention weights in 3i demonstrate that this model distinguished attributes from other groups of identifiers. We give a full example of a token-by-token suggestion of the Sparse Pointer Network in Figure 4 in the Appendix.
Figure 3: (a) Code snippet for referencing variable. (b) LSTM Model. (c) LSTM w/ Attention 50. (d) Sparse Pointer Network.
6 RELATED WORK
Previous code suggestion work using methods from statistical NLP has mostly focused on n-gram models. Much of this work is inspired by Hindle et al. (2012) who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. Subsequently, Tu et al. (2014) improved upon Hindle et al.’s work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. Tu et al.’s idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper.
While the majority of preceding work trained on small corpora, Allamanis & Sutton (2013) created a corpus of 352M lines of Java code which they analysed with n-gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. White et al. (2015) later demonstrated that neural language models outperform n-gram models for code suggestion. They compared various n-gram models (up to nine grams), including Tu et al.’s cache model, with a basic RNN neural language model. Khanh Dam et al. (2016) compared White et al.’s basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code. Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies.
The combination of lagged attention mechanisms with language modelling is inspired by Cheng et al. (2016) who equipped LSTM cells with a fixed-length memory tape rather than a single memory cell. They achieved promising results on the standard Penn Treebank benchmark corpus (Marcus et al., 1993). Similarly, Tran et al. (2016) added a memory block to LSTMs for language modelling of English, German and Italian and outperformed both n-gram and neural language models. Their memory encompasses representations of all possible words in the vocabulary rather than providing a sparse view as we do. Attention mechanisms were previously applied to the study of source code by Allamanis et al. who used a convolutional neural network combined with an attention mechanism to generate method names from bodies.
An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs) which exploit the formal grammar specifications and well-defined, deterministic parsers available for source code. These were used by Allamanis & Sutton (2014) to extract idiomatic patterns from source code. A weakness of PCFGs is their inability to model context-dependent rules of programming languages such as that variables need to be declared before being used. Maddison & Tarlow (2014) added context-aware variables to their PCFG model in order to capture such rules.
Ling et al. (2016) recently used a pointer network to generate code from natural language descriptions. Our use of a controller for deciding whether to generate from a language model or copy an identifier using a sparse pointer network is inspired by their latent code predictor. However, their inputs (textual descriptions) are short whereas code suggestion requires capturing very long-range dependencies that we addressed by a filtered view on the memory of previous identifier representations.
7 CONCLUSIONS AND FUTURE WORK
In this paper, we investigated neural language models for code suggestion of the dynamically-typed programming language Python. We released a corpus of 41M lines of Python crawled from GitHub and compared n-gram, standard neural language models, and attention. By using attention, we observed an order of magnitude more accurate prediction of identifiers. Furthermore, we proposed a sparse pointer network that can efficiently capture long-range dependencies by only operating on a filtered view of a memory of previous identifier representations. This model achieves the lowest perplexity and best accuracy among the top five predictions. The Python corpus and the code for our models is released at https://github.com/uclmr/pycodesuggest.
The presented methods were only tested for code suggestion within the same Python file. We are interested in scaling the approach to the level of entire code projects and collections thereof, as well as integrating a trained code suggestion model into an existing IDE. Furthermore, we plan to work on code completion, i.e., models that provide a likely continuation of a partial token, using character language models (Graves, 2013).
ACKNOWLEDGMENTS
This work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award. | 1. What is the focus of the paper regarding source code and auto-regressive models?
2. What is the novel aspect of the paper's approach to token types and attention policies?
3. How does the reviewer assess the significance and impact of the proposed approach compared to prior works?
4. What are the strengths and weaknesses of the paper's contributions and methodology? | Review | Review
This paper takes a standard auto-regressive model of source code and augments it with a fixed attention policy that tracks the use of certain token types, like identifiers. Additionally they release a Python open source dataset. As expected this augmentation, the fixed attention policy, improves the perplexity of the model. It seems important to dig a bit deeper into these results and show the contribution of different token types to the achieved perplexity. This is alluded to in the text, but a more thorough comparison would be welcome. The idea of an attention policy that takes advantage of expert knowledge is a nice contribution, but perhaps of limited novelty --- for example the Maddison and Tarlow 2014 paper, which the authors cite, has scoping rules that track previously used identifiers in scope. |
ICLR | Title
Modular Continual Learning in a Unified Visual Environment
Abstract
A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.
INTRODUCTION
In the course of everyday functioning, people are constantly faced with real-world environments in which they are required to shift unpredictably between multiple, sometimes unfamiliar, tasks (Botvinick & Cohen, 2014). They are nonetheless able to flexibly adapt existing decision schemas or build new ones in response to these challenges (Arbib, 1992). How humans support such flexible learning and task switching is largely unknown, both neuroscientifically and algorithmically (Wagner et al., 1998; Cole et al., 2013).
We investigate solving this problem with a neural module approach in which simple, task-specialized decision modules are dynamically allocated on top of a largely-fixed underlying sensory system (Andreas et al., 2015; Hu et al., 2017). The sensory system computes a general-purpose visual representation from which the decision modules read. While this sensory backbone can be large, complex, and learned comparatively slowly with significant amounts of training data, the task modules that deploy information from the base representation must, in contrast, be lightweight, quick to be learned, and easy to switch between. In the case of visually-driven tasks, results from neuroscience and computer vision suggest the role of the fixed general purpose visual representation may be played by the ventral visual stream, modeled as a deep convolutional neural network (Yamins & DiCarlo, 2016; Razavian et al., 2014). However, the algorithmic basis for how to efficiently learn and dynamically deploy visual decision modules remains far from obvious.
In standard supervised learning, it is often assumed that the output space of a problem is prespecified in a manner that just happens to fit the task at hand – e.g. for a classification task, a discrete output with a fixed number of classes might be determined ahead of time, while for a continuous estimation problem, a one-dimensional real-valued target might be chosen instead. This is a very convenient simplification in supervised learning or single-task reinforcement learning contexts, but if one is interested in the learning and deployment of decision structures in a rich environment defining tasks with many different natural output types, this simplification becomes cumbersome.
To go beyond this limitation, we build a unified environment in which many different tasks are naturally embodied. Specifically, we model an agent interacting with a two-dimensional touchscreenlike GUI that we call the TouchStream, in which all tasks (discrete categorization tasks, continuous estimation problems, and many other combinations and variants thereof) can be encoded using a single common and intuitive – albeit large – output space. This choice frees us from having to hand-design or programmatically choose between different output domain spaces, but forces us to confront the core challenge of how a naive agent can quickly and emergently learn the implicit “interfaces” required to solve different tasks.
We then introduce Reward Map Prediction (ReMaP) networks, an algorithm for continual reinforcement learning that is able to discover implicit task-specific interfaces in large action spaces like those of the TouchStream environment. We address two major algorithmic challenges associated with learning ReMaP modules. First, what module architectural motifs allow for efficient task interface learning? We compare several candidate architectures and show that those incorporating certain intuitive design principles (e.g. early visual bottlenecks, low-order polynomial nonlinearities and symmetry-inducing concatenations) significantly outperform more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Second, what system architectures are effective for switching between tasks? We present a meta-controller architecture based on a dynamic neural voting scheme, allowing new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.
In § 1 we formalize the TouchStream environment. In § 2, we introduce the ReMaP algorithm. In § 3, we describe and evaluate comparative performance of multiple ReMaP module architectures on a variety of TouchStream tasks. In § 4, we describe the Dynamic Neural Voting meta-controller, and evaluate its ability to efficiently transfer knowledge between ReMaP modules on task switches.
RELATED WORK
Modern deep convolutional neural networks have had significant impact on computer vision and artificial intelligence (Krizhevsky et al., 2012), as well as in the computational neuroscience of vision (Yamins & DiCarlo (2016)). There is a recent but growing literature on convnet-based neural modules, where they have been used for solving compositional visual reasoning tasks (Andreas et al., 2015; Hu et al., 2017). In this work we apply the idea of modules to solving visual learning challenges in a continual learning context. Existing works rely on choosing between a menu of pre-specified module primitives, using different module types to solve subproblems involving specific input-output datatypes, without addressing how these modules’ forms are to be discovered in the first place. In this paper, we show a single generic module architecture is capable of automatically learning to solve a wide variety of different tasks in a unified action/state space, and a simple controller scheme is able to switch between such modules.
Our results are also closely connected with the literature on lifelong (or continual) learning (Kirkpatrick et al., 2016; Rusu et al., 2016). A part of this literature is concerned with learning to solve new tasks without catastrophically forgetting how to solve old ones (Zenke et al., 2017; Kirkpatrick et al., 2016). The use of modules obviates this problem, but instead shifts the hard question to one of how newly-allocated modules can be learned effectively. The continual learning literature also directly addresses knowledge transfer to newly allocated structures (Chen et al., 2015; Rusu et al., 2016; Fernando et al., 2017), but largely addresses how transfer learning can lead to higher performance, rather than addressing how it can improve learning speed. Aside from reward performance, we focus on issues of speed in learning and task switching, motivated by the remarkably efficient adaptability of humans in new task contexts. Existing work in continual learning also largely does not address which specific architecture types learn tasks efficiently, independent of transfer. By focusing first on identifying architectures that achieve high performance quickly on individual tasks (§ 3), our transfer-learning investigation then naturally focuses more on how to efficiently identify when and how to re-use components of these architectures (§ 4). Most of these works also make explicit a priori assumptions about the structure of the tasks to be encoded into the models (e.g. output type, number of classes), rather than address the more general question of emergence of solutions in an embodied case, as we do.
Meta-reinforcement learning approaches such as Wang et al. (2016); Duan et al. (2016), as well as the schema learning ideas of e.g. Arbib (1992); McClelland (2013) typically seek to address the issue of continual learning by having a complex meta-learner extract correlations between tasks over a long timescale. In our context most of the burden of environment learning is placed on the individual modules, so our meta-controller can thus be comparatively light-weight compared to typical meta-reinforcement approaches. Unlike our case, meta-learning has mostly been limited to small state or action spaces. Some recent work in general reinforcement learning (e.g. Ostrovski et al. (2017); Dulac-Arnold et al. (2015)) has addressed the issue of large action spaces, but has not sought to address multitask transfer learning in these large action spaces.
1 THE TOUCHSTREAM ENVIRONMENT
Agents in a real-world environment are exposed to many different implicit tasks, arising without predefined decision structures, and must learn on the fly what the appropriate decision interfaces are
for each situation. Because we are interested in modeling how agents can do this on-the-fly learning, our task environment should mimic the unconstrained nature of the real world. Here, we describe the TouchStream environment, which attempts to do this in a simplified two-dimensional domain.
Our problem setup consists of two components, an “environment” and an “agent,” interacting over an extended temporal sequence (Fig. 1). At each timestep t, the environment emits an RGB image xt of height H and width W , and a scalar reward rt. Conversely, the agent accepts images and rewards as input and chooses an action at in response. The action space A available to the agent consists of a two-dimensional pixel grid {0, . . . ,H − 1}×{0, . . . ,W − 1} ⊂ Z2, of the same height and width as its input image. The environment is equipped with a policy (unknown to the agent) that on each time step computes image xt and reward rt as a function of the history of agent actions {a0, . . . , at−1}, images {x0, . . . , xt−1} and rewards {r0, . . . , rt−1}. In this work, the agent is a neural network, composed of a visual backbone with fixed weights, together with a meta-controller module whose parameters are learned by interaction with the environment. The agent’s goal is to learn to enact a policy that maximizes its reward obtained over time. Unlike an episodic reinforcement learning context, the TouchStream environment is continuous: throughout the course of learning the agent is never signaled when it should reset to some “initial” internal state. However, unlike the traditional continuous learning context of e.g. Sutton & Barto (1998), a TouchStream may implicitly define many different tasks, each of which is associated with its own characteristic reward schedule. The agent experiences a continual stream of tasks, and any implicit association between reward schedule and state reset must be discovered by the agent.
By framing the action space A of the agent as all possible pixel locations and the state space as any arbitrary image, a very wide range of possible tasks are unified in this single framework, at the cost of requiring the agent's action space to be congruent to its input state space, and thus be quite large. This presents two core efficiency challenges for the agent: on any given task, it must be able to both quickly recognize what the “interface” for the task is, and transfer such knowledge across tasks in a smart way. Both of these goals are complicated by the large size of the agent's state and action spaces.
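A minimal sketch of this interaction loop is given below; the env object and its step method are hypothetical stand-ins for the TouchStream policy, and the random agent is only a placeholder.

import numpy as np

H = W = 224   # matches the 224x224 action space used later in the paper

class RandomAgent:
    # Placeholder agent that touches a uniformly random pixel each step.
    def act(self, image, reward):
        return (np.random.randint(H), np.random.randint(W))

class DummyEnv:
    # Stand-in environment that returns random images and zero reward.
    def step(self, action):
        return np.random.rand(H, W, 3), 0.0

def run_touchstream(env, agent, num_steps=1000):
    # env.step(action) is assumed to return the next RGB image and scalar reward.
    image, reward = env.step(None)
    for t in range(num_steps):
        action = agent.act(image, reward)
        image, reward = env.step(action)

run_touchstream(DummyEnv(), RandomAgent(), num_steps=5)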
Although we work with modern large-scale computer vision-style datasets and tasks in this work, e.g. ImageNet (Deng et al. (2009)) and MS-COCO (Lin et al. (2014)), we are also inspired by visual psychology and neuroscience, which have pioneered techniques for how controlled visual tasks can be embodied in real reinforcement learning paradigms (Horner et al., 2013; Rajalingham et al., 2015). Especially useful are three classes of task paradigms that span a range of the ways discrete and continuous estimation tasks can be formulated – including Stimulus-Response, Match-To-Sample, and Localization tasks (Fig. 2).
Stimulus-Response Tasks: The Stimulus-Response (SR) paradigm is a common approach to physically embodying discrete categorization tasks (Gaffan & Harrison, 1988). For example, in the simple two-way SR discrimination task shown in Fig. 2a, the agent is rewarded if it touches the left half of the screen after being shown an image of a dog, and the right half after being shown a butterfly. SR tasks can be made more difficult by increasing the number of image classes or the complexity of the reward boundary regions. In our SR experiments, we use images and classes from the ImageNet dataset (Deng et al., 2009).
Match-To-Sample Tasks: The Match-to-Sample (MTS) paradigm is another common approach to assessing visual categorization abilities (Murray & Mishkin, 1998). In the MTS task shown in Fig. 2b, trials consist of a sequence of two image frames – the “sample” screen followed by the “match” screen – in which the agent is expected to remember the object category seen on the sample frame, and then select an onscreen “button” (really, a patch of pixels) on the match screen corresponding to the sample screen category. Unlike SR tasks, MTS tasks require some working memory and more localized spatial control. More complex MTS tasks involve more sophisticated relationships between the sample and match screen. In Fig. 2c, using the MS-COCO object detection challenge dataset (Lin et al., 2014), the sample screen shows an isolated template image indicating one of the 80 MS-COCO classes, while the match screen shows a randomly-drawn scene from the dataset containing at least one instance of the sample-image class. The agent is rewarded if its chosen action is located inside the boundary of an instance (e.g. the agent “pokes inside”) of the correct class. This MS-COCO MTS task is a “hybrid” of categorical and continuous elements, meaning that if phrased as a standard
supervised learning problem, both categorical readout (i.e. class identity) and a continous readout (i.e. object location) would be required.
Localization: Fig. 2d shows a two-step continuous localization task in which the agent is supposed to mark out the bounding box of an object by touching opposite corners on two successive timesteps, with reward proportionate to the Intersection over Union (IoU) value of the predicted bounding box relative to the ground truth bounding box, IoU = Area(BGT ∩ B̂) / Area(BGT ∪ B̂). In localization, unlike the SR and
MTS paradigms, the choice made at one timestep constrains the agent's optimal choice on a future timestep (e.g. picking the upper left corner of the bounding box on the first step constrains the lower right opposite corner to be chosen on the second).
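For reference, the IoU reward for this task can be computed as in the short sketch below, with boxes given as (x1, y1, x2, y2) corner coordinates.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the intersection rectangle (zero if disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.143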
Although these tasks can become arbitrarily complex along certain axes, the tasks presented here require only fixed-length memory and future prediction. That is, each task requires only knowledge of the past kb timesteps, and a perfect solution always exists within kf timesteps from any point. The minimal required values of kb and kf are different across the various tasks in this work. However, in the investigations below, we set these to the maximum required values across tasks, i.e. kb = 1 and kf = 2. Thus, the agent is required to learn for itself when it is safe to ignore information from the past and when it is irrelevant to predict past a certain point in the future.
We will begin by considering a restricted case where the environment runs one semantic task indefinitely, showing how different architectures learn to solve such individual tasks with dramatically different levels of efficiency (§ 2-3). We will then expand to considering the case where the environment’s policy consists of a sequence of tasks with unpredictable transitions between tasks, and exhibit a meta-controller that can cope effectively with this expanded domain (§ 4).
2 REWARD MAP PREDICTION
The TouchStream environment necessarily involves working with large action and state spaces. Methods for handling this situation often focus on reducing the effective size of action/state spaces, either via estimating pseudo-counts of state-action pairs, or by clustering actions (Ostrovski et al., 2017; Dulac-Arnold et al., 2015). Here we take another approach, using a neural network to directly approximate the (image-state modulated) mapping between the action space and reward space, allowing learnable regularities in the state-action interaction to implicitly reduce the large spaces into something manageable by simple choice policies. We introduce an off-policy algorithm for efficient multitask reinforcement learning in large action and state spaces: Reward Map Prediction, or ReMaP.
2.1 REMAP NETWORK ALGORITHM
As with any standard reinforcement learning situation, the agent seeks to learn an optimal policy π = p(at | xt) defining the probability density p over actions given image state xt. The ReMaP algorithm is off-policy, in that π is calculated as a simple fixed function of the estimated reward.
A ReMaP network MΘ is a neural network with parameters Θ, whose inputs are a history over previous timesteps of (i) the agent’s own actions, and (ii) an activation encoding of the agent’s state space; and which explicitly approximates the expected reward map across its action space for some number of future timesteps. Mathematically:
MΘ : [Ψt−kb:t, ht−kb:t−1] ↦ [mt^1, mt^2, . . . , mt^kf]
where kb is the number of previous timesteps considered; kf is the length of future horizon to be considered; Ψt−kb:t is the history [ψ(xt−kb), . . . , ψ(xt)] of state space encodings produced by fixed backbone network ψ(·); ht−kb:t−1 is the history [at−kb, . . . , at−1] of previously chosen actions; and each mi ∈ map(A, R) – that is, a map from action space to reward space. The predicted reward maps are constructed by computing the expected reward obtained for a subsample of actions drawn randomly from A:
mt^j : at ↦ E[rt+j | at, ht−kb:t−1, Ψt−kb:t] = ∫_R rt+j p(rt+j | at, ht−kb:t−1, Ψt−kb:t).   (1)
where rt+j is the predicted reward j steps into the future horizon. Having produced kf reward prediction maps, one for each timestep of its future horizon, the agent needs to determine what
it believes will be the single best action over all the expected reward maps [mt^1, mt^2, . . . , mt^kf].
The ReMaP algorithm formulates doing so by normalizing the predictions across each of these kf maps into separate probability distributions, and sampling an action from the distribution which has maximum variance. That is, the agent computes its policy π as follows:
π = VarArgmax_{j=1}^{kf} {Dist[Norm[mt^j]]},   (2)
where
Norm[m] = m − min_{x∈A} m(x)   (3)
is a normalization that removes the minimum of the map,
Dist[m] = f(m) / ∫_A f(m(x))   (4)
ensures it is a probability distribution parameterized by functional family f(·), and VarArgmax is an operator which chooses the input with largest variance.
The sampling procedure described in equation (2) uses two complementary ideas to exploit spatial and temporal structure to efficiently explore a large action space. Since rewards in real physical tasks are spatially correlated, the distribution-based sampler in Equation (4) allows for more effective exploration of potentially informative actions than would the single-point estimate of an apparent optimum (e.g. an ε-greedy policy). Further, in order to reduce uncertainty, the ReMaP algorithm explores timesteps with greatest reward map variance. The VarArgmax function nonlinearly upweights the timeframe with highest variance to exploit the fact that some points in time carry disproportionate relevance for reward outcome, somewhat analogously to how max-pooling operates in convolutional networks. Although any standard action selection strategy can be used in place of the one in (2) (e.g. pseudo-ε-greedy over all kf maps), we have empirically found that this policy is effective at efficiently exploring our large action space.
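The action-selection policy of Equations 2 to 4 can be sketched in NumPy as follows; reward_maps is assumed to hold the kf predicted maps evaluated at a common subsample of candidate actions, and f defaults to the identity as in the text.

import numpy as np

def remap_policy(reward_maps, actions, f=lambda x: x, rng=None):
    rng = rng or np.random.default_rng()
    dists = []
    for m in reward_maps:
        fm = f(m - m.min()) + 1e-12          # Eq. 3 (Norm), then family f
        dists.append(fm / fm.sum())          # Eq. 4 (Dist)
    j = int(np.argmax([d.var() for d in dists]))   # VarArgmax over the kf maps
    idx = rng.choice(len(actions), p=dists[j])     # sample from that map
    return actions[idx]

candidate_actions = [(i, j) for i in range(0, 224, 32) for j in range(0, 224, 32)]
maps = np.random.default_rng(0).random((2, len(candidate_actions)))
print(remap_policy(maps, candidate_actions))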
The parameters $\Theta$ of a ReMaP network are learned by gradient descent on the reward prediction error, $\Theta^* = \mathrm{argmin}_\Theta\, \mathcal{L}\left[m_t(a_t), r_t; \Theta\right]$, with each map $m_t^j$ compared to the true reward $r_{t+j}$. Only the reward prediction in $m_t$ corresponding to the action chosen at timestep $t$ participates in the loss calculation and backpropagation of error signals. A minibatch of maps, rewards, and actions is collected over several consecutive inference passes before performing a parameter update.
The ReMaP algorithm is summarized in Algorithm 1.
Algorithm 1: ReMaP – Reward Map Prediction
  Initialize ReMaP network $M$
  Initialize state and action memory buffers $\Psi_{t-k_b:t}$ and $h_{t-k_b:t-1}$
  for timestep $t = 1, \ldots, T$ do
    Observe $x_t$, encode with state space network $\psi(\cdot)$, and append to state buffer
    Subsample set of potential action choices $a_t$ uniformly from $\mathcal{A}$
    Produce $k_f$ expected reward maps of $a_t$ from eq. (1)
    Select action according to policy $\pi$ as in (2)
    Execute action $a_t$ in environment, store in action buffer, and receive reward $r_t$
    Calculate loss for this and previous $k_f - 1$ timesteps
    if $t \equiv 0 \bmod \text{batch size}$ then
      Perform parameter update
Throughout this work, we take our fixed backbone state space encoder to be the VGG-16 convnet, pretrained on ImageNet (Simonyan & Zisserman, 2014). Because the resolution of the input to this network is 224x224 pixels, our action space A = {0, . . . , 223} × {0, . . . , 223}. By default, the functional family f used in the action selection scheme in Eq. (4) is the identity, although on tasks benefiting from high action precision (e.g. Localization or MS-COCO MTS), it is often optimal to sample a low-temperature Boltzmann distribution with f(x) = e−x/T . Reward prediction errors are calculated using the cross-entropy loss (where logits are smooth approximations to the Heaviside function in analogy to eq. (5)).
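The overall interaction loop of Algorithm 1 can be sketched as follows. This is schematic pseudocode in Python: `env`, `backbone`, `remap_net`, `remap_policy`, and `update_params` are assumed interfaces standing in for the environment, the fixed VGG-16 encoder, the ReMaP module, the policy of eq. (2), and an optimizer step, and the loss handling is simplified to a single-step comparison.

```python
from collections import deque

def run_remap(env, backbone, remap_net, remap_policy, update_params,
              T=100_000, k_b=2, k_f=3, n_action_samples=512, batch_size=32):
    """Continual ReMaP interaction/training loop (sketch of Algorithm 1)."""
    state_buf = deque(maxlen=k_b + 1)   # encodings psi(x_{t-k_b}), ..., psi(x_t)
    action_buf = deque(maxlen=k_b)      # previously chosen actions h_{t-k_b:t-1}
    minibatch = []

    x_t = env.image()                   # current frame; the stream has no episodic resets
    for t in range(1, T + 1):
        state_buf.append(backbone(x_t))                    # encode and append to state buffer
        actions = env.sample_actions(n_action_samples)     # subsample candidate actions from A

        maps = remap_net(state_buf, action_buf, actions)   # k_f expected reward maps (eq. 1)
        a_t = remap_policy(maps, actions)                  # select action via eq. (2)

        x_t, r_t = env.step(a_t)                           # execute action, receive reward
        action_buf.append(a_t)

        # Only the predictions at the chosen action enter the loss; the full algorithm
        # compares map j against the reward observed j steps later.
        minibatch.append((maps, actions, a_t, r_t))
        if t % batch_size == 0:
            update_params(remap_net, minibatch)            # gradient step on prediction error
            minibatch = []
```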
3 EFFICIENT NEURAL MODULES FOR TASK LEARNING
The main question we seek to address in this section is: what specific neural network structure(s) should be used in ReMaP modules? The key considerations are that such modules (i) should be easy to learn, requiring comparatively few training examples to discover optimal parameters Θ∗, and (ii) easy to learn from, meaning that an agent can quickly build a new module by reusing components of old ones.
Intuitive Example: As an intuition-building example, consider the case of a simple binary Stimulus-Response task, as in Fig. 2a (“if you see a dog touch on the right, if a butterfly touch on the left"). One decision module that is a “perfect” reward predictor on this task is expressed analytically as:
$$M[\Psi_t](a_x, a_y) = H\!\left(\mathrm{ReLU}(W\Psi_t)\cdot \mathrm{ReLU}(a_x) + \mathrm{ReLU}(-W\Psi_t)\cdot \mathrm{ReLU}(-a_x)\right) \qquad (5)$$
where $H$ is the Heaviside function, $a_x$ and $a_y$ are the x and y components of the action $a \in \mathcal{A}$ relative to the center of the screen, and $W$ is a $1 \times |\Psi_t|$ matrix expressing the class boundary (bias term omitted for clarity). If $W\Psi_t$ is positive (i.e. the image is of a dog), then $a_x$ must also be positive (i.e. the touch is on the right) to predict positive reward; conversely, if $W\Psi_t$ is negative (i.e. a butterfly), $a_x$ must be negative (i.e. a left touch) to predict reward. If neither of these conditions holds, both terms are equal to zero, so the formula predicts no reward. Since the vertical location of the action does not affect reward, $a_y$ is not involved in reward calculation on this task.
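As a sanity check on eq. (5), the short snippet below evaluates the hand-built predictor on a toy feature vector; the weight vector `w` and the features are made-up stand-ins, and the screen-centered action coordinate is passed directly as `a_x`.

```python
import numpy as np

def heaviside(z):
    return float(z > 0)

def perfect_sr_reward(psi, w, a_x):
    """Eq. (5): hand-built reward predictor for the binary stimulus-response task."""
    relu = lambda v: max(v, 0.0)
    s = float(w @ psi)                      # scalar class score W * Psi_t
    return heaviside(relu(s) * relu(a_x) + relu(-s) * relu(-a_x))

# Toy example: a "dog-like" feature vector (positive class score) is rewarded
# only for touches right of the screen center.
w = np.array([1.0, -0.5])
psi_dog = np.array([2.0, 1.0])              # w @ psi_dog > 0
print(perfect_sr_reward(psi_dog, w, a_x=+50.0))   # 1.0
print(perfect_sr_reward(psi_dog, w, a_x=-50.0))   # 0.0
```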
Equation (5) has three basic ideas embedded in its structure:
• there is an early visual bottleneck, in which the high-dimensional general purpose feature representation Ψt is greatly reduced in dimension (in this case, from the 4096 features of VGG’s FC6 layer, to 1) prior to combination with action space,
• there is a multiplicative interaction between the action vector and (bottlenecked) visual features, and
• there is symmetry, e.g. the first term of the formula is the sign-antisymmetric partner of the second term, reflecting something about the spatial structure of the task.
In the next sections, we show these three principles can be generalized into a parameterized family of networks from which the visual bottleneck (the $W$ parameters) and decision structure (the form of equation (5)) can emerge naturally and efficiently via learning for any given task of interest.
3.1 THE EMS MODULE
In this section we define a generic ReMaP module which is lightweight, encodes all three generic design principles from the “perfect” formula, and uses only a small number of learnable parameters.
Define the concatenated square nonlinearity as
$$\mathrm{Sq} : x \longmapsto x \oplus x^2$$
and the concatenated ReLU nonlinearity (Shang et al. (2016)) as
$$\mathrm{CReLU} : x \longmapsto \mathrm{ReLU}(x) \oplus \mathrm{ReLU}(-x)$$
where $\oplus$ denotes vector concatenation. The CReS nonlinearity is then defined as the composition of CReLU and Sq, e.g.
$$\mathrm{CReS} : x \longmapsto \mathrm{ReLU}(x) \oplus \mathrm{ReLU}(-x) \oplus \mathrm{ReLU}(x)^2 \oplus \mathrm{ReLU}(-x)^2.$$
The CReS nonlinearity introduces multiplicative interactions between its arguments via its Sq component and symmetry via its use of CReLU.
Definition. The $(n_0, n_1, \ldots, n_k)$-Early Bottleneck-Multiplicative-Symmetric (EMS) module is the ReMaP module given by
$$B = \mathrm{CReLU}(W_0\Psi + b_0)$$
$$l_1 = \mathrm{CReS}(W_1(B \oplus a) + b_1), \qquad l_i = \mathrm{CReS}(W_i l_{i-1} + b_i) \;\text{ for } i > 1$$
where $W_i$ and $b_i$ are learnable parameters, $\Psi$ are features from the fixed visual encoding network, and $a$ is the action vector in $\mathcal{A}$.
The EMS structure builds in each of the three principles described above. The $B$ stage represents the early bottleneck, in which visual encoding inputs are reduced to size $n_0$ before being combined with actions; the module then performs $k$ CReS stages, introducing multiplicative symmetric interactions between visual features and actions. With this, the “perfect” module definition for the binary SR task in eq. (5) becomes a special case of a two-layer EMS module. Note that the visual features to be bottlenecked can be from any encoder; in practice, we work with both fully connected and convolutional features of the VGG-16 backbone.
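A forward pass through an $(n_0, n_1, \ldots, n_k)$-EMS module can be sketched in a few lines of NumPy. The snippet below is illustrative only: the weights are random stand-ins for the learnable parameters, and the final linear readout to the $k_f$ reward predictions is omitted.

```python
import numpy as np

def crelu(x):
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])

def cres(x):
    r = crelu(x)
    return np.concatenate([r, r ** 2])      # CReLU followed by the concatenated square

class EMSModule:
    """(n0, n1, ..., nk) EMS module forward pass (sketch; random stand-in weights)."""
    def __init__(self, visual_dim, action_dim, sizes, rng=np.random):
        self.W0 = rng.normal(0.0, 0.01, (sizes[0], visual_dim))
        self.b0 = np.zeros(sizes[0])
        self.Ws, self.bs = [], []
        in_dim = 2 * sizes[0] + action_dim   # CReLU doubles the bottleneck width
        for n in sizes[1:]:
            self.Ws.append(rng.normal(0.0, 0.01, (n, in_dim)))
            self.bs.append(np.zeros(n))
            in_dim = 4 * n                   # CReS quadruples the width of its input

    def forward(self, psi, a):
        B = crelu(self.W0 @ psi + self.b0)   # early visual bottleneck
        h = np.concatenate([B, a])           # combine bottleneck with the action vector
        for W, b in zip(self.Ws, self.bs):
            h = cres(W @ h + b)              # multiplicative, symmetric stages
        return h                             # a linear readout of h gives the reward map values
```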
In the experiments that follow, we compare the EMS module to a wide variety of alternative control motifs, in which the early bottleneck, multiplicative, and symmetric features are ablated. Multiplicative nonlinearity and bottleneck ablations use a spectrum of more standard activation functions, including ReLU, tanh, sigmoid, elu (Clevert et al., 2015), and CReLU forms. In late bottleneck (fully-ablated) architectures – which are, effectively, “standard” multi-layer perceptrons (MLPs) – action vectors are concatenated directly to the output of the visual encoder before being passed through subsequent stages. In all, we test 24 distinct architectures. Detailed information on each can be found in the Supplement.
3.2 EXPERIMENTS
We compared each architecture across 12 variants of visual SR, MTS, and localization tasks, using fixed visual encoding features from layer FC6 of VGG-16. Task variants ranged in complexity from simple (e.g. a binary SR task with ImageNet categories) to more challenging (e.g. a many-way ImageNet MTS task with result buttons appearing in varying positions on each trial). The most complex tasks are two variants of localization, either with a single main salient object placed on a complex background (similar to images used in Yamins & DiCarlo (2016)), or complex scenes from MS-COCO (see Fig. 3b). Details of the tasks used in these experiments can be found in the Supplement. Module weights were initialized using a normal distribution with µ = 0.0, σ = 0.01, and optimized using the ADAM algorithm (Kingma & Ba (2014)) with parameters β1 = 0.9, β2 = 0.999, and ε = 1e−8. Learning rates were optimized on a per-task, per-architecture basis in a cross-validated fashion. For each architecture and task, we ran optimizations from five different initialization seeds to obtain mean and standard error due to initial condition variability. For fully-ablated “late-bottleneck” modules, we measured the performance of modules of three different sizes (small, medium, and large), where the smallest version is equivalent in size to the EMS module, and the medium and large versions are much larger (Table S1).
Emergence of Decision Structures: A key feature of ReMaP modules is that they are able to discover de novo the underlying output domain spaces for a variety of qualitatively distinct tasks (Fig. 3; more examples in Fig. S2). The emergent decision structures are highly interpretable and reflect the true interfaces that the environment implicitly defines. The spatiotemporal patterns of learning are robust across tasks and replicable across initial seedings, and thus might serve as a candidate model of interface use and learning in humans. In general, we observe that the modules typically discover
the underlying “physical structures” needed to operate the task interface before learning the specific decision rules needed to solve the task.
For example, in the case of a discrete MTS categorization task (Fig. 3a), this involves the quick discovery of onscreen “buttons” corresponding to discrete action choices before these buttons are mapped to their semantic meaning. In the case of the MS-COCO MTS task (Fig. 3b), we observe the initial discovery of high-salience object boundaries, followed by category-specific refinement. It is important to note that the visual backbone was trained on a categorization task, quite distinct from the localization task in MS-COCO MTS. Thus, the module had to learn this very different decision structure, as well as the class boundaries of MS-COCO, from scratch during training.
Efficiency of the EMS module: The efficiency of learning was measured by computing the task-averaged, normalized area under the learning curve (TA-N-AUC) for each of the 24 modules tested, across all 12 task variants. Fig. 4a-d shows characteristic learning curves for several tasks, summarized in the table in Fig. 4e. Results for all architectures on all tasks are shown in Supplement Figure S1. We find that the EMS module is the most efficient across tasks (0.997 TA-N-AUC). Moreover, the EMS architecture always achieves the highest final reward level on each task.
Increasing ablations of the EMS structure lead to increasingly poor performance, both in terms of learning efficiency and final performance. Ablating the low-order polynomial interaction (replacing Sq with CReLU) had the largest negative effect on performance (0.818 TA-N-AUC), followed in importance by the symmetric structure (0.944 TA-N-AUC). Large fully-ablated models (no bottleneck, using only ReLU activations) performed significantly worse than the smaller EMS module and the single ablations (0.717 TA-N-AUC), but better than the module with neither symmetry nor multiplicative interactions (0.566 TA-N-AUC). Small fully-ablated modules with the same number of parameters as EMS were by far the least efficient (0.403 TA-N-AUC) and oftentimes achieved much lower final reward. In summary, the main conceptual features by which the special-case architecture in eq. (5) solves the binary SR task are individually helpful, combine usefully, and can be parameterized and efficiently learned for a variety of visual tasks. These properties are critical to achieving effective task learning compared to standard MLP structures.
In a second experiment focusing on localization tasks, we tested an EMS module using convolutional features from the fixed VGG-16 feature encoder, reasoning that localization tasks could benefit from finer spatial feature resolution. We find that using visual features with explicit spatial information
substantially improves task performance and learning efficiency on these tasks (Fig. 5). To our knowledge, our results on MS-COCO are the first demonstrated use of reinforcement learning to achieve instance-level object segmentations. Reward curves (measuring bounding box IoU) in Fig. 5a show little difference between any of the late bottleneck modules at any size. The only models to consistently achieve an IoU above 0.4 are the EMS-like variants, especially with convolutional features. For context, a baseline SVR trained using supervised methods to directly regress bounding boxes using the same VGG features results in an IoU of 0.369.
4 DYNAMIC NEURAL VOTING FOR TASK SWITCHING
So far, we’ve considered the case where the TouchStream consists of only one task. However, agents in real environments are often faced with having to switch between tasks, many of which they may be encountering for the first time. Ideally, such agents would repurpose knowledge from previously learned tasks when it is relevant to a new task.
Formally, we now consider environment policies consisting of sequences of tasks T = {τ1, τ2, ..., τΩ}, each of which may last for an indeterminate period of time. Consider also a set of modulesM, where each module corresponds to a task-specific policy πω (a | x) = p (at | xt, τω). When a new task begins, we cue the agent to allocate a new module MΩ+1 which is added to the set of modulesM. In the learning that follows allocation, the weights in old modules are held fixed while the parameters in the new module MΩ+1 are trained. However, the output of the system is not merely the output of the new module, but instead is a dynamically allocated mixture of pathways through the computation graphs of the old and new modules. This mixture is determined by a meta-controller (Fig. 6). The meta-controller is itself a neural network which learns a dynamic distribution over (parts of) modules to be used in building the composite execution graph. Intuitively, this composite graph is composed of a small number of relevant pathways that mix and match parts of existing modules to solve the new task, potentially in combination with new module components that need to be learned.
4.1 DYNAMIC NEURAL VOTING
We define a meta-controller that assigns weights to each layer in each module in $\mathcal{M}$. Let $p^i_\omega$ be the weight associated with the $i$th layer in module $\omega$. These weights are probabilistic on a per-layer basis, i.e. $p^i_\omega \ge 0$ and $\sum_\omega p^i_\omega = 1$, and can be interpreted as the probability of the controller selecting the $i$th layer $l^i_\omega$ for use in the execution graph, with distribution $\pi^i = \{p^i_\omega\}$. For such an assignment of weights, the composite execution graph defined by the meta-controller is generated by computing the sum of the activations of all the components at layer $i$ weighted by the probabilities $p^i_\omega$. These values are then passed on to the next layer where this process repeats. Mathematically, the composite layer at stage $i$ can be expressed as
$$\tilde{l}^i_{\mathcal{M}} = \sum_\omega p^i_\omega\, M^i_\omega\!\left(\tilde{l}^{i-1}_{\mathcal{M}}\right) = \mathbb{E}_{\pi^i}\!\left[M^i_\omega\!\left(\tilde{l}^{i-1}_{\mathcal{M}}\right)\right]. \qquad (6)$$
where $M^i_\omega(\cdot)$ is the operator that computes the $i$th layer of module $\omega$, and $\tilde{l}^0 := \psi(x_t)$ is the original encoded input state.
The question now is, where do these probabilistic weights come from? The core of our procedure is a dynamic neural voting process in which the controller network learns a Boltzmann distribution over module activations to maximize reward prediction accuracy. This process is performed at each module layer, where the module weightings for a given layer are conditioned on the results of voting at the previous layer. That is,
$$p^i = \mathrm{softmax}\!\left[W^i\!\left(\bigoplus_\omega M^i_\omega\!\left(\tilde{l}^{i-1}_{\mathcal{M}}\right)\right) + b^i\right] \qquad (7)$$
where $p^i = (p^i_0, p^i_1, \ldots, p^i_\Omega)$ are the module weights at layer $i$, $\oplus$ is concatenation, and $W^i \in \mathbb{R}^{(\Omega \cdot L) \times \Omega}$ is a learnable weight matrix of the controller.
This voting procedure operates in an online fashion, such that the controller is continuously learning its meta-policy while the agent is taking actions. As defined, the meta-controller constitutes a fully-differentiable neural network and is learned by gradient descent online.
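The following NumPy sketch summarizes one composite forward pass under layer voting (eqs. (6)-(7)). It is schematic: `modules` is an assumed list of objects exposing a `layer(i, x)` method, all layer-$i$ outputs are assumed to share a common width $L$, and the controller parameters follow the transposed shape convention used here.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def layer_voting_forward(modules, controller_W, controller_b, psi):
    """Composite execution graph with per-layer module voting (sketch of eqs. 6-7).

    modules:      list of Omega modules; modules[w].layer(i, x) computes M^i_w(x).
    controller_W: per-layer matrices of shape (Omega, Omega * L).
    controller_b: per-layer bias vectors of shape (Omega,).
    """
    l_tilde = psi                                        # l~^0 := psi(x_t)
    for i in range(len(controller_W)):
        outs = [m.layer(i, l_tilde) for m in modules]    # each of shape (L,)
        p = softmax(controller_W[i] @ np.concatenate(outs) + controller_b[i])  # eq. (7)
        l_tilde = sum(p_w * out for p_w, out in zip(p, outs))                  # eq. (6)
    return l_tilde
```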
A useful refinement of the above mechanism involves voting across the units of $\mathcal{M}$. Specifically, the meta-controller now assigns probabilistic weights $p^{i,j}_\omega$ to neuron $n^{i,j}_\omega$ (the $j$th unit in layer $i$ of module $\omega$). In contrast to the layer-voting scheme, the dynamically generated execution graph computed by the meta-controller is now built from composite neurons with activations:
$$\tilde{n}^{i,j}_{\mathcal{M}} = \sum_\omega p^{i,j}_\omega\, M^{i,j}_\omega\!\left(\tilde{l}^{i-1}_{\mathcal{M}}\right) = \mathbb{E}_{\pi^{i,j}}\!\left[M^{i,j}_\omega\!\left(\tilde{l}^{i-1}_{\mathcal{M}}\right)\right], \qquad (8)$$
which are concatenated to form the composite layer $\tilde{l}^i_{\mathcal{M}}$. The generalization of equation (7) to the single-unit voting scheme then becomes:
$$p^{i,j} = \mathrm{softmax}\!\left[W^{i,j}\!\left(\bigoplus_\omega M^{i,j}_\omega\!\left(\tilde{l}^{i-1}_{\mathcal{M}}\right)\right) + b^{i,j}\right] \qquad (9)$$
where $p^{i,j} = (p^{i,j}_0, p^{i,j}_1, \ldots, p^{i,j}_\Omega)$ are the unit-level weights across modules, and $W^{i,j} \in \mathbb{R}^{\Omega \times \Omega}$.
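The unit-level variant differs only in that the vote is taken independently for every unit $j$ of the layer. A minimal sketch (shapes are assumptions made for illustration) is:

```python
import numpy as np

def unit_voting_layer(outs, W_units, b_units):
    """Single-unit voting (sketch of eqs. 8-9).

    outs:    list of Omega layer outputs, each a vector of length L.
    W_units: array of shape (L, Omega, Omega); b_units: array of shape (L, Omega).
    """
    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    stacked = np.stack(outs, axis=0)              # (Omega, L)
    composite = np.empty(stacked.shape[1])
    for j in range(stacked.shape[1]):             # vote separately for each unit j
        p = softmax(W_units[j] @ stacked[:, j] + b_units[j])
        composite[j] = p @ stacked[:, j]          # expected activation under the vote
    return composite
```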
Empirically, we find that the initialization schemes of the learnable controller parameters are an important consideration in the design, and that two specialized transformations also contribute slightly to its overall efficiency. For details on these, please refer to the Supplement.
The dynamic neural voting mechanism achieves meta-control through a neural network optimized online via gradient descent while the modules are solving tasks, rather than a genetic algorithm that operates over a longer timescale as in the work of Fernando et al. (2017). Moreover, in contrast to the work of Rusu et al. (2016) the voting mechanism eliminates the need for fully-connected adaptation layers between modules, thus substantially reducing the number of parameters required for transfer.
4.2 SWITCHING EXPERIMENTS
Figure 7: Dynamic Neural Voting quickly corrects for “no-switch” switches. Although a new module is allocated for each task transition, if the new task is identical to the original task, the controller quickly learns to reuse the old module components. Top: post-switching learning curve for the EMS module on a binary stimulus-response task, after being trained on the same task. For clarity, only the Layer Voting method is compared against a baseline module trained from scratch. Bottom: fraction of the original module reused over the course of post-switch learning, calculated by averaging the voting weights of each layer in the original module.
“No-switch” switches: Our first experiments tested how the dynamic neural voting mechanism would respond to “no-switch” switches, i.e. ones in which although a switch cue was given and a new module allocated, the environment policy’s task did not actually change (Fig. 7). We find that in such cases, performance almost instantly approaches pre-switch levels (e.g. there is very little penalty in attempting an unnecessary switch). Moreover, we find that the weighting the controller applies to the new module is low: in other words, the system recognizes that no new module is needed and acts accordingly by concentrating its weights on the existing module. These results show that, while we formally assume that the agent is cued as to when task switches occur, in theory it could implement a completely autonomous monitoring policy, in which the agent simply runs the allocation procedure if a performance “anomaly” occurs (e.g. a sustained drop in reward). If the system determines that the new module was unneeded, it could simply reallocate the new module for a later task switch. In future work, we plan to implement this policy explicitly.
“Real” switches: We next tested how the dynamic voting controller handled switches in which the environment policy substantially changed after the switching cue. Using both the EMS module and (for control) the large fully-ablated module as described in § 3.2, the dynamic neural voting controller was evaluated on 15 switching experiments using multiple variants of SR and MTS tasks. Specifically, these 15 switches cover a variety of distinct (but not mutually exclusive) switching types including:
• addition of new classes to the dataset (switch indexes 2, 7, 11 in the table of Fig. 8)
• replacing the current class set entirely with a new non-overlapping class set (switch ids. 1, 3)
• addition of visual variability to a previously less variable task (switch id. 6)
• addition of visual interface elements, e.g. new buttons (switch id. 8)
• transformation of interface elements, e.g. screen rotation (switch ids. 12, 13, 14, 15)
• transitions between different task paradigms, e.g. SR to MTS tasks and vice-versa (switch ids. 4, 5, 9, 10).
Controller hyperparameters were optimized in a cross-validated fashion (see Appendix G.1), and optimizations for three different initialization seeds were run to obtain mean and standard error.
Figures 8a and b show characteristic post-switch learning curves for the EMS module for both the Layer Voting and Single-Unit Voting methods. Additional switching curves can be found in the Supplement. Cumulative reward gains relative to learning from scratch were quantified by the Relative Gain in AUC: $R_{\mathrm{Gain}} = \frac{\mathrm{AUC}(M^{\mathrm{switch}}) - \mathrm{AUC}(M)}{\mathrm{AUC}(M)}$, where $M$ is the module trained from scratch on the second task, and $M^{\mathrm{switch}}$ is the module transferred from an initial task using the dynamic voting controller.
Base Task → Switch Task:
1. 2-way SR → 2-way SR new classes
2. 2-way SR → 4-way double-binary SR
3. 2-way stationary MTS → 2-way stationary MTS new classes
4. 2-way SR → 2-way stationary MTS
5. 2-way stationary MTS → 2-way SR
6. 2-way stationary MTS → 2-way vert-motion horiz-flip MTS
7. 2-way vert-motion horiz-flip MTS → 4-way 2-shown vert-motion MTS
8. 4-way 2-shown vert-motion MTS → 4-way 4-shown permuted MTS
9. 4-way double-binary SR → 4-way 4-shown stationary MTS
10. 4-way 4-shown stationary MTS → 4-way quadrant SR
11. 2-way SR → 4-way quadrant SR
12. 4-way double-binary SR → 4-way quadrant SR
13. 2-way SR → class reversal
14. 2-way SR → squeezed map
15. 2-way SR → 90° map rotation
Figure 8: Task switching with Dynamic Neural Voting. Post-switching learning curves for the EMS module on the 4-way Quadrant SR task after learning a. a 2-way SR task and b. a 4-way MTS task with 4 match screen class templates. Both the Layer Voting method and Single-Unit Voting method are compared against a baseline module trained on the second task from scratch. Across all twelve task switches, we evaluate the Relative Gain in AUC over baseline ($R_{\mathrm{Gain}}$) using both voting methods for c. the EMS module and d. the large-sized fully-ablated late bottleneck MLP. e. Transfer Gain ($T_{\mathrm{Gain}}$) metrics are compared for both module types for each of the voting mechanisms. Colors are as in c. (EMS module) and d. (fully-ablated module).
We find that the dynamic voting controller allows for rapid positive transfer of both module types across all 15 task switches, and the general Single-Unit voting method is a somewhat better transfer mechanism than the Layer Voting method (Fig. 8c). Both the EMS module and the large fully-ablated module, which was shown to be inefficient on single-task performance in § 3.2, benefit from dynamic neural voting (Fig. 8d).
EMS modules are more “switchable”: To quantify how fast switching gains are realized, we use the Transfer Gain: $T_{\mathrm{Gain}} = \frac{\Delta_{\max}}{t_{\Delta_{\max}}}$, where $t_{\Delta_{\max}} = \mathrm{argmax}_t(\Delta_t)$ is the time at which the maximum reward difference between $M^{\mathrm{switch}}$ and $M$ occurs, and $\Delta_{\max}$ is the reward difference at that time. Qualitatively, a high score on the Transfer Gain metric indicates that a large amount of relative reward improvement has been achieved in a short amount of time (see Figure S7 for a graphical illustration of the relationship between the $R_{\mathrm{Gain}}$ and $T_{\mathrm{Gain}}$ metrics). While both the EMS and large fully-ablated modules have positive Transfer Gain, EMS scores significantly higher on this metric, i.e. is significantly more “switchable” than the large fully-ablated module (Fig. 8e). We hypothesize that this is due to the EMS module being able to achieve high task performance with significantly fewer units than the larger fully-ablated module, making the former easier for the dynamic neural voting controller to operate on.
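For reference, both switching metrics can be computed directly from the two post-switch reward curves; the sketch below (NumPy, with curve arrays assumed to be sampled at the same timesteps) is one way to do so.

```python
import numpy as np

def switching_metrics(reward_switch, reward_scratch):
    """Compute R_Gain and T_Gain from post-switch reward curves (sketch).

    reward_switch:  per-step reward of the transferred module M^switch.
    reward_scratch: per-step reward of the module M trained from scratch.
    """
    reward_switch = np.asarray(reward_switch, dtype=float)
    reward_scratch = np.asarray(reward_scratch, dtype=float)

    auc_switch = np.trapz(reward_switch)
    auc_scratch = np.trapz(reward_scratch)
    r_gain = (auc_switch - auc_scratch) / auc_scratch

    delta = reward_switch - reward_scratch
    t_delta_max = int(np.argmax(delta)) + 1       # time of maximum reward difference
    t_gain = delta.max() / t_delta_max
    return r_gain, t_gain
```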
5 CONCLUSION AND FUTURE DIRECTIONS
In this work, we introduce the TouchStream environment, a continual reinforcement learning framework that unifies a wide variety of spatial decision-making tasks within a single context. We describe a general algorithm (ReMaP) for learning light-weight neural modules that discover implicit task interfaces within this large-action/state-space environment. We show that a particular module architecture (EMS) is able to remain compact while retaining high task performance, and thus is especially suitable for flexible task learning and switching. We also describe a simple but general dynamic task-switching architecture that shows substantial ability to transfer knowledge when modules for new tasks are learned.
A crucial future direction will be to expand insights from the current work into a more complete continual-learning agent. We will need to show that our approach scales to handle dozens or hundreds of task switches in sequence. We will also need to address issues of how the agent determines when to build a new module and how to consolidate modules when appropriate (e.g. when a series of tasks previously understood as separate can be solved by a single smaller structure). It will also be critical to extend our approach to handle visual tasks with longer horizons, such as navigation or game play with extended strategic planning, which will likely require the use of recurrent memory stores as part of the feature encoder.
From an application point of view, we are particularly interested in using techniques like those described here to produce agents that can autonomously discover and operate the interfaces present in many important real-world two-dimensional problem domains, such as on smartphones or the internet (Grossman, 2007). We also expect many of the same spatially-informed techniques that enable our ReMaP/EMS modules to perform well in the 2-D TouchStream environment will also transfer naturally to a three-dimensional context, where autonomous robotics applications (Devin et al., 2016) are very compelling.
SUPPLEMENTARY MATERIAL
A TASK VARIANTS
The EMS module and all ablation controls were evaluated on a suite of 13 stimulus-response, match-to-sample, localization, and MS-COCO MTS variants:
1. 2-way SR - standard binary SR task
2. 4-way double binary SR - four class variant of SR, where each class is assigned either to the right or left half of the action space
3. 4-way quadrant SR - four class variant of SR, where each class is assigned to only a quadrant of the action space
4. 2-way stationary MTS - standard binary MTS task with stereotyped and non-moving match screens
5. 2-way stationary horiz-flip MTS - two class variant MTS task where the match templates’ horizontal placement is randomly chosen, but confined within the same vertical plane
6. 2-way stationary vert-motion MTS - two class variant MTS task where the match templates’ vertical position is randomly chosen, but each class is confined to a specific side
7. 2-way stationary vert-motion horiz-flip MTS - two class variant MTS task where the match templates’ positions are completely random
8. 4-way 2-shown MTS - four class variant MTS task where only two class templates are shown on the match screen (appearing with random horizontal location as well)
9. 4-way 2-shown vert-motion MTS - same as above, but with random vertical motion for the templates
10. 4-way 4-shown stationary MTS - four class variant MTS task where all four class templates are shown on the match screen, but with fixed positions
11. 4-way 4-shown permuted MTS - same as above, but with randomly permuted locations of all match templates
12. Localization - Localization task
13. MS-COCO MTS - 80-way MTS task using the MS-COCO detection challenge dataset, where match screens are randomly sampled scenes from the dataset
B EXPERIMENT DETAILS AND DATASETS
Stimulus-Response Experiment Details: Image categories used are drawn from the Image-Net 2012 ILSVR classification challenge dataset Deng et al. (2009). Four unique object classes are taken from the dataset: Boston Terrier, Monarch Butterfly, Race Car, and Panda Bear. Each class has 1300 unique training instances, and 50 unique validation instances.
Match-To-Sample Experiment Details: Sample screen images are drawn from the same Image-Net class set as the Stimulus-Response tasks. One face-centered, unobstructed class instance is also drawn from the Image-Net classification challenge set and used as a match screen template image for that class. Class template images for the match screen were held fixed at 100x100 pixels. For all variants of the MTS task, we keep a six pixel buffer between the edges of the screen and the match images, and a twelve pixel buffer between the adjacent edges of the match images themselves. Variants without vertical motion have the match images vertically centered on the screen.
Localization Experiment Details: The Localization task uses synthetic images containing a single main salient object placed on a complex background (similar to images used in Yamins & DiCarlo (2016); Yamins et al. (2014)). There are a total of 59 unique classes in this dataset. In contrast to other single-class localization datasets (e.g. Image-Net) which are designed to have one large, face-centered, and centrally-focused object instance and for which a trivial policy of “always poke in image corners” could be learned, this synthetic image set offers larger variance in instance scale, position, and rotation so the agent is forced into learning non-trivial policies requiring larger precision in action selection.
MS-COCO MTS Experiment Details: This task uses the entire MS-COCO detection challenge dataset Lin et al. (2014). On every timestep, a sample screen is chosen from one of the 80 MS-COCO classes. These are constructed to be large, unobstructed, face-centered representations of the class. For the match screen, we sample a random scene from MS-COCO containing any number of objects, but containing at least a single instance of the sample class. The agent is rewarded if its action is located inside any instance of the correct class. Both modules sample actions from a low-temperature Boltzmann policy from eq. (4), which was empirically found to result in more precise reward map prediction.
C MODULES
C.1 UNITS PER LAYER
Table S1 aggregates the number of units per layer for the EMS and ablated modules used when conducting single-task and task-switching experiments. Only fully-connected modules’ layer sizes are shown here. For details on the convolutional bottleneck EMS module, please refer to C.2.
Table S1: Number of units per layer for investigated modules
| Base task | EMS | No symm | No mult | No mult/symm | None (small) | None (medium) | None (large) |
| SR  | 8   | 8   | 8   | 8   | 8   | 128 | 512  |
| MTS | 32  | 32  | 32  | 32  | 32  | 128 | 512  |
| LOC | 128 | 128 | 128 | 128 | 128 | 512 | 1024 |
Figure S1: Exhaustive module performance study of the EMS module and 23 ablation control modules, measured as the Area Under the Curve for all SR, MTS, and LOC task variants. Shown is the AUC normalized to the highest performing module in a task. Results in fig. 4 have further averaged this over the vertical task axis, and report only a salient subset of the ablations.
C.2 THE CONVOLUTIONAL-EMS MODULE
This is a "Convolutional Bottleneck" extension of the EMS module shown in the paper, where skip connections link the conv5 and the FC6 representation of the visual backbone. Here, the "scenelevel" representation stored in the FC6 ReMaP memory buffer is tiled spatially to match the present convolution dimensions (here 14x14), and concatenated onto its channel dimension. A series of 1x1 convolutions plays the role of a shallow visual bottleneck, before the activations are vectorized and concatenated with A as input to the CReS layers of the standard EMS module. The results in the paper are shown for a bottleneck consisting of a single tanh and two CReS convolutions, with 128 units each. The Downstream layers use 128 units each as well.
The motivation for the convolutional bottleneck is that lower-level features are useful for complex spatial tasks such as Localization and Object Detection, and hence may result in a more precise policy. By tiling the entire scene-level representation along the convolution layer’s channel dimension, a form of multiplicative template-matching is possible between objects that must be memorized (e.g. MS-COCO MTS templates) and what is inside the present scene.
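The input construction for the convolutional bottleneck is simple to write down; the sketch below (NumPy, with assumed array shapes for VGG-16 conv5 and FC6 features) shows the tiling and channel-wise concatenation step that precedes the 1x1-convolution bottleneck.

```python
import numpy as np

def conv_ems_bottleneck_input(conv5, fc6):
    """Tile the scene-level FC6 vector over the conv5 grid and concatenate on channels (sketch).

    conv5: (H, W, C) spatial feature map, e.g. (14, 14, 512).
    fc6:   (D,) scene-level encoding, e.g. (4096,).
    """
    H, W, _ = conv5.shape
    tiled = np.broadcast_to(fc6, (H, W, fc6.shape[0]))   # copy fc6 to every spatial position
    return np.concatenate([conv5, tiled], axis=-1)       # fed to the 1x1-conv bottleneck
```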
D EXHAUSTIVE ABLATION STUDY
In all, we investigated 23 distinct ablations on the EMS module, across all twelve task variants outlined in sec. A (Fig. S1). Symmetry ablations replace CReS with the activation $x \mapsto \mathrm{ReLU}(x) \oplus x^2$. Multiplicative ablations are denoted by specifying the nonlinearity used in place of CReS (where this is one of ReLU, tanh, sigmoid, elu (Clevert et al., 2015), or CReLU (Shang et al., 2016)). This additionally includes one partial symmetry ablation (denoted “partial symm”) where only the visual bottleneck is symmetric, and one which ablates the ReLU from the “no symm” module (denoted “no symm/partial-mult”).
Table S2: Module learning rates
Columns, left to right: 2-way SR; 4-way double binary SR; 4-way stationary SR; 2-way stationary MTS; 2-way vert-motion MTS; 2-way horiz-flip MTS; 2-way motion/flip MTS; 4-way 2-shown MTS; 4-way 2-shown vert-motion MTS; 4-way 4-shown stationary MTS; 4-way 4-shown permuted MTS; LOC.
EMS: 10−3 10−3 10−3 5·10−4 5·10−4 5·10−4 5·10−4 5·10−4 5·10−4 5·10−4 5·10−4 10−4
Partial symm: 10−3 10−3 10−3 5·10−4 5·10−4 5·10−4 5·10−4 5·10−4 5·10−4 5·10−4 5·10−4 10−4
No symm: 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 2·10−4 10−4
No symm/partial mult: 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 2·10−4 10−4
No mult/symm ReLU: 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−4
No mult/symm tanh: 10−3 10−3 10−3 10−3 10−4 10−3 10−3 10−3 10−3 10−4 10−4 10−4
No mult/symm sig: 10−3 10−3 10−3 10−3 10−4 10−3 10−3 10−3 10−3 10−4 10−4 10−4
No mult/symm eLU: 10−3 10−3 10−4 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−4
No mult/symm CReLU: 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−4
None ReLU (small): 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−4
None ReLU (medium): 10−3 10−3 10−3 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4
None ReLU (large): 10−3 10−3 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4
None tanh (small): 10−4 10−4 10−4 10−3 10−3 10−3 10−3 10−3 10−3 10−4 10−4 10−4
None tanh (medium): 10−4 10−4 10−4 10−3 10−3 10−3 10−3 10−3 10−3 10−4 10−4 10−4
None tanh (large): 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4
None sig (small): 10−4 10−4 10−4 10−3 10−3 10−3 10−3 10−3 10−3 10−4 10−3 10−4
None sig (medium): 10−4 10−4 10−4 10−3 10−3 10−3 10−3 10−3 10−3 10−4 10−4 10−4
None sig (large): 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4
None eLU (small): 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−3 10−4
None eLU (medium): 10−3 10−3 10−3 10−4 10−4 10−4 10−4 10−4 10−4 10−3 10−4 10−4
None eLU (large): 10−4 10−4 10−3 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4
None CReLU (small): 10−3 10−3 10−3 10−3 10−3 10−4 10−3 10−3 10−3 10−3 10−3 10−4
None CReLU (medium): 10−3 10−3 10−3 10−4 10−4 10−4 10−4 10−4 10−4 10−3 10−4 10−4
None CReLU (large): 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4 10−4
D.1 HYPERPARAMETERS
Learning rates for the ADAM optimizer were chosen on a per-task basis through cross-validation on a grid between [10−4,10−3] for each architecture. Values used in the present study may be seen in Table S2.
E ADDITIONAL MS-COCO REWARD MAPS
Five additional reward map examples for the MS-COCO MTS task are provided in Figure S2. Examples are plotted over the course of learning.
F ADDITIONAL LEARNING CURVES
F.1 SINGLE-TASK ABLATION EXPERIMENTS
Learning trajectories for seven additional tasks are provided in Figure S3. Modules capable of convergence on a task were run until this was achieved, but AUC values for a given task are calculated at the point in time when the majority of models converge.
F.2 DYNAMIC VOTING CONTROLLER AND EMS MODULE TASK-SWITCHING EXPERIMENTS
Additional trajectories for ten unshown switching curves are provided in Figure S4.
G DYNAMIC VOTING CONTROLLER AUGMENTATIONS
G.1 LEARNABLE PARAMETER INITIALIZATIONS
Here we describe the weight initialization scheme that was found to be optimal for use with the dynamic voting controller. For simplicity, consider the layer-voting mechanism, with learnable weight matrices $W^i$ and biases $b^i$.
Figure S2: Examples of the emergence of decision interfaces in MS-COCO MTS. Reward map predictions over the course of training for 5 different object classes.
Figure S3: Additional Single-task performance ablation Learning curves. Seven learning curves shown for task variants not seen in the main text body. Shown are the same ablations as the main text.
Figure S4: Additional switching curves. Ten additional learning curves for task switches not shown in the main text. Shown are both the Single-Unit and Layer Voting implementations of the dynamic voting controller with the EMS module. “EMS” denotes a module trained on the second task from scratch.
The intended biasing scheme is achieved through initializing the elements of these parameters to:
$$W^i_\omega \sim \begin{cases} |\mathcal{N}(\mu_0, 0.001)| & \text{if } i = 1,\ \omega < \Omega \\ |\mathcal{N}(\mu_1, 0.001)| & \text{if } i > 1,\ \omega < \Omega \\ |\mathcal{N}(0.01, 0.001)| & \text{if } \omega = \Omega \end{cases} \qquad (10)$$
$$b^i_\omega = \begin{cases} b_0 & \text{if } i = 1,\ \omega < \Omega \\ b_1 & \text{if } i > 1,\ \omega < \Omega \\ 0.1 & \text{if } \omega = \Omega \end{cases} \qquad (11)$$
This initialization technique was also generalized for use with the single-unit voting mechanism.
For the switching experiments presented in section § 4.2, we sweep the hyperparameters on a narrow band around the default scheme. The ranges for these are: µ0 ∈ [0.01, 0.005] , b0 ∈ [0.1, 0.01] , µ1 ∈ [0.01, 0.02] , and b1 ∈ [0.1, 0.2, 0.5, 1.0].
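A sketch of this initialization for the layer-voting controller (following eqs. (10)-(11), with the controller matrices stored in the transposed (Ω, Ω·L) convention used in the earlier voting sketch, and default hyperparameter values taken from the ranges above) might look like:

```python
import numpy as np

def init_layer_voting_controller(n_layers, n_modules, layer_width,
                                 mu0=0.01, mu1=0.01, b0=0.1, b1=0.1, rng=np.random):
    """Biased initialization of controller weights/biases per eqs. (10)-(11) (sketch).
    Rows for pre-existing modules start larger, favoring reuse; the row for the newly
    allocated module starts near the small default values."""
    Ws, bs = [], []
    for i in range(n_layers):
        mu, bias = (mu0, b0) if i == 0 else (mu1, b1)
        W = np.empty((n_modules, n_modules * layer_width))
        b = np.empty(n_modules)
        for w in range(n_modules):
            if w < n_modules - 1:                            # old modules (omega < Omega)
                W[w] = np.abs(rng.normal(mu, 0.001, W.shape[1]))
                b[w] = bias
            else:                                            # newly allocated module (omega = Omega)
                W[w] = np.abs(rng.normal(0.01, 0.001, W.shape[1]))
                b[w] = 0.1
        Ws.append(W)
        bs.append(b)
    return Ws, bs
```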
G.2 TARGETED TRANSFORMATIONS
Two additional switching mechanisms were added to the controller to augment its ability to switch between tasks which are remappings of the action space or reward policy of a preexisting module.
G.2.1 ACTION TRANSFORMATIONS
We note that efficient modules are those which can effectively produce a minimal representation of the interaction between action space $\mathcal{A}$ and observation $x_t$. If the agent’s optimal action space shifts to $\mathcal{A}'$ while the remainder of the task context remains fixed, the controller should allow for a rapid targeted remapping $\mathcal{A} \mapsto \mathcal{A}'$. Since we formulate the modules as ReMaP networks, and $\mathcal{A}$ is an input feature basis, we can achieve remappings of this form through a fully-connected transformation:
$$a'_\tau = f(W_a a_\tau + b) \qquad (12)$$
where $a_\tau = [h_{t-k_b:t-1}, a_t]$ is the vector of action histories, and $W_a$ and $b$ embed $a_\tau$ into the new action space $\mathcal{A}'$ using only a small number of learnable parameters.
Pseudo Identity-Preserving Transformation: In practice, we initialize the parameters in eq. (12) such that the transformation is pseudo identity-preserving, meaning that the representation learned at this level in the original module is not destroyed prior to transfer. This is done by initializing $W_a$ to be an identity matrix $I_{|a_\tau|}$ with a small amount of Gaussian noise $\sim \mathcal{N}(0.0, \sigma^2)$ added to break symmetry; $b$ is initialized to be a vector of ones of size $|a_\tau|$.
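A minimal sketch of the pseudo identity-preserving action remapping (eq. (12)), with σ and the identity/ones initialization as stated above, could be written as:

```python
import numpy as np

def init_action_transform(action_hist_dim, sigma=0.01, rng=np.random):
    """Pseudo identity-preserving initialization of the action remapping (eq. 12, sketch)."""
    W_a = np.eye(action_hist_dim) + rng.normal(0.0, sigma, (action_hist_dim, action_hist_dim))
    b = np.ones(action_hist_dim)          # initialized to ones, as described in the text
    return W_a, b

def transform_actions(a_tau, W_a, b, f=lambda x: x):
    """Eq. (12): embed the action-history vector a_tau into the new action space A'."""
    return f(W_a @ a_tau + b)
```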
G.2.2 REWARD MAP TRANSFORMATIONS
Each of the $k_f$ maps $m_t(x)$ reflects the agent’s uncertainty in the environment’s reward policy. If the task context remains stationary, but the environment transitions to a new reward schedule $\mathcal{R}'$ that no longer aligns with the module’s policy $\pi$, the controller could adapt to this transition by e.g. containing a mechanism allowing for a targeted transformation of $m(x)$ and hence also of $\pi$.
One complication that arises under ReMaP is that since each task-module learns its optimal action space internally, $m(x)$ are in the basis of $\mathcal{R}$ rather than $\mathcal{A}$. Therefore, transformations on the map distribution must also re-encode $\mathcal{A}$ before mapping to $\mathcal{R}'$. In this work, we investigate a shallow “adapter” neural network that lives on top of the existing module and maps $\mathcal{R} \mapsto \mathcal{R}'$. Its first and second layers are defined by
$$l_1(x) = f\!\left(W_1\left[m(x) \odot g(a_\tau),\, a_\tau\right] + b_1\right) \qquad (13)$$
$$m'(x) \propto W_2 l_1 + b_2 \qquad (14)$$
where $g(a_\tau)$ is a similar transformation on $\mathcal{A}$ as above, $\odot$ denotes elementwise multiplication, $W_1$ is a learnable matrix embedding into a hidden state, and $W_2 \in \mathbb{R}^{|l_1| \times |\mathcal{R}'|}$ is a learnable matrix embedding into $\mathcal{R}'$.
Pseudo Identity-Preserving Transformation: Similar to the transformation on the action space, we modify the reward-map transformation to be pseudo identity-preserving as well. This is done by modifying eq. (13) such that the original maps are concatenated onto the beginning of the transformation input vector:
$$l_1(x) = f\!\left(W_1\left[m(x),\, m(x) \odot g(a_\tau),\, a_\tau\right] + b_1\right) \qquad (15)$$
The intended map-preserving transformation is accomplished via initializing $W_1$ and $W_2$ as:
$$W^{(i,j)} \sim \begin{cases} 1.0 + \mathcal{N}(0.0, \epsilon) & \text{if } i = j,\ i < |\mathcal{R}| \\ \mathcal{N}(0.0, \epsilon) & \text{otherwise} \end{cases} \qquad (16)$$
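The reward-map adapter of eqs. (14)-(15) amounts to a two-layer network on top of the frozen module's map outputs; the sketch below assumes that `m`, `g(a_tau)`, and the weight shapes are mutually compatible, and that `f` and `g` are supplied (e.g. CReS and ReLU, as reported below).

```python
import numpy as np

def reward_map_adapter(m, a_tau, W1, b1, W2, b2, f, g):
    """Identity-preserving reward-map adapter (sketch of eqs. 15 and 14).

    m:      vector of reward-map outputs of the frozen module at one action.
    a_tau:  action-history vector (assumed to match m in length for the elementwise product).
    """
    x = np.concatenate([m, m * g(a_tau), a_tau])   # original maps prepended to the input
    l1 = f(W1 @ x + b1)
    return W2 @ l1 + b2                            # proportional to the transformed map m'(x)
```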
G.3 TARGETED TRANSFORMATION HYPERPARAMETERS
Both of the targeted transformations have several hyperparameters. We conducted a grid search to optimize these in a cross-validated fashion, on a set of test task switches designed to be solved by one of the targeted transformations (Fig. S5). Each was conducted independently of the dynamic voting controller, and independently of the other transformation. Optimal hyperparameters found in these experiments were fixed for use in the integrated dynamic voting controller, and were not further optimized afterwards.
Action Transformation Hyperparameters We conducted three tests using the stimulus-response paradigm: class reversal (in which the left class becomes the right class and vice-versa), a horizontal rotation of the reward boundaries (such that right becomes up and left becomes down), and a “switch” to the original task (intended to test the identity-preserving component).
In this work, we find that a single, non-activated linear transformation ($f$ in (12)) is optimal for this new state-space embedding, using $k_b \cdot 2$ units, and initialized such that the identity-preserving transformation weights have $\sigma = 0.01$. The learning rate for this transformation was found to be optimal at 0.1.
Reward Map Transformation Hyperparameters We conducted two tests using the stimulusresponse paradigm: a “squeezing” task (where there is no longer any reward dispensed on the lower half of the screen), and a “switch” to the original task (intended to test the identity-preserving component).
In this work, we find the optimal activations in eq. (15) to be $f(\cdot) = \mathrm{CReS}$ and $g(\cdot) = \mathrm{ReLU}$, with 4 units in the hidden layer. $\epsilon$ in the weight initialization scheme was found optimal at 0.001, with an initial bias of 0.01. The optimal learning rate for this transformation was found to be 0.01.
G.4 TRANSFORM ABLATION
A study was conducted to determine the relative benefit of the targeted transformations (Fig. S6), where it was determined that the primary contribution of the dynamic neural controller was in fact the voting mechanism (although the transformations did supplement this as well).
G.5 DEPLOYMENT SCHEME OF TASK MODULES
When cued into a task transition, the controller freezes the learnable parameters of the old task-module and deploys a new uninitialized task-module. The controller then initializes the action and reward map transformation networks as described in G.2 on top of the old module. These transformations are also voted on inside the dynamic neural controller at every timestep.
H SWITCHING METRICS
Figure S7 graphically illustrates the metrics used inside the paper to quantify switching performance: RGain and TGain.
Figure S5: Action and reward map transformation switch examples. Three task switching experiments were performed to optimize the hyperparameters of the targeted transformations that augment the dynamic neural voting controller. These switches are also retested in the fully-integrated meta-controller and shown in the original switching result figure. a. Binary stimulus-response class reversals, where the left class becomes the right class, and vice-versa. b. Rotations of the binary stimulus-response reward boundaries. c. A “squeezing” of the binary stimulus-response reward boundaries, where no reward is given on the new task on the bottom half of the screen, regardless of class shown.
Figure S6: Targeted controller transform ablation. Relative AUC Gain for the EMS module over the same switching scenarios as in the paper, but with the targeted transformations ablated.
Figure S7: Illustration of switching performance metrics. We quantify the switching performance of the dynamic neural controller and task-modules by two metrics: “relative gain in AUC” (ratio of green to purple shaded regions), and “transfer” gain (difference of reward at $t_{\Delta_{\max}}$). Relative AUC measures the overall gain relative to scratch, and the transfer gain measures the speed of transfer. Curve shown is the EMS module with Single-Unit voting method evaluated on a switch from a 4-way MTS task with two randomly moving class templates to a 4-way MTS task with four randomly moving templates.

1. What is the main contribution of the paper regarding multitask scenarios?
2. What are the strengths and weaknesses of the proposed framework for learning elemental tasks and task switching?
3. Do you have any concerns regarding the action selection method and its connection to reward maximization?
4. How does the reviewer assess the appropriateness of the integral in equation 1 and the effectiveness of the maximum variance exploration strategy?
5. What are your opinions on the connections made between psychology, neuroscience, and algorithm proposal in the paper?
6. Are there any issues with the clarity or focus of the paper's content?

Review
The authors propose a kind of framework for learning to solve elemental tasks and then learning task switching in a multitask scenario. The individual tasks are inspired by a number of psychological tasks. Specifically, the authors use a pretrained convnet as raw statespace encoding together with previous actions and learn through stochastic optimization to predict future rewards for different actions. These constitute encapsulated modules for individual tasks. The authors test a number of different ways to construct the state representations as inputs to these module and report results from extensive simulations evaluating them. The policy is obtained through a heuristic, selecting actions with highest reward prediction variance across multiple steps of lookahead. Finally, two variants of networks are presented and evaluated, which have the purpose of selecting the appropriate module when a signal is provided to the system that a new task is starting.
I find it particularly difficult to evaluate this manuscript. The presented simulation results are based on the described system, which is very complex and contains several non-standard components and heuristic algorithms.
It would be good to motivate the action selection a bit further. E.g., the authors state that actions are sampled proportionally to the reward predictions, which assumes properties that are not necessarily intuitive, e.g. that rewards a few steps in the future should be equated to action values. It is also not clear under which conditions the proposed sampling of actions and the voting result in reward maximization. No statements are made on this other than that empirically this worked best.
Is the integral in eq. 1 appropriate or should it be a finite sum?
It seems that the narrowing of the spatial distribution relative to an \epsilon-greedy policy would highly depend on the actual reward landscape, no? Is the maximum variance as well suited for exploration as for exploitation and reward maximization?
What I find a bit worrisome is the ease with which the manuscript switches between “inspiration” from psychology and neuroscience to plausibility of proposing algorithms to reinterpreting aimpoints as “salience” and feature extraction as “physical structure”. This necessarily introduces a number of
Overall, I am not sure what I have learned with this paper. Is this about learning psychological tasks? New exploration policies? Arbitration in mixtures of experts? Or is the goal to engineer a network that can solve tasks that cannot be solved otherwise? I am a bit lost.
Minor points:
“can from”
ICLR
Title
Modular Continual Learning in a Unified Visual Environment
Abstract
A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.
INTRODUCTION
In the course of everyday functioning, people are constantly faced with real-world environments in which they are required to shift unpredictably between multiple, sometimes unfamiliar, tasks (Botvinick & Cohen, 2014). They are nonetheless able to flexibly adapt existing decision schemas or build new ones in response to these challenges (Arbib, 1992). How humans support such flexible learning and task switching is largely unknown, both neuroscientifically and algorithmically (Wagner et al., 1998; Cole et al., 2013).
We investigate solving this problem with a neural module approach in which simple, task-specialized decision modules are dynamically allocated on top of a largely-fixed underlying sensory system (Andreas et al., 2015; Hu et al., 2017). The sensory system computes a general-purpose visual representation from which the decision modules read. While this sensory backbone can be large, complex, and learned comparatively slowly with significant amounts of training data, the task modules that deploy information from the base representation must, in contrast, be lightweight, quick to be learned, and easy to switch between. In the case of visually-driven tasks, results from neuroscience and computer vision suggest the role of the fixed general purpose visual representation may be played by the ventral visual stream, modeled as a deep convolutional neural network (Yamins & DiCarlo, 2016; Razavian et al., 2014). However, the algorithmic basis for how to efficiently learn and dynamically deploy visual decision modules remains far from obvious.
In standard supervised learning, it is often assumed that the output space of a problem is prespecified in a manner that just happens to fit the task at hand – e.g. for a classification task, a discrete output with a fixed number of classes might be determined ahead of time, while for a continuous estimation problem, a one-dimensional real-valued target might be chosen instead. This is a very convenient simplification in supervised learning or single-task reinforcement learning contexts, but if one is interested in the learning and deployment of decision structures in a rich environment defining tasks with many different natural output types, this simplification becomes cumbersome.
To go beyond this limitation, we build a unified environment in which many different tasks are naturally embodied. Specifically, we model an agent interacting with a two-dimensional touchscreen-like GUI that we call the TouchStream, in which all tasks (discrete categorization tasks, continuous estimation problems, and many other combinations and variants thereof) can be encoded using a single common and intuitive – albeit large – output space. This choice frees us from having to hand-design or programmatically choose between different output domain spaces, but forces us to confront the core challenge of how a naive agent can quickly and emergently learn the implicit “interfaces” required to solve different tasks.
We then introduce Reward Map Prediction (ReMaP) networks, an algorithm for continual reinforcement learning that is able to discover implicit task-specific interfaces in large action spaces like those of the TouchStream environment. We address two major algorithmic challenges associated with learning ReMaP modules. First, what module architectural motifs allow for efficient task interface learning? We compare several candidate architectures and show that those incorporating certain intuitive design principles (e.g. early visual bottlenecks, low-order polynomial nonlinearities and symmetry-inducing concatenations) significantly outperform more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Second, what system architectures are effective for switching between tasks? We present a meta-controller architecture based on a dynamic neural voting scheme, allowing new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.
In § 1 we formalize the TouchStream environment. In § 2, we introduce the ReMaP algorithm. In § 3, we describe and evaluate comparative performance of multiple ReMaP module architectures on a variety of TouchStream tasks. In § 4, we describe the Dynamic Neural Voting meta-controller, and evaluate its ability to efficiently transfer knowledge between ReMaP modules on task switches.
RELATED WORK
Modern deep convolutional neural networks have had significant impact on computer vision and artificial intelligence (Krizhevsky et al., 2012), as well as in the computational neuroscience of vision (Yamins & DiCarlo (2016)). There is a recent but growing literature on convnet-based neural modules, where they have been used for solving compositional visual reasoning tasks (Andreas et al., 2015; Hu et al., 2017). In this work we apply the idea of modules to solving visual learning challenges in a continual learning context. Existing works rely on choosing between a menu of pre-specified module primitives, using different module types to solve subproblems involving specific input-output datatypes, without addressing how these modules’ forms are to be discovered in the first place. In this paper, we show a single generic module architecture is capable of automatically learning to solve a wide variety of different tasks in a unified action/state space, and a simple controller scheme is able to switch between such modules.
Our results are also closely connected with the literature on lifelong (or continual) learning (Kirkpatrick et al., 2016; Rusu et al., 2016). A part of this literature is concerned with learning to solve new tasks without catastrophically forgetting how to solve old ones (Zenke et al., 2017; Kirkpatrick et al., 2016). The use of modules obviates this problem, but instead shifts the hard question to one of how newly-allocated modules can be learned effectively. The continual learning literature also directly addresses knowledge transfer to newly allocated structures (Chen et al., 2015; Rusu et al., 2016; Fernando et al., 2017), but largely addresses how transfer learning can lead to higher performance, rather than addressing how it can improve learning speed. Aside from reward performance, we focus on issues of speed in learning and task switching, motivated by the remarkably efficient adaptability of humans in new task contexts. Existing work in continual learning also largely does not address which specific architecture types learn tasks efficiently, independent of transfer. By focusing first on identifying architectures that achieve high performance quickly on individual tasks (§ 3), our transfer-learning investigation then naturally focuses more on how to efficiently identify when and how to re-use components of these architectures (§ 4). Most of these works also make explicit a priori assumptions about the structure of the tasks to be encoded into the models (e.g. output type, number of classes), rather than address the more general question of emergence of solutions in an embodied case, as we do.
Meta-reinforcement learning approaches such as Wang et al. (2016); Duan et al. (2016), as well as the schema learning ideas of e.g. Arbib (1992); McClelland (2013) typically seek to address the issue of continual learning by having a complex meta-learner extract correlations between tasks over a long timescale. In our context most of the burden of environment learning is placed on the individual modules, so our meta-controller can thus be comparatively light-weight compared to typical meta-reinforcement approaches. Unlike our case, meta-learning has mostly been limited to small state or action spaces. Some recent work in general reinforcement learning (e.g. Ostrovski et al. (2017); Dulac-Arnold et al. (2015)) has addressed the issue of large action spaces, but has not sought to address multitask transfer learning in these large action spaces.
1 THE TOUCHSTREAM ENVIRONMENT
Agents in a real-world environment are exposed to many different implicit tasks, arising without predefined decision structures, and must learn on the fly what the appropriate decision interfaces are
for each situation. Because we are interested in modeling how agents can do this on-the-fly learning, our task environment should mimic the unconstrained nature of the real world. Here, we describe the TouchStream environment, which attempts to do this in a simplified two-dimensional domain.
Our problem setup consists of two components, an “environment” and an “agent,” interacting over an extended temporal sequence (Fig. 1). At each timestep t, the environment emits an RGB image xt of height H and width W , and a scalar reward rt. Conversely, the agent accepts images and rewards as input and chooses an action at in response. The action space A available to the agent consists of a two-dimensional pixel grid {0, . . . ,H − 1}×{0, . . . ,W − 1} ⊂ Z2, of the same height and width as its input image. The environment is equipped with a policy (unknown to the agent) that on each time step computes image xt and reward rt as a function of the history of agent actions {a0, . . . , at−1}, images {x0, . . . , xt−1} and rewards {r0, . . . , rt−1}. In this work, the agent is a neural network, composed of a visual backbone with fixed weights, together with a meta-controller module whose parameters are learned by interaction with the environment. The agent’s goal is to learn to enact a policy that maximizes its reward obtained over time. Unlike an episodic reinforcement learning context, the TouchStream environment is continuous: throughout the course of learning the agent is never signaled when it should reset to some “initial” internal state. However, unlike the traditional continuous learning context of e.g. Sutton & Barto (1998), a TouchStream may implicitly define many different tasks, each of which is associated with its own characteristic reward schedule. The agent experiences a continual stream of tasks, and any implicit association between reward schedule and state reset must be discovered by the agent.
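To make the interaction protocol concrete, the following minimal sketch (Python/NumPy; the class and function names are our own illustrative assumptions rather than part of any released codebase) shows the shape of the agent-environment loop:

```python
import numpy as np

H, W = 224, 224  # image height/width; the action space is the same pixel grid

class DummyTouchStreamEnv:
    """Illustrative stand-in for the environment side of the TouchStream loop."""
    def step(self, action, history):
        # The (hidden) environment policy maps the interaction history to the
        # next image and a scalar reward; here both are placeholders.
        image = np.zeros((H, W, 3), dtype=np.uint8)
        reward = 0.0
        return image, reward

class RandomAgent:
    """Baseline agent emitting uniformly random pixel touches."""
    def act(self, image, reward):
        return (np.random.randint(H), np.random.randint(W))

def run_touchstream(env, agent, num_steps):
    history = {"images": [], "actions": [], "rewards": []}
    image, reward = env.step(None, history)        # initial frame, no action yet
    for t in range(num_steps):
        action = agent.act(image, reward)          # (row, col) in {0..H-1} x {0..W-1}
        history["images"].append(image)
        history["actions"].append(action)
        image, reward = env.step(action, history)  # continuous stream: no episode resets
        history["rewards"].append(reward)
    return history

run_touchstream(DummyTouchStreamEnv(), RandomAgent(), num_steps=10)
```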
By framing the action space A of the agent as all possible pixel locations and the state space as any arbitrary image, a very wide range of possible tasks are unified in this single framework, at the cost of requiring the agent’s action space to be congruent to its input state space, and thus be quite large. This presents two core efficiency challenges for the agent: on any given task, it must be able to both quickly recognize what the “interface” for the task is, and transfer such knowledge across tasks in a smart way. Both of these goals are complicated by the large size of the agent’s state and action spaces.
Although we work with modern large-scale computer vision-style datasets and tasks in this work, e.g. ImageNet (Deng et al. (2009)) and MS-COCO (Lin et al. (2014)), we are also inspired by visual psychology and neuroscience, which have pioneered techniques for how controlled visual tasks can be embodied in real reinforcement learning paradigms (Horner et al., 2013; Rajalingham et al., 2015). Especially useful are three classes of task paradigms that span a range of the ways discrete and continuous estimation tasks can be formulated – including Stimulus-Response, Match-To-Sample, and Localization tasks (Fig. 2).
Stimulus-Response Tasks: The Stimulus-Response (SR) paradigm is a common approach to physically embodying discrete categorization tasks (Gaffan & Harrison, 1988). For example, in the simple two-way SR discrimination task shown in Fig. 2a, the agent is rewarded if it touches the left half of the screen after being shown an image of a dog, and the right half after being shown a butterfly. SR tasks can be made more difficult by increasing the number of image classes or the complexity of the reward boundary regions. In our SR experiments, we use images and classes from the ImageNet dataset (Deng et al., 2009).
Match-To-Sample Tasks: The Match-to-Sample (MTS) paradigm is another common approach to assessing visual categorization abilities (Murray & Mishkin, 1998). In the MTS task shown in Fig. 2b, trials consist of a sequence of two image frames – the “sample” screen followed by the “match” screen – in which the agent is expected to remember the object category seen on the sample frame, and then select an onscreen “button” (really, a patch of pixels) on the match screen corresponding to the sample screen category. Unlike SR tasks, MTS tasks require some working memory and more localized spatial control. More complex MTS tasks involve more sophisticated relationships between the sample and match screen. In Fig. 2c, using the MS-COCO object detection challenge dataset (Lin et al., 2014), the sample screen shows an isolated template image indicating one of the 80 MS-COCO classes, while the match screen shows a randomly-drawn scene from the dataset containing at least one instance of the sample-image class. The agent is rewarded if its chosen action is located inside the boundary of an instance (e.g. the agent “pokes inside”) of the correct class. This MS-COCO MTS task is a “hybrid” of categorical and continuous elements, meaning that if phrased as a standard
supervised learning problem, both a categorical readout (i.e. class identity) and a continuous readout (i.e. object location) would be required.
Localization: Fig. 2d shows a two-step continuous localization task in which the agent is supposed to mark out the bounding box of an object by touching opposite corners on two successive timesteps, with reward proportional to the Intersection over Union (IoU) value of the predicted bounding box relative to the ground truth bounding box,

$$\mathrm{IoU} = \frac{\mathrm{Area}(B_{GT} \cap \hat{B})}{\mathrm{Area}(B_{GT} \cup \hat{B})}.$$

In localization, unlike the SR and MTS paradigms, the choice made at one timestep constrains the agent’s optimal choice on a future timestep (e.g. picking the upper left corner of the bounding box on the first step constrains the opposite lower right corner to be chosen on the second).
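For reference, the reward computation itself is a standard IoU between two axis-aligned boxes; a small sketch (our own helper, with boxes given as (x0, y0, x1, y1) corner tuples) is:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# In the localization task, the two touches define the predicted box corners,
# and the trial reward is proportional to iou(predicted_box, ground_truth_box).
```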
Although these tasks can become arbitrarily complex along certain axes, the tasks presented here require only fixed-length memory and future prediction. That is, each task requires only knowledge of the past kb timesteps, and a perfect solution always exists within kf timesteps from any point. The minimal required values of kb and kf are different across the various tasks in this work. However, in the investigations below, we set these to the maximum required values across tasks, i.e. kb = 1 and kf = 2. Thus, the agent is required to learn for itself when it is safe to ignore information from the past and when it is irrelevant to predict past a certain point in the future.
We will begin by considering a restricted case where the environment runs one semantic task indefinitely, showing how different architectures learn to solve such individual tasks with dramatically different levels of efficiency (§ 2-3). We will then expand to considering the case where the environment’s policy consists of a sequence of tasks with unpredictable transitions between tasks, and exhibit a meta-controller that can cope effectively with this expanded domain (§ 4).
2 REWARD MAP PREDICTION
The TouchStream environment necessarily involves working with large action and state spaces. Methods for handling this situation often focus on reducing the effective size of action/state spaces, either via estimating pseudo-counts of state-action pairs, or by clustering actions (Ostrovski et al., 2017; Dulac-Arnold et al., 2015). Here we take another approach, using a neural network to directly approximate the (image-state modulated) mapping between the action space and reward space, allowing learnable regularities in the state-action interaction to implicitly reduce the large spaces into something manageable by simple choice policies. We introduce an off-policy algorithm for efficient multitask reinforcement learning in large action and state spaces: Reward Map Prediction, or ReMaP.
2.1 REMAP NETWORK ALGORITHM
As with any standard reinforcement learning situation, the agent seeks to learn an optimal policy $\pi = p(a_t \mid x_t)$ defining the probability density $p$ over actions given the image state $x_t$. The ReMaP algorithm is off-policy, in that $\pi$ is calculated as a simple fixed function of the estimated reward.
A ReMaP network MΘ is a neural network with parameters Θ, whose inputs are a history over previous timesteps of (i) the agent’s own actions, and (ii) an activation encoding of the agent’s state space; and which explicitly approximates the expected reward map across its action space for some number of future timesteps. Mathematically:
$$M_\Theta : [\Psi_{t-k_b:t},\ h_{t-k_b:t-1}] \longmapsto [m^1_t, m^2_t, \ldots, m^{k_f}_t]$$

where $k_b$ is the number of previous timesteps considered; $k_f$ is the length of the future horizon to be considered; $\Psi_{t-k_b:t}$ is the history $[\psi(x_{t-k_b}), \ldots, \psi(x_t)]$ of state space encodings produced by the fixed backbone network $\psi(\cdot)$; $h_{t-k_b:t-1}$ is the history $[a_{t-k_b}, \ldots, a_{t-1}]$ of previously chosen actions; and each $m^i \in \mathrm{map}(\mathcal{A}, \mathbb{R})$ – that is, a map from action space to reward space. The predicted reward maps are constructed by computing the expected reward obtained for a subsample of actions drawn randomly from $\mathcal{A}$:

$$m^j_t : a_t \mapsto \mathbb{E}[r_{t+j} \mid a_t, h_{t-k_b:t-1}, \Psi_{t-k_b:t}] = \int_{\mathbb{R}} r_{t+j}\, p(r_{t+j} \mid a_t, h_{t-k_b:t-1}, \Psi_{t-k_b:t}), \quad (1)$$

where $r_{t+j}$ is the predicted reward $j$ steps into the future horizon. Having produced $k_f$ reward prediction maps, one for each timestep of its future horizon, the agent needs to determine what it believes will be the single best action over all the expected reward maps $[m^1_t, m^2_t, \ldots, m^{k_f}_t]$.
The ReMaP algorithm formulates doing so by normalizing the predictions across each of these kf maps into separate probability distributions, and sampling an action from the distribution which has maximum variance. That is, the agent computes its policy π as follows:
$$\pi = \mathrm{VarArgmax}_{j=1}^{k_f}\big\{\mathrm{Dist}[\mathrm{Norm}[m^j_t]]\big\}, \quad (2)$$

where

$$\mathrm{Norm}[m] = m - \min_{x \in \mathcal{A}} m(x) \quad (3)$$

is a normalization that removes the minimum of the map,

$$\mathrm{Dist}[m] = \frac{f(m)}{\int_{\mathcal{A}} f(m(x))} \quad (4)$$

ensures it is a probability distribution parameterized by functional family $f(\cdot)$, and $\mathrm{VarArgmax}$ is an operator which chooses the input with the largest variance.
The sampling procedure described in equation (2) uses two complementary ideas to exploit spatial and temporal structure to efficiently explore a large action space. Since rewards in real physical tasks are spatially correlated, the distribution-based sampler in equation (4) allows for more effective exploration of potentially informative actions than would the single-point estimate of an apparent optimum (e.g. an ε-greedy policy). Further, in order to reduce uncertainty, the ReMaP algorithm explores timesteps with the greatest reward map variance. The VarArgmax function nonlinearly upweights the timeframe with the highest variance to exploit the fact that some points in time carry disproportionate relevance for the reward outcome, somewhat analogously to how max-pooling operates in convolutional networks. Although any standard action selection strategy can be used in place of the one in (2) (e.g. pseudo ε-greedy over all $k_f$ maps), we have empirically found that this policy is effective at efficiently exploring our large action space.
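For concreteness, a minimal NumPy sketch of this selection rule is given below; it assumes the $k_f$ predicted maps have already been evaluated on a common subsample of candidate actions, and takes $f$ to be the identity (the Boltzmann variant mentioned later in the text could be substituted). The function name and array layout are our own illustrative choices.

```python
import numpy as np

def select_action(reward_maps, candidate_actions, rng=np.random):
    """Sketch of the policy in eqs. (2)-(4), with f taken to be the identity.

    reward_maps: array of shape (k_f, n_candidates), predicted reward for each
        candidate action at each future-horizon offset.
    candidate_actions: array of shape (n_candidates, 2) of (row, col) pixel actions.
    """
    dists = []
    for m in reward_maps:
        shifted = m - m.min()                      # Norm[m], eq. (3)
        total = shifted.sum()
        if total > 0:
            dists.append(shifted / total)          # Dist[m], eq. (4), f = identity
        else:
            dists.append(np.full_like(shifted, 1.0 / len(shifted)))  # flat map: uniform
    dists = np.stack(dists)
    j = int(np.argmax(dists.var(axis=1)))          # VarArgmax over the k_f maps, eq. (2)
    idx = rng.choice(len(candidate_actions), p=dists[j])
    return candidate_actions[idx]
```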
The parameters $\Theta$ of a ReMaP network are learned by gradient descent on the reward prediction error loss, $\Theta^* = \arg\min_\Theta \mathcal{L}[m_t(a_t), r_t; \Theta]$, where each map $m^j_t$ is compared to the true reward $r_{t+j}$. Only the reward prediction in $m_t$ corresponding to the action chosen at timestep $t$ participates in loss calculation and backpropagation of error signals. A minibatch of maps, rewards, and actions is collected over several consecutive inference passes before performing a parameter update.
The ReMaP algorithm is summarized in Algorithm 1.
Algorithm 1: ReMaP – Reward Map Prediction
Initialize ReMaP network M
Initialize state and action memory buffers Ψ_{t-k_b:t} and h_{t-k_b:t-1}
for timestep t = 1, T do
    Observe x_t, encode with state space network ψ(·), and append to state buffer
    Subsample set of potential action choices a_t uniformly from A
    Produce k_f expected reward maps of a_t from eq. (1)
    Select action according to policy π as in eq. (2)
    Execute action a_t in the environment, store it in the action buffer, and receive reward r_t
    Calculate loss for this and the previous k_f − 1 timesteps
    if t ≡ 0 mod batch size then
        Perform parameter update
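To make the loss masking explicit, the following sketch (our own simplified, framework-agnostic rendering in NumPy; a squared-error term stands in for the cross-entropy loss actually used) shows how only the prediction at the chosen action enters the minibatch loss:

```python
import numpy as np

def remap_minibatch_loss(predicted_maps, chosen_indices, true_rewards):
    """predicted_maps: (batch, k_f, n_candidates) reward predictions.
    chosen_indices: (batch,) index of the action actually taken at each step.
    true_rewards: (batch, k_f) rewards observed at the k_f future offsets.
    Returns a scalar loss; only the chosen action's predictions are penalized."""
    batch = np.arange(predicted_maps.shape[0])
    chosen_preds = predicted_maps[batch, :, chosen_indices]   # shape (batch, k_f)
    return np.mean((chosen_preds - true_rewards) ** 2)
```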
Throughout this work, we take our fixed backbone state space encoder to be the VGG-16 convnet, pretrained on ImageNet (Simonyan & Zisserman, 2014). Because the resolution of the input to this network is 224x224 pixels, our action space is $\mathcal{A} = \{0, \ldots, 223\} \times \{0, \ldots, 223\}$. By default, the functional family $f$ used in the action selection scheme in eq. (4) is the identity, although on tasks benefiting from high action precision (e.g. Localization or MS-COCO MTS), it is often optimal to sample a low-temperature Boltzmann distribution with $f(x) = e^{-x/T}$. Reward prediction errors are calculated using the cross-entropy loss (where logits are smooth approximations to the Heaviside function, in analogy to eq. (5)).
3 EFFICIENT NEURAL MODULES FOR TASK LEARNING
The main question we seek to address in this section is: what specific neural network structure(s) should be used in ReMaP modules? The key considerations are that such modules (i) should be easy to learn, requiring comparatively few training examples to discover optimal parameters Θ∗, and (ii) easy to learn from, meaning that an agent can quickly build a new module by reusing components of old ones.
Intuitive Example: As an intuition-building example, consider the case of a simple binary Stimulus-Response task, as in Fig. 2a (“if you see a dog touch on the right, if a butterfly touch on the left”). One decision module that is a “perfect” reward predictor on this task is expressed analytically as:
$$M[\Psi_t](a_x, a_y) = H\big(\mathrm{ReLU}(W\Psi_t)\cdot\mathrm{ReLU}(a_x) + \mathrm{ReLU}(-W\Psi_t)\cdot\mathrm{ReLU}(-a_x)\big) \quad (5)$$

where $H$ is the Heaviside function, $a_x$ and $a_y$ are the $x$ and $y$ components of the action $a \in \mathcal{A}$ relative to the center of the screen, and $W$ is a $1 \times |\Psi_t|$ matrix expressing the class boundary (bias term omitted for clarity). If $W\Psi_t$ is positive (i.e. the image is of a dog), then $a_x$ must also be positive (i.e. the touch is on the right) to predict positive reward; conversely, if $W\Psi_t$ is negative (i.e. a butterfly), $a_x$ must be negative (i.e. a left touch) to predict reward. If neither of these conditions holds, both terms are equal to zero, so the formula predicts no reward. Since the vertical location of the action does not affect reward, $a_y$ is not involved in reward calculation on this task.
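To make this formula concrete, a direct NumPy transcription of eq. (5) (the helper name and argument layout are our own) might read:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def perfect_binary_sr_predictor(W, psi, a_x, a_y):
    """Eq. (5): predicts reward 1 iff the sign of the class readout W @ psi
    matches the sign of the horizontal action coordinate a_x; a_y is unused."""
    z = float(W @ psi)                       # scalar class readout
    score = relu(z) * relu(a_x) + relu(-z) * relu(-a_x)
    return 1.0 if score > 0 else 0.0         # Heaviside step
```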
Equation (5) has three basic ideas embedded in its structure:
• there is an early visual bottleneck, in which the high-dimensional general purpose feature representation Ψt is greatly reduced in dimension (in this case, from the 4096 features of VGG’s FC6 layer, to 1) prior to combination with action space,
• there is a multiplicative interaction between the action vector and (bottlenecked) visual features, and
• there is symmetry, e.g. the first term of the formula is the sign-antisymmetric partner of the second term, reflecting something about the spatial structure of the task.
In the next sections, we show these three principles can be generalized into a parameterized family of networks from which the visual bottleneck (the W parameters) and decision structure (the form of equation (5)) can emerge naturally and efficiently via learning, for any given task of interest.
3.1 THE EMS MODULE
In this section we define a generic ReMaP module which is lightweight, encodes all three generic design principles from the “perfect” formula, and uses only a small number of learnable parameters.
Define the concatenated square nonlinearity as
$$\mathrm{Sq} : x \longmapsto x \oplus x^2$$
and the concatenated ReLU nonlinearity (Shang et al. (2016)) as
$$\mathrm{CReLU} : x \longmapsto \mathrm{ReLU}(x) \oplus \mathrm{ReLU}(-x)$$
where ⊕ denotes vector concatenation. The CReS nonlinearity is then defined as the composition of CReLU and Sq, e.g.
$$\mathrm{CReS} : x \longmapsto \mathrm{ReLU}(x) \oplus \mathrm{ReLU}(-x) \oplus \mathrm{ReLU}^2(x) \oplus \mathrm{ReLU}^2(-x).$$

The CReS nonlinearity introduces multiplicative interactions between its arguments via its Sq component and symmetry via its use of CReLU.

Definition. The $(n_0, n_1, \ldots, n_k)$-Early Bottleneck-Multiplicative-Symmetric (EMS) module is the ReMaP module given by

$$B = \mathrm{CReLU}(W_0\Psi + b_0)$$
$$l_1 = \mathrm{CReS}(W_1(B \oplus a) + b_1), \qquad l_i = \mathrm{CReS}(W_i l_{i-1} + b_i) \ \text{for } i > 1,$$

where $W_i$ and $b_i$ are learnable parameters, $\Psi$ are features from the fixed visual encoding network, and $a$ is the action vector in $\mathcal{A}$.
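As an illustration, a forward-pass sketch of a small fully-connected EMS module in NumPy (weights passed in explicitly; the final linear reward readout is omitted, and all names are our own) could look like:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def crelu(x):
    return np.concatenate([relu(x), relu(-x)], axis=-1)

def cres(x):
    r = crelu(x)
    return np.concatenate([r, r ** 2], axis=-1)   # ReLU(x), ReLU(-x) and their squares

def ems_forward(psi, a, params):
    """params: list [(W0, b0), (W1, b1), ..., (Wk, bk)] of weight/bias pairs."""
    (W0, b0), rest = params[0], params[1:]
    bottleneck = crelu(W0 @ psi + b0)              # early visual bottleneck, size n0
    h = cres(rest[0][0] @ np.concatenate([bottleneck, a]) + rest[0][1])
    for W, b in rest[1:]:
        h = cres(W @ h + b)
    return h                                       # fed to a final reward readout (not shown)
```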
The EMS structure builds in each of the three principles described above. The B stage represents the early bottleneck in which visual encoding inputs are bottlenecked to size n0 before being combined with actions, and then performs k CReS stages, introducing multiplicative symmetric interactions between visual features and actions. From this, the “perfect” module definition for the binary SR task in eq. (5) then becomes a special case of a two-layer EMS module. Note that the visual features to be bottlenecked can be from any encoder; in practice, we work with both fully connected and convolutional features of the VGG-16 backbone.
In the experiments that follow, we compare the EMS module to a wide variety of alternative control motifs, in which the early bottleneck, multiplicative, and symmetric features are ablated. Multiplicative nonlinearity and bottleneck ablations use a spectrum of more standard activation functions, including ReLU, tanh, sigmoid, elu (Clevert et al., 2015), and CReLU forms. In late bottleneck (fully-ablated) architectures – which are, effectively, “standard” multi-layer perceptrons (MLPs) – action vectors are concatenated directly to the output of the visual encoder before being passed through subsequent stages. In all, we test 24 distinct architectures. Detailed information on each can be found in the Supplement.
3.2 EXPERIMENTS
We compared each architecture across 12 variants of visual SR, MTS, and localization tasks, using fixed visual encoding features from layer FC6 of VGG-16. Task variants ranged in complexity from simple (e.g. a binary SR task with ImageNet categories) to more challenging (e.g. a many-way ImageNet MTS task with result buttons appearing in varying positions on each trial). The most complex tasks are two variants of localization, either with a single main salient object placed on a complex background (similar to images used in Yamins & DiCarlo (2016)), or complex scenes from MS-COCO (see Fig. 3b). Details of the tasks used in these experiments can be found in the Supplement. Module weights were initialized using a normal distribution with µ = 0.0, σ = 0.01, and optimized using the ADAM algorithm (Kingma & Ba (2014)) with parameters β1 = 0.9, β2 = 0.999 and ε = 1e−8. Learning rates were optimized on a per-task, per-architecture basis in a cross-validated fashion. For each architecture and task, we ran optimizations from five different initialization seeds to obtain mean and standard error due to initial condition variability. For fully-ablated “late-bottleneck” modules, we measured the performance of modules of three different sizes (small, medium, and large), where the smallest version is equivalent in size to the EMS module, and the medium and large versions are much larger (Table S1).
Emergence of Decision Structures: A key feature of ReMaP modules is that they are able to discover de novo the underlying output domain spaces for a variety of qualitatively distinct tasks (Fig. 3; more examples in Fig. S2). The emergent decision structures are highly interpretable and reflect the true interfaces that the environment implicitly defines. The spatiotemporal patterns of learning are robust across tasks and replicable across initial seedings, and thus might serve as a candidate model of interface use and learning in humans. In general, we observe that the modules typically discover
the underlying “physical structures” needed to operate the task interface before learning the specific decision rules needed to solve the task.
For example, in the case of a discrete MTS categorization task (Fig. 3a), this involves the quick discovery of onscreen “buttons” corresponding to discrete action choices before these buttons are mapped to their semantic meaning. In the case of the MS-COCO MTS task (Fig. 3b), we observe the initial discovery of high-salience object boundaries, followed by category-specific refinement. It is important to note that the visual backbone was trained on a categorization task, quite distinct from the localization task in MS-COCO MTS. Thus, the module had to learn this very different decision structure, as well as the class boundaries of MS-COCO, from scratch during training.
Efficiency of the EMS module: The efficiency of learning was measured by computing the task-averaged, normalized area under the learning curve (TA-N-AUC) for each of the 24 modules tested, across all 12 task variants. Fig. 4a-d shows characteristic learning curves for several tasks, summarized in the table in Fig. 4e. Results for all architectures on all tasks are shown in Supplement Figure S1. We find that the EMS module is the most efficient across tasks (0.997 TA-N-AUC). Moreover, the EMS architecture always achieves the highest final reward level on each task.
Increasing ablations of the EMS structure lead to increasingly poor performance, both in terms of learning efficiency and final performance. Ablating the low-order polynomial interaction (replacing Sq with CReLU) had the largest negative effect on performance (0.818 TA-N-AUC), followed in importance by the symmetric structure (0.944 TA-N-AUC). Large fully-ablated models (no bottleneck, using only ReLU activations) performed significantly worse than the smaller EMS module and the single ablations (0.717 TA-N-AUC), but better than the module with neither symmetry nor multiplicative interactions (0.566 TA-N-AUC). Small fully-ablated modules with the same number of parameters as EMS were by far the least efficient (0.403 TA-N-AUC) and oftentimes achieved much lower final reward. In summary, the main conceptual features by which the special-case architecture in eq. (5) solves the binary SR task are individually helpful, combine usefully, and can be parameterized and efficiently learned for a variety of visual tasks. These properties are critical to achieving effective task learning compared to standard MLP structures.
In a second experiment focusing on localization tasks, we tested an EMS module using convolutional features from the fixed VGG-16 feature encoder, reasoning that localization tasks could benefit from finer spatial feature resolution. We find that using visual features with explicit spatial information
substantially improves task performance and learning efficiency on these tasks (Fig. 5). To our knowledge, our results on MS-COCO are the first demonstrated use of reinforcement learning to achieve instance-level object segmentations. Reward curves (measuring bounding box IoU) in Fig. 5a show little difference between any of the late bottleneck modules at any size. The only models to consistently achieve an IoU above 0.4 are the EMS-like variants, especially with convolutional features. For context, a baseline SVR trained using supervised methods to directly regress bounding boxes using the same VGG features results in an IoU of 0.369.
4 DYNAMIC NEURAL VOTING FOR TASK SWITCHING
So far, we’ve considered the case where the TouchStream consists of only one task. However, agents in real environments are often faced with having to switch between tasks, many of which they may be encountering for the first time. Ideally, such agents would repurpose knowledge from previously learned tasks when it is relevant to a new task.
Formally, we now consider environment policies consisting of sequences of tasks T = {τ1, τ2, ..., τΩ}, each of which may last for an indeterminate period of time. Consider also a set of modules M, where each module corresponds to a task-specific policy πω(a | x) = p(at | xt, τω). When a new task begins, we cue the agent to allocate a new module MΩ+1, which is added to the set of modules M. In the learning that follows allocation, the weights in old modules are held fixed while the parameters in the new module MΩ+1 are trained. However, the output of the system is not merely the output of the new module, but instead is a dynamically allocated mixture of pathways through the computation graphs of the old and new modules. This mixture is determined by a meta-controller (Fig. 6). The meta-controller is itself a neural network which learns a dynamic distribution over (parts of) modules to be used in building the composite execution graph. Intuitively, this composite graph is composed of a small number of relevant pathways that mix and match parts of existing modules to solve the new task, potentially in combination with new module components that need to be learned.
4.1 DYNAMIC NEURAL VOTING
We define a meta-controller that assigns weights to each layer in each module in $\mathcal{M}$. Let $p^i_\omega$ be the weight associated with the $i$th layer in module $\omega$. These weights are probabilistic on a per-layer basis, i.e. $p^i_\omega \geq 0$ and $\sum_\omega p^i_\omega = 1$, and can be interpreted as the probability of the controller selecting the $i$th layer $l^i_\omega$ for use in the execution graph, with distribution $\pi^i = \{p^i_\omega\}$. For such an assignment of weights, the composite execution graph defined by the meta-controller is generated by computing the sum of the activations of all the components at layer $i$ weighted by the probabilities $p^i_\omega$. These values are then passed on to the next layer, where this process repeats. Mathematically, the composite layer at stage $i$ can be expressed as
$$\tilde{l}^{\,i}_{\mathcal{M}} = \sum_\omega p^i_\omega\, M^i_\omega(\tilde{l}^{\,i-1}_{\mathcal{M}}) = \mathbb{E}_{\pi^i}\big[M^i_\omega(\tilde{l}^{\,i-1}_{\mathcal{M}})\big]. \quad (6)$$
where $M^i_\omega(\cdot)$ is the operator that computes the $i$th layer of module $\omega$, and $\tilde{l}^{\,0} := \psi(x_t)$ is the original encoded input state.
The question now is, where do these probabilistic weights come from? The core of our procedure is a dynamic neural voting process in which the controller network learns a Boltzmann distribution over module activations to maximize reward prediction accuracy. This process is performed at each module layer, where the module weightings for a given layer are conditioned on the results of voting at the previous layer. That is,
$$p^i = \mathrm{softmax}\Big[W^i\Big(\bigoplus_\omega M^i_\omega(\tilde{l}^{\,i-1}_{\mathcal{M}})\Big) + b^i\Big] \quad (7)$$

where $p^i = (p^i_0, p^i_1, \ldots, p^i_\Omega)$ are the module weights at layer $i$, $\oplus$ is concatenation, and $W^i \in \mathbb{R}^{(\Omega\cdot L)\times\Omega}$ is a learnable weight matrix of the controller.
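A compact sketch of this composite forward pass (NumPy; each module is represented abstractly as an object with a per-layer operator, and all names are our own illustrative assumptions) is:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def composite_forward(modules, controller, psi):
    """Sketch of the layer-voting composite graph of eqs. (6)-(7).

    modules: list of module objects, each exposing .layer(i, x) and .num_layers.
    controller: object with per-layer weights W[i] and biases b[i]; W[i] maps the
        concatenated candidate activations to one vote logit per module.
    """
    x = psi                                             # l~^0 = psi(x_t)
    for i in range(modules[0].num_layers):
        acts = [m.layer(i, x) for m in modules]         # candidate activations M_w^i(l~^{i-1})
        votes = softmax(controller.W[i] @ np.concatenate(acts) + controller.b[i])  # eq. (7)
        x = sum(p * a for p, a in zip(votes, acts))     # eq. (6): probability-weighted mixture
    return x
```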
This voting procedure operates in an online fashion, such that the controller is continuously learning its meta-policy while the agent is taking actions. As defined, the meta-controller constitutes a fully-differentiable neural network and is learned by gradient descent online.
A useful refinement of the above mechanism involves voting across the units of $\mathcal{M}$. Specifically, the meta-controller now assigns probabilistic weights $p^{i,j}_\omega$ to neuron $n^{i,j}_\omega$ (the $j$th unit in layer $i$ of module $\omega$). In contrast to the layer-voting scheme, the dynamically generated execution graph computed by the meta-controller is now built from composite neurons with activations

$$\tilde{n}^{i,j}_{\mathcal{M}} = \sum_\omega p^{i,j}_\omega\, M^{i,j}_\omega(\tilde{l}^{\,i-1}_{\mathcal{M}}) = \mathbb{E}_{\pi^{i,j}}\big[M^{i,j}_\omega(\tilde{l}^{\,i-1}_{\mathcal{M}})\big], \quad (8)$$

which are concatenated to form the composite layer $\tilde{l}^{\,i}_{\mathcal{M}}$. The generalization of equation (7) to the single-unit voting scheme then becomes

$$p^{i,j} = \mathrm{softmax}\Big[W^{i,j}\Big(\bigoplus_\omega M^{i,j}_\omega(\tilde{l}^{\,i-1}_{\mathcal{M}})\Big) + b^{i,j}\Big] \quad (9)$$

where $p^{i,j} = (p^{i,j}_0, p^{i,j}_1, \ldots, p^{i,j}_\Omega)$ are the unit-level weights across modules, and $W^{i,j} \in \mathbb{R}^{\Omega\times\Omega}$.
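Relative to the layer-voting sketch above, only the mixing step changes; a self-contained rendering of the per-unit vote (again NumPy, with illustrative names and shapes) is:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def composite_units(acts, W_unit, b_unit):
    """Single-unit voting, eqs. (8)-(9).

    acts: list of Omega per-module activations at layer i, each of shape (L,).
    W_unit: (L, Omega, Omega) controller weights; b_unit: (L, Omega) biases.
    Returns the composite layer of L mixed units."""
    stacked = np.stack(acts, axis=-1)                            # shape (L, Omega)
    out = np.empty(stacked.shape[0])
    for j in range(stacked.shape[0]):
        votes = softmax(W_unit[j] @ stacked[j] + b_unit[j])      # eq. (9)
        out[j] = votes @ stacked[j]                              # eq. (8)
    return out
```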
Empirically, we find that the initialization schemes of the learnable controller parameters are an important consideration in the design, and that two specialized transformations also contribute slightly to its overall efficiency. For details on these, please refer to the Supplement.
The dynamic neural voting mechanism achieves meta-control through a neural network optimized online via gradient descent while the modules are solving tasks, rather than a genetic algorithm that operates over a longer timescale as in the work of Fernando et al. (2017). Moreover, in contrast to the work of Rusu et al. (2016) the voting mechanism eliminates the need for fully-connected adaptation layers between modules, thus substantially reducing the number of parameters required for transfer.
4.2 SWITCHING EXPERIMENTS
Figure 7: Dynamic Neural Voting quickly corrects for “no-switch” switches. Although a new module is allocated for each task transition, if the new task is identical to the original task, the controller quickly learns to reuse the old module components. Top: post-switching learning curve for the EMS module on a binary stimulus-response task, after being trained on the same task. For clarity, only the Layer Voting method is compared against a baseline module trained from scratch. Bottom: fraction of the original module reused over the course of post-switch learning, calculated by averaging the voting weights of each layer in the original module.
“No-switch” switches: Our first experiments tested how the dynamic neural voting mechanism would respond to “no-switch” switches, i.e. ones in which, although a switch cue was given and a new module allocated, the environment policy’s task did not actually change (Fig. 7). We find that in such cases, performance almost instantly approaches pre-switch levels (e.g. there is very little penalty in attempting an unnecessary switch). Moreover, we find that the weighting the controller applies to the new module is low: in other words, the system recognizes that no new module is needed and acts accordingly by concentrating its weights on the existing module. These results show that, while we formally assume that the agent is cued as to when task switches occur, in theory it could implement a completely autonomous monitoring policy, in which the agent simply runs the allocation procedure if a performance “anomaly” occurs (e.g. a sustained drop in reward). If the system determines that the new module was unneeded, it could simply reallocate the new module for a later task switch. In future work, we plan to implement this policy explicitly.
“Real” switches: We next tested how the dynamic voting controller handled switches in which the environment policy substantially changed after the switching cue. Using both the EMS module and (for control) the large fully-ablated module as described in § 3.2, the dynamic neural voting controller was evaluated on 15 switching experiments using multiple variants of SR and MTS tasks. Specifically, these 15 switches cover a variety of distinct (but not mutually exclusive) switching types including:
• addition of new classes to the dataset (switch indexes 2, 7, 11 in the table of Fig. 8)
• replacing the current class set entirely with a new non-overlapping class set (switch ids. 1, 3)
• addition of visual variability to a previously less variable task (switch id. 6)
• addition of visual interface elements, e.g. new buttons (switch id. 8)
• transformation of interface elements, e.g. screen rotation (switch ids. 12, 13, 14, 15)
• transitions between different task paradigms, e.g. SR to MTS tasks and vice-versa (switch ids. 4, 5, 9, 10).
Controller hyperparameters were optimized in a cross-validated fashion (see Appendix G.1), and optimizations for three different initialization seeds were run to obtain mean and standard error.
Figures 8a and b show characteristic post-switch learning curves for the EMS module for both the Layer Voting and Single-Unit Voting methods. Additional switching curves can be found in the Supplement. Cumulative reward gains relative to learning from scratch were quantified by the Relative Gain in AUC:

$$R_{\mathrm{Gain}} = \frac{\mathrm{AUC}(M^{\mathrm{switch}}) - \mathrm{AUC}(M)}{\mathrm{AUC}(M)},$$

where $M$ is the module trained from scratch on the second task, and $M^{\mathrm{switch}}$ is the module transferred from an initial task using the dynamic voting controller.
Base Task → Switch Task
1. 2-way SR → 2-way SR new classes
2. 2-way SR → 4-way double-binary SR
3. 2-way stationary MTS → 2-way stationary MTS new classes
4. 2-way SR → 2-way stationary MTS
5. 2-way stationary MTS → 2-way SR
6. 2-way stationary MTS → 2-way vert-motion horiz-flip MTS
7. 2-way vert-motion horiz-flip MTS → 4-way 2-shown vert-motion MTS
8. 4-way 2-shown vert-motion MTS → 4-way 4-shown permuted MTS
9. 4-way double-binary SR → 4-way 4-shown stationary MTS
10. 4-way 4-shown stationary MTS → 4-way quadrant SR
11. 2-way SR → 4-way quadrant SR
12. 4-way double-binary SR → 4-way quadrant SR
13. 2-way SR → class reversal
14. 2-way SR → squeezed map
15. 2-way SR → 90° map rotation
Figure 8: Task switching with Dynamic Neural Voting. Post-switching learning curves for the EMS module on the 4-way Quadrant SR task after learning a. a 2-way SR task and b. a 4-way MTS task with 4 match screen class templates. Both the Layer Voting method and the Single-Unit Voting method are compared against a baseline module trained on the second task from scratch. Across all twelve task switches, we evaluate the Relative Gain in AUC over baseline (RGain) using both voting methods for c. the EMS module and d. the large-sized fully-ablated late bottleneck MLP. e. Transfer Gain (TGain) metrics are compared for both module types for each of the voting mechanisms. Colors are as in c. (EMS module) and d. (fully-ablated module).
We find that the dynamic voting controller allows for rapid positive transfer of both module types across all 15 task switches, and the general Single-Unit voting method is a somewhat better transfer mechanism than the Layer Voting method (Fig. 8c). Both the EMS module and the large fully-ablated module, which was shown to be inefficient on single-task performance in § 3.2, benefit from dynamic neural voting (Fig. 8d).
EMS modules are more “switchable”: To quantify how fast switching gains are realized, we use the Transfer Gain: $T_{\mathrm{Gain}} = \frac{\Delta_{\max}}{T_{\Delta_{\max}}}$, where $T_{\Delta_{\max}} = \arg\max_t(\Delta_t)$ is the time at which the maximum reward difference between $M^{\mathrm{switch}}$ and $M$ occurs, and $\Delta_{\max}$ is the reward difference at that time. Qualitatively, a high score on the Transfer Gain metric indicates that a large amount of relative reward improvement has been achieved in a short amount of time (see Figure S7 for a graphical illustration of the relationship between the RGain and TGain metrics). While both the EMS and large fully-ablated modules have positive Transfer Gain, EMS scores significantly higher on this metric, i.e. it is significantly more “switchable” than the large fully-ablated module (Fig. 8e). We hypothesize that this is because the EMS module achieves high task performance with significantly fewer units than the larger fully-ablated module, making the former easier for the dynamic neural voting controller to operate on.
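Both switching metrics reduce to a few lines of code; the sketch below (NumPy, assuming the two reward curves are sampled at the same set of training steps) is our own illustrative implementation rather than the exact evaluation script.

```python
import numpy as np

def relative_auc_gain(reward_switch, reward_scratch):
    """R_Gain: relative improvement in area under the post-switch reward curve."""
    auc_switch, auc_scratch = np.trapz(reward_switch), np.trapz(reward_scratch)
    return (auc_switch - auc_scratch) / auc_scratch

def transfer_gain(reward_switch, reward_scratch, steps):
    """T_Gain: largest reward difference divided by the time at which it occurs."""
    delta = np.array(reward_switch) - np.array(reward_scratch)
    t_max = int(np.argmax(delta))
    return delta[t_max] / steps[t_max]
```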
5 CONCLUSION AND FUTURE DIRECTIONS
In this work, we introduce the TouchStream environment, a continual reinforcement learning framework that unifies a wide variety of spatial decision-making tasks within a single context. We describe a general algorithm (ReMaP) for learning light-weight neural modules that discover implicit task interfaces within this large-action/state-space environment. We show that a particular module architecture (EMS) is able to remain compact while retaining high task performance, and thus is especially suitable for flexible task learning and switching. We also describe a simple but general dynamic task-switching architecture that shows substantial ability to transfer knowledge when modules for new tasks are learned.
A crucial future direction will be to expand insights from the current work into a more complete continual-learning agent. We will need to show that our approach scales to handle dozens or hundreds of task switches in sequence. We will also need to address issues of how the agent determines when to build a new module and how to consolidate modules when appropriate (e.g. when a series of tasks previously understood as separate can be solved by a single smaller structure). It will also be critical to extend our approach to handle visual tasks with longer horizons, such as navigation or game play with extended strategic planning, which will likely require the use of recurrent memory stores as part of the feature encoder.
From an application point of view, we are particularly interested in using techniques like those described here to produce agents that can autonomously discover and operate the interfaces present in many important real-world two-dimensional problem domains, such as on smartphones or the internet (Grossman, 2007). We also expect many of the same spatially-informed techniques that enable our ReMaP/EMS modules to perform well in the 2-D TouchStream environment will also transfer naturally to a three-dimensional context, where autonomous robotics applications (Devin et al., 2016) are very compelling.
SUPPLEMENTARY MATERIAL
A TASK VARIANTS
The EMS module and all ablation controls were evaluated on a suite of 13 stimulus-response, match-to-sample, localization, and MS-COCO MTS variants:
1. 2-way SR - standard binary SR task
2. 4-way double binary SR - four class variant of SR, where each class is assigned either to the right or left half of the action space
3. 4-way quadrant SR - four class variant of SR, where each class is assigned to only a quadrant of the action space
4. 2-way stationary MTS - standard binary MTS task with stereotyped and non-moving match screens
5. 2-way stationary horiz-flip MTS - two class variant MTS task where the match templates’ horizontal placement is randomly chosen, but confined within the same vertical plane
6. 2-way stationary vert-motion MTS - two class variant MTS task where the match templates’ vertical position is randomly chosen, but each class is confined to a specific side
7. 2-way stationary vert-motion horiz-flip MTS - two class variant MTS task where the match templates’ positions are completely random
8. 4-way 2-shown MTS - four class variant MTS task where only two class templates are shown on the match screen (appearing with random horizontal location as well)
9. 4-way 2-shown vert-motion MTS - same as above, but with random vertical motion for the templates
10. 4-way 4-shown stationary MTS - four class variant MTS task where all four class templates are shown on the match screen, but with fixed positions
11. 4-way 4-shown permuted MTS - same as above, but with randomly permuted locations of all match templates
12. Localization - localization task
13. MS-COCO MTS - 80-way MTS task using the MS-COCO detection challenge dataset, where match screens are randomly sampled scenes from the dataset
B EXPERIMENT DETAILS AND DATASETS
Stimulus-Response Experiment Details: Image categories used are drawn from the Image-Net 2012 ILSVR classification challenge dataset Deng et al. (2009). Four unique object classes are taken from the dataset: Boston Terrier, Monarch Butterfly, Race Car, and Panda Bear. Each class has 1300 unique training instances, and 50 unique validation instances.
Match-To-Sample Experiment Details: Sample screen images are drawn from the same Image-Net class set as the Stimulus-Response tasks. One face-centered, unobstructed class instance is also drawn from the Image-Net classification challenge set and used as a match screen template image for that class. Class template images for the match screen were held fixed at 100x100 pixels. For all variants of the MTS task, we keep a six pixel buffer between the edges of the screen and the match images, and a twelve pixel buffer between the adjacent edges of the match images themselves. Variants without vertical motion have the match images vertically centered on the screen.
Localization Experiment Details: The Localization task uses synthetic images containing a single main salient object placed on a complex background (similar to images used in Yamins & DiCarlo (2016); Yamins et al. (2014)). There are a total of 59 unique classes in this dataset. In contrast to other single-class localization datasets (e.g. Image-Net) which are designed to have one large, face-centered, and centrally-focused object instance and for which a trivial policy of “always poke in image corners” could be learned, this synthetic image set offers larger variance in instance scale, position, and rotation so the agent is forced into learning non-trivial policies requiring larger precision in action selection.
MS-COCO MTS Experiment Details: This task uses the entire MS-COCO detection challenge dataset (Lin et al., 2014). On every timestep, a sample screen is chosen from one of the 80 MS-COCO classes. These are constructed to be large, unobstructed, face-centered representations of the class. For the match screen, we sample a random scene from MS-COCO containing any number of objects, but containing at least a single instance of the sample class. The agent is rewarded if its action is located inside any instance of the correct class. Both modules sample actions from a low-temperature Boltzmann policy as in eq. (4), which was empirically found to result in more precise reward map prediction.
C MODULES
C.1 UNITS PER LAYER
Table S1 aggregates the number of units per layer for the EMS and ablated modules which was used when conducting single-task and task-switching experiments. Only fully-connected modules’ layer sizes are shown here. For details on the convolutional bottleneck EMS module, please refer to C.2.
Table S1: Number of units per layer for investigated modules
Base-task   EMS   No symm   No mult   No mult/symm   None (small)   None (medium)   None (large)
SR          8     8         8         8              8              128             512
MTS         32    32        32        32             32             128             512
LOC         128   128       128       128            128            512             1024
Figure S1: Exhaustive module performance study of the EMS module and 23 ablation control modules, measured as the Area Under the Curve for all SR, MTS, and LOC task variants. Shown is the AUC normalized to the highest performing module in a task. Results in fig. 4 have further averaged this over the vertical task axis, and report only a salient subset of the ablations.
C.2 THE CONVOLUTIONAL-EMS MODULE
This is a "Convolutional Bottleneck" extension of the EMS module shown in the paper, where skip connections link the conv5 and the FC6 representation of the visual backbone. Here, the "scenelevel" representation stored in the FC6 ReMaP memory buffer is tiled spatially to match the present convolution dimensions (here 14x14), and concatenated onto its channel dimension. A series of 1x1 convolutions plays the role of a shallow visual bottleneck, before the activations are vectorized and concatenated with A as input to the CReS layers of the standard EMS module. The results in the paper are shown for a bottleneck consisting of a single tanh and two CReS convolutions, with 128 units each. The Downstream layers use 128 units each as well.
The motivation for the convolutional bottleneck is that lower-level features are useful for complex spatial tasks such as Localization and Object Detection, and hence may result in a more precise policy. By tiling the entire scene-level representation along the convolution layer’s channel dimension, a form of multiplicative template-matching is possible between objects that must be memorized (e.g. MS-COCO MTS templates) and what is inside the present scene.
D EXHAUSTIVE ABLATION STUDY
In all, we investigated 23 distinct ablations on the EMS module, across all twelve task variants outlined in sec. A (Fig. S1). Symmetry ablations replace CReS with the activation $x \mapsto \mathrm{ReLU}(x) \oplus x^2$. Multiplicative ablations are denoted by specifying the nonlinearity used in place of CReS (where this is one of ReLU, tanh, sigmoid, elu (Clevert et al., 2015), or CReLU (Shang et al., 2016)). This additionally includes one partial symmetry ablation (denoted “partial symm”) where only the visual bottleneck is symmetric, and one which ablates the ReLU from the “no symm” module (denoted “no symm/partial-mult”).
Table S2: Module learning rates. Columns, in order: 2-way SR; 4-way double binary SR; 4-way stationary SR; 2-way stationary MTS; 2-way vert-motion MTS; 2-way horiz-flip MTS; 2-way motion/flip MTS; 4-way 2-shown MTS; 4-way 2-shown vert-motion MTS; 4-way 4-shown stationary MTS; 4-way 4-shown permuted MTS; LOC.

EMS                   10^-3  10^-3  10^-3  5·10^-4  5·10^-4  5·10^-4  5·10^-4  5·10^-4  5·10^-4  5·10^-4  5·10^-4  10^-4
Partial symm          10^-3  10^-3  10^-3  5·10^-4  5·10^-4  5·10^-4  5·10^-4  5·10^-4  5·10^-4  5·10^-4  5·10^-4  10^-4
No symm               10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  2·10^-4  10^-4
No symm/partial mult  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  2·10^-4  10^-4
No mult/symm ReLU     10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4
No mult/symm tanh     10^-3  10^-3  10^-3  10^-3  10^-4  10^-3  10^-3  10^-3  10^-3  10^-4  10^-4  10^-4
No mult/symm sig      10^-3  10^-3  10^-3  10^-3  10^-4  10^-3  10^-3  10^-3  10^-3  10^-4  10^-4  10^-4
No mult/symm eLU      10^-3  10^-3  10^-4  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4
No mult/symm CReLU    10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4
None ReLU (small)     10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4
None ReLU (medium)    10^-3  10^-3  10^-3  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4
None ReLU (large)     10^-3  10^-3  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4
None tanh (small)     10^-4  10^-4  10^-4  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4  10^-4  10^-4
None tanh (medium)    10^-4  10^-4  10^-4  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4  10^-4  10^-4
None tanh (large)     10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4
None sig (small)      10^-4  10^-4  10^-4  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4  10^-3  10^-4
None sig (medium)     10^-4  10^-4  10^-4  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4  10^-4  10^-4
None sig (large)      10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4
None eLU (small)      10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4
None eLU (medium)     10^-3  10^-3  10^-3  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-3  10^-4  10^-4
None eLU (large)      10^-4  10^-4  10^-3  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4
None CReLU (small)    10^-3  10^-3  10^-3  10^-3  10^-3  10^-4  10^-3  10^-3  10^-3  10^-3  10^-3  10^-4
None CReLU (medium)   10^-3  10^-3  10^-3  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-3  10^-4  10^-4
None CReLU (large)    10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4  10^-4
D.1 HYPERPARAMETERS
Learning rates for the ADAM optimizer were chosen on a per-task basis through cross-validation on a grid between [10−4,10−3] for each architecture. Values used in the present study may be seen in Table S2.
E ADDITIONAL MS-COCO REWARD MAPS
Five additional reward map examples for the MS-COCO MTS task are provided in Figure S2. Examples are plotted over the course of learning.
F ADDITIONAL LEARNING CURVES
F.1 SINGLE-TASK ABLATION EXPERIMENTS
Learning trajectories for seven additional tasks are provided in Figure S3. Modules capable of convergence on a task were run until this was achieved, but AUC values for a given task are calculated at the point in time when the majority of models converge.
F.2 DYNAMIC VOTING CONTROLLER AND EMS MODULE TASK-SWITCHING EXPERIMENTS
Additional trajectories for ten unshown switching curves are provided in Figure S4.
G DYNAMIC VOTING CONTROLLER AUGMENTATIONS
G.1 LEARNABLE PARAMETER INITIALIZATIONS
Here we describe the weight initialization scheme that was found to be optimal for use with the dynamic voting controller. For simplicity, consider the layer-voting mechanism, with learnable
Figure S2: Examples of the emergence of decision interfaces in MS-COCO MTS. Reward map predictions over the course of training (training episodes in thousands) for 5 different object classes.
Figure S3: Additional single-task performance ablation learning curves. Seven learning curves are shown for task variants not seen in the main text body (2-way SR, 4-way double binary SR, 2-way stationary MTS, 2-way horiz-flip MTS, 2-way vert-motion MTS, 4-way 2-shown MTS, and 4-way 4-shown stationary MTS). Shown are the same ablations as in the main text.
Figure S4: Additional switching curves. Ten additional learning curves for task switches not shown in the main text (2-way SR → 2-way SR new classes; 2-way SR → 4-way double binary SR; 2-way stationary MTS → 2-way stationary MTS new classes; 2-way SR → 2-way stationary MTS; 2-way stationary MTS → 2-way SR; 2-way stationary MTS → 2-way vert-motion horiz-flip MTS; 2-way vert-motion horiz-flip MTS → 4-way 2-shown vert-motion MTS; 4-way 2-shown vert-motion MTS → 4-way 4-shown permuted MTS; 4-way double binary SR → 4-way 4-shown stationary MTS; 4-way double binary SR → 4-way quadrant SR). Shown are both the Single-Unit and Layer Voting implementations of the dynamic voting controller with the EMS module. “EMS” denotes a module trained on the second task from scratch.
weight matrices $W^i$ and biases $b^i$. The intended biasing scheme is achieved by initializing the elements of these parameters to:
$$W^i_\omega \sim \begin{cases} |\mathcal{N}(\mu_0,\, 0.001)| & \text{if } i = 1,\ \omega < \Omega \\ |\mathcal{N}(\mu_1,\, 0.001)| & \text{if } i > 1,\ \omega < \Omega \\ |\mathcal{N}(0.01,\, 0.001)| & \text{if } \omega = \Omega \end{cases} \quad (10)$$

$$b^i_\omega = \begin{cases} b_0 & \text{if } i = 1,\ \omega < \Omega \\ b_1 & \text{if } i > 1,\ \omega < \Omega \\ 0.1 & \text{if } \omega = \Omega \end{cases} \quad (11)$$
This initialization technique was also generalized for use with the single-unit voting mechanism.
For the switching experiments presented in section § 4.2, we sweep the hyperparameters on a narrow band around the default scheme. The ranges for these are: µ0 ∈ [0.01, 0.005] , b0 ∈ [0.1, 0.01] , µ1 ∈ [0.01, 0.02] , and b1 ∈ [0.1, 0.2, 0.5, 1.0].
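For concreteness, a NumPy sketch of drawing these initial controller parameters according to eqs. (10)-(11) is given below; the shape convention for the weight matrices is our own illustrative choice and may differ from the actual implementation.

```python
import numpy as np

def init_controller(num_layers, omega_total, fan_in, mu0, mu1, b0, b1, rng=np.random):
    """Per-layer (W, b) for the layer-voting controller, following eqs. (10)-(11).
    Columns 0..omega_total-1 correspond to pre-existing modules (omega < Omega);
    the last column corresponds to the newly allocated module (omega = Omega)."""
    Ws, bs = [], []
    for i in range(num_layers):
        mu_old = mu0 if i == 0 else mu1
        bias_old = b0 if i == 0 else b1
        W = np.empty((fan_in, omega_total + 1))
        b = np.empty(omega_total + 1)
        W[:, :omega_total] = np.abs(rng.normal(mu_old, 0.001, (fan_in, omega_total)))
        W[:, omega_total] = np.abs(rng.normal(0.01, 0.001, fan_in))
        b[:omega_total] = bias_old
        b[omega_total] = 0.1
        Ws.append(W)
        bs.append(b)
    return Ws, bs
```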
G.2 TARGETED TRANSFORMATIONS
Two additional switching mechanisms were added to the controller to augment its ability to switch between tasks which are remappings of the action space or reward policy of a preexisting module.
G.2.1 ACTION TRANSFORMATIONS
We note that efficient modules are those which can effectively produce a minimal representation of the interaction between action space $\mathcal{A}$ and observation $x_t$. If the agent's optimal action space shifts to $\mathcal{A}'$ while the remainder of the task context remains fixed, the controller should allow for a rapid targeted remapping $\mathcal{A} \mapsto \mathcal{A}'$. Since we formulate the modules as ReMaP networks, and $\mathcal{A}$ is an input feature basis, we can achieve remappings of this form through a fully-connected transformation:
$$a'_\tau = f(W_a a_\tau + b) \quad (12)$$

where $a_\tau = [h_{t-k_b:t-1}, a_t]$ is the vector of action histories, and $W_a$ and $b$ embed $a_\tau$ into the new action space $\mathcal{A}'$ using only a small number of learnable parameters.

Pseudo Identity-Preserving Transformation: In practice, we initialize the parameters in eq. (12) such that the transformation is pseudo identity-preserving, meaning that the representation learned at this level in the original module is not destroyed prior to transfer. This is done by initializing $W_a$ to be an identity matrix $I_{|a_\tau|}$ with a small amount of Gaussian noise $\sim \mathcal{N}(0.0, \sigma^2)$ added to break symmetry. $b$ is initialized to be a vector of ones of size $|a_\tau|$.
G.2.2 REWARD MAP TRANSFORMATIONS
Each of the $k_f$ maps $m_t(x)$ reflects the agent's uncertainty in the environment's reward policy. If the task context remains stationary, but the environment transitions to a new reward schedule $\mathcal{R}'$ that no longer aligns with the module's policy $\pi$, the controller could adapt to this transition by, e.g., containing a mechanism allowing for targeted transformation of $m(x)$, and hence also of $\pi$.
One complication that arises under ReMaP is that since each task-module learns its optimal action space internally, the $m(x)$ are in the basis of $\mathcal{R}$ rather than $\mathcal{A}$. Therefore, transformations on the map distribution must also re-encode $\mathcal{A}$ before mapping to $\mathcal{R}'$. In this work, we investigate a shallow "adapter" neural network that lives on top of the existing module and maps $\mathcal{R} \mapsto \mathcal{R}'$. Its first and second layers are defined by

$$l_1(x) = f(W_1[m(x) \odot g(a_\tau),\ a_\tau] + b_1) \quad (13)$$

$$m(x)' \propto W_2 l_1 + b_2 \quad (14)$$

where $g(a_\tau)$ is a similar transformation on $\mathcal{A}$ as above, $\odot$ denotes elementwise multiplication, $W_1$ is a learnable matrix embedding into a hidden state, and $W_2 \in \mathbb{R}^{|l_1| \times |\mathcal{R}'|}$ is a learnable matrix embedding into $\mathcal{R}'$.
Pseudo Identity-Preserving Transformation: Similar to the transformation on the action space, we modify the reward-map transformation to be pseudo identity-preserving as well. This is done by modifying eq. (13) such that the original maps are concatenated onto the beginning of the transformation input vector:

$$l_1(x) = f(W_1[m(x),\ m(x) \odot g(a_\tau),\ a_\tau] + b_1) \quad (15)$$

The intended map-preserving transformation is accomplished by initializing $W_1$ and $W_2$ as:

$$W^{(i,j)} \sim \begin{cases} 1.0 + \mathcal{N}(0.0,\, \epsilon) & \text{if } i = j,\ i < |\mathcal{R}| \\ \mathcal{N}(0.0,\, \epsilon) & \text{otherwise} \end{cases} \quad (16)$$
G.3 TARGETED TRANSFORMATION HYPERPARAMETERS
Both of the targeted transformations have several hyperparameters. We conducted a grid search to optimize these in a cross-validated fashion, on a set of test task switches designed to be solved by one of the targeted transformations (Fig. S5). Each was conducted independently of the dynamic voting controller, and independently of the other transformation. Optimal hyperparameters found in these experiments were fixed for use in the integrated dynamic voting controller, and were not further optimized afterwards.
Action Transformation Hyperparameters We conducted three tests using the stimulus-response paradigm: class reversal (in which the left class becomes the right class and vice-versa), a horizontal rotation of the reward boundaries (such that right becomes up and left becomes down), and a “switch” to the original task (intended to test the identity-preserving component).
In this work, we find that a single, non-activated linear transformation (f in eq. (12)) is optimal for this new state-space embedding, using 2·k_b units, and initialized such that the identity-preserving transformation weights have σ = 0.01. The learning rate for this transformation was found to be optimal at 0.1.
Reward Map Transformation Hyperparameters We conducted two tests using the stimulus-response paradigm: a “squeezing” task (where there is no longer any reward dispensed on the lower half of the screen), and a “switch” to the original task (intended to test the identity-preserving component).
In this work, we find the optimal activations in eq. (15) to be f(·) = CReS and g(·) = ReLU, with 4 units in the hidden layer. ε in the weight initialization scheme was found optimal at 0.001, with an initial bias of 0.01. The optimal learning rate for this transformation was found to be 0.01.
G.4 TRANSFORM ABLATION
A study was conducted to determine the relative benefit of the targeted transformations (Fig. S6), where it was determined that the primary contribution of the dynamic neural controller was in fact the voting mechanism (although the transformations did supplement this as well).
G.5 DEPLOYMENT SCHEME OF TASK MODULES
When cued that a task transition has occurred, the controller freezes the learnable parameters of the old task-module and deploys a new uninitialized task-module. The controller then initializes the action and reward map transformation networks as described in G.2 on top of the old module. These transformations are also voted on inside the dynamic neural controller at every timestep.
H SWITCHING METRICS
Figure S7 graphically illustrates the metrics used inside the paper to quantify switching performance: RGain and TGain.
Figure S5: Action and reward map transformation switch examples. Three task switching experiments were performed to optimize the hyperparameters of the targeted transformations that augment the dynamic neural voting controller. These switches are also retested in the fully-integrated meta-controller and shown in the original switching result figure. a. Binary stimulus-response class reversals, where the left class becomes the right class, and vice-versa. b. Rotations of the binary stimulus-response reward boundaries. c. A “squeezing” of the binary stimulus-response reward boundaries, where no reward is given on the new task on the bottom half of the screen, regardless of class shown.
R el
at iv
e A
U C
G ai
n
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 −0.1
0.0
0.1
0.2
0.3
0.4
0.5
Figure S6: Targeted controller transform ablation. Relative AUC Gain for the EMS module over the same switching scenarios in the paper, but with the targeted transformations ablated.
Figure S7: Illustration of switching performance metrics. We quantify the switching performance of the dynamic neural controller and task-modules by two metrics: “relative gain in AUC” (ratio of green to purple shaded regions), and “transfer” gain (difference of reward at T∆max). Relative AUC measures the overall gain relative to scratch, and the transfer gain measures the speed of transfer. Curve shown is the EMS module with Single-Unit voting method evaluated on a switch from a 4-way MTS task with two randomly moving class templates to a 4-way MTS task with four randomly moving templates. | 1. What are the main contributions and ideas presented in the paper regarding continual learning?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in its ability to solve tasks and switch between them?
3. How does the reviewer assess the length and content of the paper, and what suggestions do they provide for improvement?
4. What questions does the reviewer raise regarding the biological plausibility and time component aspects of the proposed approach?
5. Are there any minor issues or typos identified by the reviewer in the paper's presentation? | Review | Review
The paper comprises several ideas to study the continual learning problem. First, they present an ad-hoc designed environment, namely the Touchstream environment, in which both inputs and actions are represented in a huge space: as it happens with humans – for example when they are using a touch screen – the resolution of the input space, i.e. the images, is at least as big as the resolution of the action space, i.e. where you click on the screen. This environment introduces the interesting problem of a direct mapping between input and actions. Second, they introduce an algorithm to solve this mapping problem in the Touchstream space. Specifically, the ReMaP algorithm learns to solve typical neuroscience tasks, by optimizing a computational module that facilitates the mapping in this space. The Early Bottleneck Multiplicative Symmetric (EMS) module extends the types of computation you might need to solve the tasks in the Touchstream space. Third, the authors introduce another module to learn how to switch from task to task in a dynamical way.
The main concern with this paper is about its length. While the conference does not provide any limits in terms of number of pages, the 13 pages for the main text plus another 8 for the supplementary material is probably too much. I am wondering if the paper just contains too much information for a single conference publication.
As a consequence, both the learning of the single tasks and the task switching experiments could have been addressed in further detail. In the case of single-task learning, the tasks are relatively simple, where k_b = 1 (memory) and k_f = 2 (prediction) are sufficient to solve the tasks. It would have been interesting to evaluate the algorithm on more complex tasks, and how these parameters affect the learning. In the case of task switching, a more interesting question would be how the network learns a sequence of tasks (more than one switch).
Overall, the work is interesting, well described and the results are consistent.
Some questions:
- the paper starts with clear inspiration from Neuroscience, but nothing has been said on the biological plausibility of ReMaP, EMS and the recurrent neural voting;
- animals usually learn in continuous time, thus such tasks usually involve a time component, while this work is designed in a time-step framework; could the authors comment on that? (starting from Doya 2000)
- specifically, the MTS and the Localization tasks involve working memory, thus a comparison with other working-memory reinforcement learning approaches would make more sense than different degrees of ablated modules. (for example Bakker 2002)
Minor:
- Fig 6 does not have letters
- TouchStream is sometimes written as Touchstream
Ref.
Doya, K. (2000). Reinforcement learning in continuous time and space. Neural Computation, 12(1), 219–245.
Bakker, B. (2002). Reinforcement Learning with Long Short-Term Memory. In T. G. Dietterich, S. Becker, & Z. Ghahramani (Eds.), (pp. 1475–1482). Presented at the Advances in Neural Information Processing Systems 14, MIT Press. |
ICLR | Title
Modular Continual Learning in a Unified Visual Environment
Abstract
A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.
INTRODUCTION
In the course of everyday functioning, people are constantly faced with real-world environments in which they are required to shift unpredictably between multiple, sometimes unfamiliar, tasks (Botvinick & Cohen, 2014). They are nonetheless able to flexibly adapt existing decision schemas or build new ones in response to these challenges (Arbib, 1992). How humans support such flexible learning and task switching is largely unknown, both neuroscientifically and algorithmically (Wagner et al., 1998; Cole et al., 2013).
We investigate solving this problem with a neural module approach in which simple, task-specialized decision modules are dynamically allocated on top of a largely-fixed underlying sensory system (Andreas et al., 2015; Hu et al., 2017). The sensory system computes a general-purpose visual representation from which the decision modules read. While this sensory backbone can be large, complex, and learned comparatively slowly with significant amounts of training data, the task modules that deploy information from the base representation must, in contrast, be lightweight, quick to be learned, and easy to switch between. In the case of visually-driven tasks, results from neuroscience and computer vision suggest the role of the fixed general purpose visual representation may be played by the ventral visual stream, modeled as a deep convolutional neural network (Yamins & DiCarlo, 2016; Razavian et al., 2014). However, the algorithmic basis for how to efficiently learn and dynamically deploy visual decision modules remains far from obvious.
In standard supervised learning, it is often assumed that the output space of a problem is prespecified in a manner that just happens to fit the task at hand – e.g. for a classification task, a discrete output with a fixed number of classes might be determined ahead of time, while for a continuous estimation problem, a one-dimensional real-valued target might be chosen instead. This is a very convenient simplification in supervised learning or single-task reinforcement learning contexts, but if one is interested in the learning and deployment of decision structures in a rich environment defining tasks with many different natural output types, this simplification becomes cumbersome.
To go beyond this limitation, we build a unified environment in which many different tasks are naturally embodied. Specifically, we model an agent interacting with a two-dimensional touchscreen-like GUI that we call the TouchStream, in which all tasks (discrete categorization tasks, continuous estimation problems, and many other combinations and variants thereof) can be encoded using a single common and intuitive – albeit large – output space. This choice frees us from having to hand-design or programmatically choose between different output domain spaces, but forces us to confront the core challenge of how a naive agent can quickly and emergently learn the implicit “interfaces” required to solve different tasks.
We then introduce Reward Map Prediction (ReMaP) networks, an algorithm for continual reinforcement learning that is able to discover implicit task-specific interfaces in large action spaces like those of the TouchStream environment. We address two major algorithmic challenges associated with learning ReMaP modules. First, what module architectural motifs allow for efficient task interface learning? We compare several candidate architectures and show that those incorporating certain intuitive design principles (e.g. early visual bottlenecks, low-order polynomial nonlinearities and symmetry-inducing concatenations) significantly outperform more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Second, what system architectures are effective for switching between tasks? We present a meta-controller architecture based on a dynamic neural voting scheme, allowing new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.
In § 1 we formalize the TouchStream environment. In § 2, we introduce the ReMaP algorithm. In § 3, we describe and evaluate comparative performance of multiple ReMaP module architectures on a variety of TouchStream tasks. In § 4, we describe the Dynamic Neural Voting meta-controller, and evaluate its ability to efficiently transfer knowledge between ReMaP modules on task switches.
RELATED WORK
Modern deep convolutional neural networks have had significant impact on computer vision and artificial intelligence (Krizhevsky et al., 2012), as well as in the computational neuroscience of vision (Yamins & DiCarlo (2016)). There is a recent but growing literature on convnet-based neural modules, where they have been used for solving compositional visual reasoning tasks (Andreas et al., 2015; Hu et al., 2017). In this work we apply the idea of modules to solving visual learning challenges in a continual learning context. Existing works rely on choosing between a menu of pre-specified module primitives, using different module types to solve subproblems involving specific input-output datatypes, without addressing how these modules’ forms are to be discovered in the first place. In this paper, we show a single generic module architecture is capable of automatically learning to solve a wide variety of different tasks in a unified action/state space, and a simple controller scheme is able to switch between such modules.
Our results are also closely connected with the literature on lifelong (or continual) learning (Kirkpatrick et al., 2016; Rusu et al., 2016). A part of this literature is concerned with learning to solve new tasks without catastrophically forgetting how to solve old ones (Zenke et al., 2017; Kirkpatrick et al., 2016). The use of modules obviates this problem, but instead shifts the hard question to one of how newly-allocated modules can be learned effectively. The continual learning literature also directly addresses knowledge transfer to newly allocated structures (Chen et al., 2015; Rusu et al., 2016; Fernando et al., 2017), but largely addresses how transfer learning can lead to higher performance, rather than addressing how it can improve learning speed. Aside from reward performance, we focus on issues of speed in learning and task switching, motivated by the remarkably efficient adaptability of humans in new task contexts. Existing work in continual learning also largely does not address which specific architecture types learn tasks efficiently, independent of transfer. By focusing first on identifying architectures that achieve high performance quickly on individual tasks (§ 3), our transfer-learning investigation then naturally focuses more on how to efficiently identify when and how to re-use components of these architectures (§ 4). Most of these works also make explicit a priori assumptions about the structure of the tasks to be encoded into the models (e.g. output type, number of classes), rather than address the more general question of emergence of solutions in an embodied case, as we do.
Meta-reinforcement learning approaches such as Wang et al. (2016); Duan et al. (2016), as well as the schema learning ideas of e.g. Arbib (1992); McClelland (2013) typically seek to address the issue of continual learning by having a complex meta-learner extract correlations between tasks over a long timescale. In our context most of the burden of environment learning is placed on the individual modules, so our meta-controller can thus be comparatively light-weight compared to typical meta-reinforcement approaches. Unlike our case, meta-learning has mostly been limited to small state or action spaces. Some recent work in general reinforcement learning (e.g. Ostrovski et al. (2017); Dulac-Arnold et al. (2015)) has addressed the issue of large action spaces, but has not sought to address multitask transfer learning in these large action spaces.
1 THE TOUCHSTREAM ENVIRONMENT
Agents in a real-world environment are exposed to many different implicit tasks, arising without predefined decision structures, and must learn on the fly what the appropriate decision interfaces are
for each situation. Because we are interested in modeling how agents can do this on-the-fly learning, our task environment should mimic the unconstrained nature of the real world. Here, we describe the TouchStream environment, which attempts to do this in a simplified two-dimensional domain.
Our problem setup consists of two components, an “environment” and an “agent,” interacting over an extended temporal sequence (Fig. 1). At each timestep t, the environment emits an RGB image xt of height H and width W , and a scalar reward rt. Conversely, the agent accepts images and rewards as input and chooses an action at in response. The action space A available to the agent consists of a two-dimensional pixel grid {0, . . . ,H − 1}×{0, . . . ,W − 1} ⊂ Z2, of the same height and width as its input image. The environment is equipped with a policy (unknown to the agent) that on each time step computes image xt and reward rt as a function of the history of agent actions {a0, . . . , at−1}, images {x0, . . . , xt−1} and rewards {r0, . . . , rt−1}. In this work, the agent is a neural network, composed of a visual backbone with fixed weights, together with a meta-controller module whose parameters are learned by interaction with the environment. The agent’s goal is to learn to enact a policy that maximizes its reward obtained over time. Unlike an episodic reinforcement learning context, the TouchStream environment is continuous: throughout the course of learning the agent is never signaled when it should reset to some “initial” internal state. However, unlike the traditional continuous learning context of e.g. Sutton & Barto (1998), a TouchStream may implicitly define many different tasks, each of which is associated with its own characteristic reward schedule. The agent experiences a continual stream of tasks, and any implicit association between reward schedule and state reset must be discovered by the agent.
By framing the action space A of the agent as all possible pixel locations and the state space as any arbitrary image, a very wide range of possible tasks are unified in this single framework, at the cost of requiring the agent’s action space to be congruent to its input state space, and thus be quite large. This presents two core efficiency challenges for the agent: on any given task, it must be able to both quickly recognize what the “interface” for the task is, and transfer such knowledge across tasks in a smart way. Both of these goals are complicated by the large size of the agent’s state and action spaces.
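To make the interaction protocol concrete, the toy loop below mimics the agent–environment interface described above; the `DummyTouchStream` class is purely illustrative and is not the actual TouchStream implementation.

```python
import numpy as np

H, W = 224, 224  # screen resolution; the action space is the full H x W pixel grid

class DummyTouchStream:
    """Toy stand-in for the TouchStream: emits an RGB frame and a scalar reward."""
    def step(self, action):
        x = np.random.rand(H, W, 3)                                      # next image x_t
        r = 1.0 if action is not None and action[1] >= W // 2 else 0.0   # toy reward rule
        return x, r

env = DummyTouchStream()
x_t, r_t = env.step(None)                                 # initial observation
for t in range(5):
    a_t = (np.random.randint(H), np.random.randint(W))    # a (row, col) touch on the screen
    x_t, r_t = env.step(a_t)                              # continual stream; no episode resets
```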
Although we work with modern large-scale computer vision-style datasets and tasks in this work, e.g. ImageNet (Deng et al. (2009)) and MS-COCO (Lin et al. (2014)), we are also inspired by visual psychology and neuroscience, which have pioneered techniques for how controlled visual tasks can be embodied in real reinforcement learning paradigms (Horner et al., 2013; Rajalingham et al., 2015). Especially useful are three classes of task paradigms that span a range of the ways discrete and continuous estimation tasks can be formulated – including Stimulus-Response, Match-To-Sample, and Localization tasks (Fig. 2).
Stimulus-Response Tasks: The Stimulus-Response (SR) paradigm is a common approach to physically embodying discrete categorization tasks (Gaffan & Harrison, 1988). For example, in the simple two-way SR discrimination task shown in Fig. 2a, the agent is rewarded if it touches the left half of the screen after being shown an image of a dog, and the right half after being shown a butterfly. SR tasks can be made more difficult by increasing the number of image classes or the complexity of the reward boundary regions. In our SR experiments, we use images and classes from the ImageNet dataset (Deng et al., 2009).
Match-To-Sample Tasks: The Match-to-Sample (MTS) paradigm is another common approach to assessing visual categorization abilities (Murray & Mishkin, 1998). In the MTS task shown in Fig. 2b, trials consist of a sequence of two image frames – the “sample” screen followed by the “match” screen – in which the agent is expected to remember the object category seen on the sample frame, and then select an onscreen “button” (really, a patch of pixels) on the match screen corresponding to the sample screen category. Unlike SR tasks, MTS tasks require some working memory and more localized spatial control. More complex MTS tasks involve more sophisticated relationships between the sample and match screen. In Fig. 2c, using the MS-COCO object detection challenge dataset (Lin et al., 2014), the sample screen shows an isolated template image indicating one of the 80 MS-COCO classes, while the match screen shows a randomly-drawn scene from the dataset containing at least one instance of the sample-image class. The agent is rewarded if its chosen action is located inside the boundary of an instance (e.g. the agent “pokes inside”) of the correct class. This MS-COCO MTS task is a “hybrid” of categorical and continuous elements, meaning that if phrased as a standard
supervised learning problem, both a categorical readout (i.e. class identity) and a continuous readout (i.e. object location) would be required.
Localization: Fig. 2d shows a two-step continuous localization task in which the agent is supposed to mark out the bounding box of an object by touching opposite corners on two successive timesteps, with reward proportionate to the Intersection over Union (IoU) value of the predicted bounding box relative to the ground truth bounding box, IoU = Area(B_GT ∩ B̂) / Area(B_GT ∪ B̂). In localization, unlike the SR and MTS paradigms, the choice made at one timestep constrains the agent’s optimal choice on a future timestep (e.g. picking the upper left corner of the bounding box on the first step constrains the lower right opposite corner to be chosen on the second).
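The IoU-based reward can be computed as in the short sketch below; the corner-coordinate box format is an assumption for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2) corners."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Reward for the two-step localization task: the two touches define opposite corners
# of the predicted box B_hat, which is scored against the ground-truth box B_GT.
reward = iou((10, 10, 100, 120), (20, 15, 90, 110))
```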
Although these tasks can become arbitrarily complex along certain axes, the tasks presented here require only fixed-length memory and future prediction. That is, each task requires only knowledge of the past kb timesteps, and a perfect solution always exists within kf timesteps from any point. The minimal required values of kb and kf are different across the various tasks in this work. However, in the investigations below, we set these to the maximum required values across tasks, i.e. kb = 1 and kf = 2. Thus, the agent is required to learn for itself when it is safe to ignore information from the past and when it is irrelevant to predict past a certain point in the future.
We will begin by considering a restricted case where the environment runs one semantic task indefinitely, showing how different architectures learn to solve such individual tasks with dramatically different levels of efficiency (§ 2-3). We will then expand to considering the case where the environment’s policy consists of a sequence of tasks with unpredictable transitions between tasks, and exhibit a meta-controller that can cope effectively with this expanded domain (§ 4).
2 REWARD MAP PREDICTION
The TouchStream environment necessarily involves working with large action and state spaces. Methods for handling this situation often focus on reducing the effective size of action/state spaces, either via estimating pseudo-counts of state-action pairs, or by clustering actions (Ostrovski et al., 2017; Dulac-Arnold et al., 2015). Here we take another approach, using a neural network to directly approximate the (image-state modulated) mapping between the action space and reward space, allowing learnable regularities in the state-action interaction to implicitly reduce the large spaces into something manageable by simple choice policies. We introduce an off-policy algorithm for efficient multitask reinforcement learning in large action and state spaces: Reward Map Prediction, or ReMaP.
2.1 REMAP NETWORK ALGORITHM
As with any standard reinforcement learning situation, the agent seeks to learn an optimal policy π = p(at | xt) defining the probability density p over actions given image state xt. The ReMaP algorithm is off-policy, in that π is calculated as a simple fixed function of the estimated reward.
A ReMaP network MΘ is a neural network with parameters Θ, whose inputs are a history over previous timesteps of (i) the agent’s own actions, and (ii) an activation encoding of the agent’s state space; and which explicitly approximates the expected reward map across its action space for some number of future timesteps. Mathematically:
M_Θ : [Ψ_{t−k_b:t}, h_{t−k_b:t−1}] ↦ [m_t^1, m_t^2, . . . , m_t^{k_f}], where k_b is the number of previous timesteps considered; k_f is the length of future horizon to be considered; Ψ_{t−k_b:t} is the history [ψ(x_{t−k_b}), . . . , ψ(x_t)] of state space encodings produced by fixed backbone network ψ(·), h_{t−k_b:t−1} is the history [a_{t−k_b}, . . . , a_{t−1}] of previously chosen actions, and each m^i ∈ map(A, R) – that is, a map from action space to reward space. The predicted reward maps are constructed by computing the expected reward obtained for a subsample of actions drawn randomly from A:
m_t^j : a_t ↦ E[r_{t+j} | a_t, h_{t−k_b:t−1}, Ψ_{t−k_b:t}] = ∫_R r_{t+j} p(r_{t+j} | a_t, h_{t−k_b:t−1}, Ψ_{t−k_b:t}).    (1)
where r_{t+j} is the predicted reward j steps into the future horizon. Having produced k_f reward prediction maps, one for each timestep of its future horizon, the agent needs to determine what
it believes will be the single best action over all the expected reward maps [m_t^1, m_t^2, . . . , m_t^{k_f}].
The ReMaP algorithm formulates doing so by normalizing the predictions across each of these kf maps into separate probability distributions, and sampling an action from the distribution which has maximum variance. That is, the agent computes its policy π as follows:
π = VarArgmax_{j=1}^{k_f} {Dist[Norm[m_t^j]]},    (2)
where Norm[m] = m − min_{x∈A} m(x)    (3)
is a normalization that removes the minimum of the map,
Dist[m] = f(m) / ∫_A f(m(x))    (4)
ensures it is a probability distribution parameterized by functional family f(·), and VarArgmax is an operator which chooses the input with largest variance.
The sampling procedure described in equation (2) uses two complementary ideas to exploit spatial and temporal structure to efficiently explore a large action space. Since rewards in real physical tasks are spatially correlated, the distribution-based sampler in Equation (4) allows for more effective exploration of potentially informative actions than would the single-point estimate of an apparent optimum (e.g. an ε-greedy policy). Further, in order to reduce uncertainty, the ReMaP algorithm explores timesteps with greatest reward map variance. The VarArgmax function nonlinearly upweights the timeframe with highest variance to exploit the fact that some points in time carry disproportionate relevance for reward outcome, somewhat analogously to how max-pooling operates in convolutional networks. Although any standard action selection strategy can be used in place of the one in (2) (e.g. pseudo ε-greedy over all k_f maps), we have empirically found that this policy is effective at efficiently exploring our large action space.
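A compact NumPy sketch of this action-selection step (eqs. (2)–(4)) is shown below, operating on a precomputed array of predicted rewards for a set of sampled candidate actions; the toy values and the small epsilon added for numerical stability are illustrative.

```python
import numpy as np

def select_action(reward_maps, candidate_actions, f=lambda m: m):
    """ReMaP policy sketch: Norm (eq. 3), Dist (eq. 4), then VarArgmax sampling (eq. 2).

    reward_maps: array of shape (k_f, n_candidates) of predicted rewards.
    f: functional family of eq. (4); identity by default, or a low-temperature
       Boltzmann form for high-precision tasks, as discussed in the text.
    """
    dists = []
    for m in reward_maps:
        m = m - m.min()                      # Norm[m]
        p = f(m)
        p = p / (p.sum() + 1e-12)            # Dist[m]
        dists.append(p)
    dists = np.stack(dists)
    j = int(np.argmax(dists.var(axis=1)))    # VarArgmax: future timestep with largest variance
    idx = np.random.choice(len(candidate_actions), p=dists[j])
    return candidate_actions[idx]

# Toy example: k_f = 2 future timesteps, 5 sampled pixel locations
maps = np.array([[0.1, 0.2, 0.9, 0.1, 0.0],
                 [0.3, 0.1, 0.2, 0.4, 0.0]])
actions = [(10, 20), (50, 60), (100, 200), (5, 5), (220, 220)]
print(select_action(maps, actions))
```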
The parameters Θ of a ReMaP network are learned by gradient descent on the reward prediction error loss, Θ* = argmin_Θ L[m_t(a_t), r_t; Θ], with map m_t^j compared to the true reward r_{t+j}. Only the reward prediction in m_t corresponding to the action chosen at timestep t participates in loss calculation and backpropagation of error signals. A minibatch of maps, rewards, and actions is collected over several consecutive inference passes before performing a parameter update.
The ReMaP algorithm is summarized in Algorithm 1.
Algorithm 1: ReMaP – Reward Map Prediction
Initialize ReMaP network M
Initialize state and action memory buffers Ψ_{t−k_b:t} and h_{t−k_b:t−1}
for timestep t = 1, T do
    Observe x_t, encode with state space network ψ(·), and append to state buffer
    Subsample set of potential action choices a_t uniformly from A
    Produce k_f expected reward maps of a_t from eq. (1)
    Select action according to policy π as in (2)
    Execute action a_t in environment, store in action buffer, and receive reward r_t
    Calculate loss for this and previous k_f − 1 timesteps
    if t ≡ 0 mod batch size then
        Perform parameter update
Throughout this work, we take our fixed backbone state space encoder to be the VGG-16 convnet, pretrained on ImageNet (Simonyan & Zisserman, 2014). Because the resolution of the input to this network is 224x224 pixels, our action space A = {0, . . . , 223} × {0, . . . , 223}. By default, the functional family f used in the action selection scheme in Eq. (4) is the identity, although on tasks benefiting from high action precision (e.g. Localization or MS-COCO MTS), it is often optimal to sample a low-temperature Boltzmann distribution with f(x) = e−x/T . Reward prediction errors are calculated using the cross-entropy loss (where logits are smooth approximations to the Heaviside function in analogy to eq. (5)).
3 EFFICIENT NEURAL MODULES FOR TASK LEARNING
The main question we seek to address in this section is: what specific neural network structure(s) should be used in ReMaP modules? The key considerations are that such modules (i) should be easy to learn, requiring comparatively few training examples to discover optimal parameters Θ∗, and (ii) easy to learn from, meaning that an agent can quickly build a new module by reusing components of old ones.
Intuitive Example: As an intuition-building example, consider the case of a simple binary Stimulus-Response task, as in Fig. 2a (“if you see a dog touch on the right, if a butterfly touch on the left”). One decision module that is a “perfect” reward predictor on this task is expressed analytically as:
M[Ψ_t](a_x, a_y) = H(ReLU(WΨ_t) · ReLU(a_x) + ReLU(−WΨ_t) · ReLU(−a_x))    (5)
where H is the Heaviside function, a_x and a_y are the x and y components of the action a ∈ A relative to the center of the screen, and W is a 1 × |Ψ_t| matrix expressing the class boundary (bias term omitted for clarity). If WΨ_t is positive (i.e. the image is of a dog) then a_x must also be positive (i.e. touch is on the right) to predict positive reward; conversely, if WΨ_t is negative (i.e. butterfly), a_x must be negative (i.e. left touch) to predict reward. If neither of these conditions hold, both terms are equal to zero, so the formula predicts no reward. Since vertical location of the action does not affect reward, a_y is not involved in reward calculation on this task.
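A direct NumPy transcription of eq. (5) is given below; the random W is only a stand-in for a learned class boundary.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def perfect_sr_module(psi, a_x, W):
    """Eq. (5): reward prediction for the binary SR task.

    psi : visual feature vector (e.g. VGG FC6 activations)
    a_x : horizontal action coordinate relative to the screen center
    W   : 1 x |psi| class-boundary matrix (bias omitted, as in the text)
    """
    s = float(W @ psi)
    score = relu(s) * relu(a_x) + relu(-s) * relu(-a_x)
    return float(score > 0)      # Heaviside step: predicted reward is 0 or 1

psi = np.random.randn(4096)                  # stand-in for FC6 features
W = np.random.randn(1, psi.size)             # stand-in for a learned class boundary
print(perfect_sr_module(psi, a_x=0.3, W=W))
```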
Equation (5) has three basic ideas embedded in its structure:
• there is an early visual bottleneck, in which the high-dimensional general purpose feature representation Ψt is greatly reduced in dimension (in this case, from the 4096 features of VGG’s FC6 layer, to 1) prior to combination with action space,
• there is a multiplicative interaction between the action vector and (bottlenecked) visual features, and
• there is symmetry, e.g. the first term of the formula is the sign-antisymmetric partner of the second term, reflecting something about the spatial structure of the task.
In the next sections, we show these three principles can be generalized into a parameterized family of networks from which the visual bottleneck (the W parameters) and decision structure (the form of equation (5)) can emerge naturally and efficiently via learning for any given task of interest.
3.1 THE EMS MODULE
In this section we define a generic ReMaP module which is lightweight, encodes all three generic design principles from the “perfect” formula, and uses only a small number of learnable parameters.
Define the concatenated square nonlinearity as
Sq : x ↦ x ⊕ x²
and the concatenated ReLU nonlinearity (Shang et al. (2016)) as
CReLU : x ↦ ReLU(x) ⊕ ReLU(−x)
where ⊕ denotes vector concatenation. The CReS nonlinearity is then defined as the composition of CReLU and Sq, e.g.
CReS : x ↦ ReLU(x) ⊕ ReLU(−x) ⊕ ReLU²(x) ⊕ ReLU²(−x). The CReS nonlinearity introduces multiplicative interactions between its arguments via its Sq component and symmetry via its use of CReLU. Definition. The (n_0, n_1, . . . , n_k)-Early Bottleneck-Multiplicative-Symmetric (EMS) module is the ReMaP module given by
B = CReLU(W_0 Ψ + b_0)
l_1 = CReS(W_1(B ⊕ a) + b_1)
l_i = CReS(W_i l_{i−1} + b_i) for i > 1
where W_i and b_i are learnable parameters, Ψ are features from the fixed visual encoding network, and a is the action vector in A.
The EMS structure builds in each of the three principles described above. The B stage represents the early bottleneck in which visual encoding inputs are bottlenecked to size n_0 before being combined with actions; the module then performs k CReS stages, introducing multiplicative symmetric interactions between visual features and actions. From this, the “perfect” module definition for the binary SR task in eq. (5) then becomes a special case of a two-layer EMS module. Note that the visual features to be bottlenecked can be from any encoder; in practice, we work with both fully connected and convolutional features of the VGG-16 backbone.
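The sketch below expresses an EMS module in PyTorch; the unit counts, the two CReS stages, and the scalar reward readout are illustrative choices, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

def crelu(x):
    return torch.cat([torch.relu(x), torch.relu(-x)], dim=-1)

def cres(x):
    r = crelu(x)
    return torch.cat([r, r ** 2], dim=-1)   # CReLU outputs plus their squares

class EMSModule(nn.Module):
    """Sketch of an (n0, n1, ..., nk) EMS ReMaP module (layer sizes are placeholders)."""
    def __init__(self, feat_dim, action_dim, n0=8, hidden=(8, 8)):
        super().__init__()
        self.bottleneck = nn.Linear(feat_dim, n0)   # early visual bottleneck, then CReLU
        dims, layers = 2 * n0 + action_dim, []
        for n in hidden:
            layers.append(nn.Linear(dims, n))
            dims = 4 * n                            # CReS quadruples the width
        self.layers = nn.ModuleList(layers)
        self.readout = nn.Linear(dims, 1)           # predicted reward for the given action

    def forward(self, psi, a):
        B = crelu(self.bottleneck(psi))             # B = CReLU(W_0 psi + b_0)
        h = torch.cat([B, a], dim=-1)
        for layer in self.layers:
            h = cres(layer(h))                      # l_i = CReS(W_i h + b_i)
        return self.readout(h)
```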
In the experiments that follow, we compare the EMS module to a wide variety of alternative control motifs, in which the early bottleneck, multiplicative, and symmetric features are ablated. Multiplicative nonlinearity and bottleneck ablations use a spectrum of more standard activation functions, including ReLU, tanh, sigmoid, elu (Clevert et al., 2015), and CReLU forms. In late bottleneck (fully-ablated) architectures – which are, effectively, “standard” multi-layer perceptrons (MLPs) – action vectors are concatenated directly to the output of the visual encoder before being passed through subsequent stages. In all, we test 24 distinct architectures. Detailed information on each can be found in the Supplement.
3.2 EXPERIMENTS
We compared each architecture across 12 variants of visual SR, MTS, and localization tasks, using fixed visual encoding features from layer FC6 of VGG-16. Task variants ranged in complexity from simple (e.g. a binary SR task with ImageNet categories) to more challenging (e.g. a many-way ImageNet MTS task with result buttons appearing in varying positions on each trial). The most complex tasks are two variants of localization, either with a single main salient object placed on a complex background (similar to images used in Yamins & DiCarlo (2016)), or complex scenes from MS-COCO (see Fig. 3b). Details of the tasks used in these experiments can be found in the Supplement. Module weights were initialized using a normal distribution with µ = 0.0, σ = 0.01, and optimized using the ADAM algorithm (Kingma & Ba (2014)) with parameters β1 = 0.9, β2 = 0.999 and ε = 1e−8. Learning rates were optimized on a per-task, per-architecture basis in a cross-validated fashion. For each architecture and task, we ran optimizations from five different initialization seeds to obtain mean and standard error due to initial condition variability. For fully-ablated “late-bottleneck” modules, we measured the performance of modules of three different sizes (small, medium, and large), where the smallest version is equivalent in size to the EMS module, and the medium and large versions are much larger (Table S1).
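As a sketch, the reported optimization settings correspond to a configuration like the following (using the EMSModule sketch above; the learning rate shown is a placeholder, since rates were tuned per task and architecture):

```python
import torch

module = EMSModule(feat_dim=4096, action_dim=2)      # from the sketch above
for p in module.parameters():
    torch.nn.init.normal_(p, mean=0.0, std=0.01)      # N(0.0, 0.01) weight init, as described
optimizer = torch.optim.Adam(module.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8)
```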
Emergence of Decision Structures: A key feature of ReMaP modules is that they are able to discover de novo the underlying output domain spaces for a variety of qualitatively distinct tasks (Fig. 3; more examples in Fig. S2). The emergent decision structures are highly interpretable and reflect the true interfaces that the environment implicitly defines. The spatiotemporal patterns of learning are robust across tasks and replicable across initial seedings, and thus might serve as a candidate model of interface use and learning in humans. In general, we observe that the modules typically discover
the underlying “physical structures” needed to operate the task interface before learning the specific decision rules needed to solve the task.
For example, in the case of a discrete MTS categorization task (Fig. 3a), this involves the quick discovery of onscreen “buttons” corresponding to discrete action choices before these buttons are mapped to their semantic meaning. In the case of the MS-COCO MTS task (Fig. 3b), we observe the initial discovery of high-salience object boundaries, followed by category-specific refinement. It is important to note that the visual backbone was trained on a categorization task, quite distinct from the localization task in MS-COCO MTS. Thus, the module had to learn this very different decision structure, as well as the class boundaries of MS-COCO, from scratch during training.
Efficiency of the EMS module: The efficiency of learning was measured by computing the task-averaged, normalized area under the learning curve (TA-N-AUC) for each of the 24 modules tested, across all 12 task variants. Fig. 4a-d shows characteristic learning curves for several tasks, summarized in the table in Fig. 4e. Results for all architectures for all tasks are shown in Supplement Figure S1. We find that the EMS module is the most efficient across tasks (0.997 TA-N-AUC). Moreover, the EMS architecture always achieves the highest final reward level on each task.
Increasing ablations of the EMS structure lead to increasingly poor performance, both in terms of learning efficiency and final performance. Ablating the low-order polynomial interaction (replacing Sq with CReLU) had the largest negative effect on performance (0.818 TA-N-AUC), followed in importance by the symmetric structure (0.944 TA-N-AUC). Large fully-ablated models (no bottleneck, using only ReLU activations) performed significantly worse than the smaller EMS module and the single ablations (0.717 TA-N-AUC), but better than the module with neither symmetry nor multiplicative interactions (0.566 TA-N-AUC). Small fully-ablated modules with the same number of parameters as EMS were by far the least efficient (0.403 TA-N-AUC) and oftentimes achieved much lower final reward. In summary, the main conceptual features by which the special-case architecture in eq. (5) solves the binary SR task are both individually helpful, combine usefully, and can be parameterized and efficiently learned for a variety of visual tasks. These properties are critical to achieving effective task learning compared to standard MLP structures.
In a second experiment focusing on localization tasks, we tested an EMS module using convolutional features from the fixed VGG-16 feature encoder, reasoning that localization tasks could benefit from finer spatial feature resolution. We find that using visual features with explicit spatial information
substantially improves task performance and learning efficiency on these tasks (Fig. 5). To our knowledge, our results on MS-COCO are the first demonstrated use of reinforcement learning to achieve instance-level object segmentations. Reward curves (measuring bounding box IoU) in Fig. 5a show little difference between any of the late bottleneck modules at any size. The only models to consistently achieve an IoU above 0.4 are the EMS-like variants, especially with convolutional features. For context, a baseline SVR trained using supervised methods to directly regress bounding boxes using the same VGG features results in an IoU of 0.369.
4 DYNAMIC NEURAL VOTING FOR TASK SWITCHING
So far, we’ve considered the case where the TouchStream consists of only one task. However, agents in real environments are often faced with having to switch between tasks, many of which they may be encountering for the first time. Ideally, such agents would repurpose knowledge from previously learned tasks when it is relevant to a new task.
Formally, we now consider environment policies consisting of sequences of tasks T = {τ_1, τ_2, ..., τ_Ω}, each of which may last for an indeterminate period of time. Consider also a set of modules M, where each module corresponds to a task-specific policy π_ω(a | x) = p(a_t | x_t, τ_ω). When a new task begins, we cue the agent to allocate a new module M_{Ω+1}, which is added to the set of modules M. In the learning that follows allocation, the weights in old modules are held fixed while the parameters in the new module M_{Ω+1} are trained. However, the output of the system is not merely the output of the new module, but instead is a dynamically allocated mixture of pathways through the computation graphs of the old and new modules. This mixture is determined by a meta-controller (Fig. 6). The meta-controller is itself a neural network which learns a dynamic distribution over (parts of) modules to be used in building the composite execution graph. Intuitively, this composite graph is composed of a small number of relevant pathways that mix and match parts of existing modules to solve the new task, potentially in combination with new module components that need to be learned.
4.1 DYNAMIC NEURAL VOTING
We define a meta-controller that assigns weights to each layer in each module in M. Let p_ω^i be the weight associated with the ith layer in module ω. These weights are probabilistic on a per-layer basis, e.g. p_ω^i ≥ 0 and Σ_ω p_ω^i = 1, and can be interpreted as the probability of the controller selecting the ith layer l_ω^i for use in the execution graph, with distribution π^i = {p_ω^i}. For such an assignment of weights, the composite execution graph defined by the meta-controller is generated by computing the sum of the activations of all the components at layer i weighted by the probabilities p_ω^i. These values are then passed on to the next layer where this process repeats. Mathematically, the composite layer at stage i can be expressed as
l̃_M^i = Σ_ω p_ω^i M_ω^i(l̃_M^{i−1}) = E_{π^i}[M_ω^i(l̃_M^{i−1})].    (6)
where M_ω^i(·) is the operator that computes the ith layer of module ω, and l̃^0 := ψ(x_t) is the original encoded input state.
The question now is, where do these probabilistic weights come from? The core of our procedure is a dynamic neural voting process in which the controller network learns a Boltzmann distribution over module activations to maximize reward prediction accuracy. This process is performed at each module layer, where the module weightings for a given layer are conditioned on the results of voting at the previous layer. That is,
p^i = softmax[W^i(⊕_ω M_ω^i(l̃_M^{i−1})) + b^i]    (7)
where p^i = (p_0^i, p_1^i, ..., p_Ω^i) are the module weights at layer i, ⊕ is concatenation, and W^i ∈ R^{(Ω·L)×Ω} is a learnable weight matrix of the controller.
This voting procedure operates in an online fashion, such that the controller is continuously learning its meta-policy while the agent is taking actions. As defined, the meta-controller constitutes a fully-differentiable neural network and is learned by gradient descent online.
A useful refinement of the above mechanism involves voting across the units of M. Specifically, the meta-controller now assigns probabilistic weights p_ω^{i,j} to neuron n_ω^{i,j} (the jth unit in layer i of module ω). In contrast to the layer-voting scheme, the dynamically generated execution graph computed by the meta-controller is now composed of composite neurons with activations:
ñ_M^{i,j} = Σ_ω p_ω^{i,j} M_ω^{i,j}(l̃_M^{i−1}) = E_{π^{i,j}}[M_ω^{i,j}(l̃_M^{i−1})].    (8)
which are concatenated to form the composite layer l̃_M^i. The generalization of equation (7) to the single-unit voting scheme then becomes:
p^{i,j} = softmax[W^{i,j}(⊕_ω M_ω^{i,j}(l̃_M^{i−1})) + b^{i,j}]    (9)
where p^{i,j} = (p_0^{i,j}, p_1^{i,j}, ..., p_Ω^{i,j}) are the unit-level weights across modules, and W^{i,j} ∈ R^{Ω×Ω}.
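A PyTorch sketch of the layer-level voting step (eqs. (6)–(7)) is shown below; the module count, the layer width, and the assumption that all modules share a common layer width L are illustrative.

```python
import torch
import torch.nn as nn

class LayerVotingController(nn.Module):
    """Layer-level voting sketch: softmax weights over Omega modules at one layer."""
    def __init__(self, n_modules, layer_width):
        super().__init__()
        self.vote = nn.Linear(n_modules * layer_width, n_modules)   # W^i, b^i of eq. (7)

    def forward(self, module_acts):
        # module_acts: list of Omega tensors, each of shape (batch, layer_width)
        stacked = torch.stack(module_acts, dim=1)                   # (batch, Omega, L)
        p = torch.softmax(self.vote(stacked.flatten(1)), dim=-1)    # eq. (7): (batch, Omega)
        return (p.unsqueeze(-1) * stacked).sum(dim=1)               # eq. (6): composite layer

# Unit-level voting (eqs. 8-9) applies the same idea per neuron, with one small
# Omega x Omega weight matrix per unit instead of a single (Omega*L) x Omega matrix.
```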
Empirically, we find that the initialization schemes of the learnable controller parameters are an important consideration in the design, and that two specialized transformations also contribute slightly to its overall efficiency. For details on these, please refer to the Supplement.
The dynamic neural voting mechanism achieves meta-control through a neural network optimized online via gradient descent while the modules are solving tasks, rather than a genetic algorithm that operates over a longer timescale as in the work of Fernando et al. (2017). Moreover, in contrast to the work of Rusu et al. (2016) the voting mechanism eliminates the need for fully-connected adaptation layers between modules, thus substantially reducing the number of parameters required for transfer.
4.2 SWITCHING EXPERIMENTS
Figure 7: Dynamic Neural Voting quickly corrects for “no-switch” switches. Although a new module is allocated for each task transition, if the new task is identical to the original task, the controller quickly learns to reuse the old module components. Top: post-switching learning curve for the EMS module on a binary stimulus-response task, after being trained on the same task. For clarity, only the Layer Voting method is compared against a baseline module trained from scratch. Bottom: fraction of the original module reused over the course of post-switch learning, calculated by averaging the voting weights of each layer in the original module.
“No-switch” switches: Our first experiments tested how the dynamic neural voting mechanism would respond to “no-switch” switches, i.e. ones in which although a switch cue was given and a new module allocated, the environment policy’s task did not actually change (Fig 7). We find that in such cases, performance almost instantly approaches pre-switch levels (e.g. there is very little penalty in attempting an unnecessary switch). Moreover, we find that the weightings the controller applies to the new module are low: in other words, the system recognizes that no new module is needed and acts accordingly by concentrating its weights on the existing module. These results show that, while we formally assume that the agent is cued as to when task switches occur, in theory it could implement a completely autonomous monitoring policy, in which the agent simply runs the allocation procedure if a performance “anomaly” occurs (e.g. a sustained drop in reward). If the system determines that the new module was unneeded, it could simply reallocate the new module for a later task switch. In future work, we plan to implement this policy explicitly.
“Real” switches: We next tested how the dynamic voting controller handled switches in which the environment policy substantially changed after the switching cue. Using both the EMS module and (for control) the large fully-ablated module as described in § 3.2, the dynamic neural voting controller was evaluated on 15 switching experiments using multiple variants of SR and MTS tasks. Specifically, these 15 switches cover a variety of distinct (but not mutually exclusive) switching types including:
• addition of new classes to the dataset (switch indexes 2, 7, 11 in the table of Fig. 8)
• replacing the current class set entirely with a new non-overlapping class set (switch ids. 1, 3)
• addition of visual variability to a previously less variable task (switch id. 6)
• addition of visual interface elements e.g. new buttons (switch id. 8)
• transformation of interface elements e.g. screen rotation (switch ids. 12, 13, 14, 15)
• transitions between different task paradigms e.g. SR to MTS tasks and vice-versa (switch ids. 4, 5, 9, 10).
Controller hyperparameters were optimized in a cross-validated fashion (see Appendix G.1), and optimizations for three different initialization seeds were run to obtain mean and standard error.
Figures 8a and b show characteristic post-switch learning curves for the EMS module for both the Layer Voting and Single-Unit Voting methods. Additional switching curves can be found in the Supplement. Cumulative reward gains relative to learning from scratch were quantified by the Relative Gain in AUC: RGain = (AUC(M_switch) − AUC(M)) / AUC(M), where M is the module trained from scratch on the second task and M_switch is the module transferred from an initial task using the dynamic voting controller.
Figure 8: Task switching with Dynamic Neural Voting. Post-switching learning curves for the EMS module on the 4-way Quadrant SR task after learning a. a 2-way SR task and b. a 4-way MTS task with 4 match screen class templates. Both the Layer Voting method and Single-Unit Voting method are compared against a baseline module trained on the second task from scratch. Across all twelve task switches, we evaluate the Relative Gain in AUC over baseline (RGain) using both voting methods for c. the EMS module and d. the large-sized fully-ablated late bottleneck MLP. e. Transfer Gain (TGain) metrics are compared for both module types for each of the voting mechanisms. Colors are as in c. (EMS module) and d. (fully-ablated module). Base Task → Switch Task for the 15 switches:
1. 2-way SR → 2-way SR, new classes
2. 2-way SR → 4-way double-binary SR
3. 2-way stationary MTS → 2-way stationary MTS, new classes
4. 2-way SR → 2-way stationary MTS
5. 2-way stationary MTS → 2-way SR
6. 2-way stationary MTS → 2-way vert-motion horiz-flip MTS
7. 2-way vert-motion horiz-flip MTS → 4-way 2-shown vert-motion MTS
8. 4-way 2-shown vert-motion MTS → 4-way 4-shown permuted MTS
9. 4-way double-binary SR → 4-way 4-shown stationary MTS
10. 4-way 4-shown stationary MTS → 4-way quadrant SR
11. 2-way SR → 4-way quadrant SR
12. 4-way double-binary SR → 4-way quadrant SR
13. 2-way SR → class reversal
14. 2-way SR → squeezed map
15. 2-way SR → 90° map rotation
We find that the dynamic voting controller allows for rapid positive transfer of both module types across all 15 task switches, and the general Single-Unit voting method is a somewhat better transfer mechanism than the Layer Voting method (Fig. 8c). Both the EMS module and the large fully-ablated module, which was shown to be inefficient on single-task performance in § 3.2, benefit from dynamic neural voting (Fig. 8d).
EMS modules are more “switchable”: To quantify how fast switching gains are realized, we use the Transfer Gain: TGain = ∆_max / T_{∆max}, where T_{∆max} = argmax_t(∆_t) is the time where the maximum amount of reward difference between M_switch and M occurs, and ∆_max is the reward difference at that time. Qualitatively, a high score on the Transfer Gain metric indicates that a large amount of relative reward improvement has been achieved in a short amount of time (see Figure S7 for a graphical illustration of the relationship between the RGain and TGain metrics). While both the EMS and large fully-ablated modules have positive Transfer Gain, EMS scores significantly higher on this metric, i.e. is significantly more “switchable” than the large fully-ablated module (Fig. 8e). We hypothesize that this is due to the EMS module being able to achieve high task performance with significantly fewer units than the larger fully-ablated module, making the former easier for the dynamic neural voting controller to operate on.
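For reference, both switching metrics can be computed from the two reward-versus-training curves as in the sketch below; approximating AUC by a simple sum is an illustrative simplification.

```python
import numpy as np

def switching_metrics(reward_switch, reward_scratch):
    """Compute RGain and TGain from reward curves for the transferred and scratch modules."""
    reward_switch = np.asarray(reward_switch, dtype=float)
    reward_scratch = np.asarray(reward_scratch, dtype=float)
    auc_switch, auc_scratch = reward_switch.sum(), reward_scratch.sum()
    r_gain = (auc_switch - auc_scratch) / auc_scratch     # Relative Gain in AUC
    delta = reward_switch - reward_scratch
    t_max = int(np.argmax(delta))                         # T_Delta_max
    t_gain = delta[t_max] / max(t_max, 1)                 # Delta_max / T_Delta_max
    return r_gain, t_gain
```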
5 CONCLUSION AND FUTURE DIRECTIONS
In this work, we introduce the TouchStream environment, a continual reinforcement learning framework that unifies a wide variety of spatial decision-making tasks within a single context. We describe a general algorithm (ReMaP) for learning light-weight neural modules that discover implicit task interfaces within this large-action/state-space environment. We show that a particular module architecture (EMS) is able to remain compact while retaining high task performance, and thus is especially suitable for flexible task learning and switching. We also describe a simple but general dynamic task-switching architecture that shows substantial ability to transfer knowledge when modules for new tasks are learned.
A crucial future direction will be to expand insights from the current work into a more complete continual-learning agent. We will need to show that our approach scales to handle dozens or hundreds of task switches in sequence. We will also need to address issues of how the agent determines when to build a new module and how to consolidate modules when appropriate (e.g. when a series of tasks previously understood as separate can be solved by a single smaller structure). It will also be critical to extend our approach to handle visual tasks with longer horizons, such as navigation or game play with extended strategic planning, which will likely require the use of recurrent memory stores as part of the feature encoder.
From an application point of view, we are particularly interested in using techniques like those described here to produce agents that can autonomously discover and operate the interfaces present in many important real-world two-dimensional problem domains, such as on smartphones or the internet (Grossman, 2007). We also expect many of the same spatially-informed techniques that enable our ReMaP/EMS modules to perform well in the 2-D TouchStream environment will also transfer naturally to a three-dimensional context, where autonomous robotics applications (Devin et al., 2016) are very compelling.
SUPPLEMENTARY MATERIAL
A TASK VARIANTS
The EMS module and all ablation controls were evaluated on a suite of 13 stimulus-response, match-to-sample, localization, and MS-COCO MTS variants:
1. 2-way SR - standard binary SR task
2. 4-way double binary SR - four class variant of SR, where each class is assigned either to the right or left half of the action space
3. 4-way quadrant SR - four class variant of SR, where each class is assigned to only a quadrant of the action space
4. 2-way stationary MTS - standard binary MTS task with stereotyped and non-moving match screens
5. 2-way stationary horiz-flip MTS - two class variant MTS task where the match templates’ horizontal placement is randomly chosen, but confined within the same vertical plane
6. 2-way stationary vert-motion MTS - two class variant MTS task where the match templates’ vertical position is randomly chosen, but each class is confined to a specific side
7. 2-way stationary vert-motion horiz-flip MTS - two class variant MTS task where the match templates’ positions are completely random
8. 4-way 2-shown MTS - four class variant MTS task where only two class templates are shown on the match screen (appearing with random horizontal location as well)
9. 4-way 2-shown vert-motion MTS - same as above, but with random vertical motion for the templates
10. 4-way 4-shown stationary MTS - four class variant MTS task where all four class templates are shown on the match screen, but with fixed positions.
11. 4-way 4-shown permuted MTS - same as above, but with randomly permuted locations of all match templates
12. Localization - Localization task
13. MS-COCO MTS - 80-way MTS task using the MS-COCO detection challenge dataset, where match screens are randomly sampled scenes from the dataset
B EXPERIMENT DETAILS AND DATASETS
Stimulus-Response Experiment Details: Image categories used are drawn from the Image-Net 2012 ILSVR classification challenge dataset Deng et al. (2009). Four unique object classes are taken from the dataset: Boston Terrier, Monarch Butterfly, Race Car, and Panda Bear. Each class has 1300 unique training instances, and 50 unique validation instances.
Match-To-Sample Experiment Details: Sample screen images are drawn from the same Image-Net class set as the Stimulus-Response tasks. One face-centered, unobstructed class instance is also drawn from the Image-Net classification challenge set and used as a match screen template image for that class. Class template images for the match screen were held fixed at 100x100 pixels. For all variants of the MTS task, we keep a six pixel buffer between the edges of the screen and the match images, and a twelve pixel buffer between the adjacent edges of the match images themselves. Variants without vertical motion have the match images vertically centered on the screen.
Localization Experiment Details: The Localization task uses synthetic images containing a single main salient object placed on a complex background (similar to images used in Yamins & DiCarlo (2016); Yamins et al. (2014)). There are a total of 59 unique classes in this dataset. In contrast to other single-class localization datasets (e.g. Image-Net) which are designed to have one large, face-centered, and centrally-focused object instance and for which a trivial policy of “always poke in image corners” could be learned, this synthetic image set offers larger variance in instance scale, position, and rotation so the agent is forced into learning non-trivial policies requiring larger precision in action selection.
MS-COCO MTS Experiment Details: This task uses the entire MS-COCO detection challenge dataset Lin et al. (2014). On every timestep, a sample screen is chosen from one of the 80 MS-COCO classes. These are constructed to be large, unobstructed, face-centered representations of the class. For the match screen, we sample a random scene from MS-COCO containing any number of objects, but containing at least a single instance of the sample class. The agent is rewarded if its action is located inside any instance of the correct class. Both modules sample actions from a low-temperature Boltzmann policy from eq. (4), which was empirically found to result in more precise reward map prediction.
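For concreteness, the snippet below sketches what sampling from such a low-temperature Boltzmann policy over a predicted reward map could look like. It is an illustrative numpy sketch, not the authors' implementation; the map size, temperature value, and function name are assumptions.

```python
# A minimal sketch (not the paper's code) of sampling a screen location from a
# low-temperature Boltzmann policy over predicted reward values.
import numpy as np

def boltzmann_sample(reward_map, temperature=0.05, rng=None):
    """Sample a (row, col) action with probability proportional to
    exp(reward / temperature); a low temperature concentrates mass on the
    highest-reward locations."""
    rng = np.random.default_rng() if rng is None else rng
    logits = reward_map / temperature
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    flat_idx = rng.choice(probs.size, p=probs.ravel())
    return np.unravel_index(flat_idx, reward_map.shape)

# Usage: a toy 4x4 "reward map" with a single high-reward location.
toy_map = np.zeros((4, 4))
toy_map[2, 1] = 1.0
print(boltzmann_sample(toy_map))           # almost always (2, 1)
```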
C MODULES
C.1 UNITS PER LAYER
Table S1 aggregates the number of units per layer for the EMS and ablated modules that were used when conducting single-task and task-switching experiments. Only fully-connected modules’ layer sizes are shown here. For details on the convolutional bottleneck EMS module, please refer to C.2.
Table S1: Number of units per layer for investigated modules
Base-task   EMS   No symm   No Mult   No mult/symm   None (small)   None (medium)   None (large)
SR            8         8         8              8              8            128            512
MTS          32        32        32             32             32            128            512
LOC         128       128       128            128            128            512           1024
[Figure S1 heatmap: rows are the 12 task variants (2-way SR; 4-way double binary SR; 4-way quadrant SR; 2-way stationary MTS; 2-way vert-motion MTS; 2-way horiz-flip MTS; 2-way motion/flip MTS; 4-way 2-shown MTS; 4-way 2-shown vert-motion MTS; 4-way 4-shown stationary MTS; 4-way 4-shown permuted MTS; Localization), columns are the EMS module and the 23 ablation variants, and the color scale is Normalized Validation AUC from 0.3 to 1.0.]
Figure S1: Exhaustive module performance study of the EMS module and 23 ablation control modules, measured as the Area Under the Curve for all SR, MTS, and LOC task variants. Shown is the AUC normalized to the highest performing module in a task. Results in fig. 4 have further averaged this over the vertical task axis, and report only a salient subset of the ablations.
C.2 THE CONVOLUTIONAL-EMS MODULE
This is a "Convolutional Bottleneck" extension of the EMS module shown in the paper, where skip connections link the conv5 and the FC6 representation of the visual backbone. Here, the "scenelevel" representation stored in the FC6 ReMaP memory buffer is tiled spatially to match the present convolution dimensions (here 14x14), and concatenated onto its channel dimension. A series of 1x1 convolutions plays the role of a shallow visual bottleneck, before the activations are vectorized and concatenated with A as input to the CReS layers of the standard EMS module. The results in the paper are shown for a bottleneck consisting of a single tanh and two CReS convolutions, with 128 units each. The Downstream layers use 128 units each as well.
The motivation for the convolutional bottleneck is that lower-level features are useful for complex spatial tasks such as Localization and Object Detection, and hence may result in a more precise policy. By tiling the entire scene-level representation along the convolution layer’s channel dimension, a form of multiplicative template-matching is possible between objects that must be memorized (e.g. MS-COCO MTS templates) and what is inside the present scene.
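The following numpy sketch illustrates the tiling-and-concatenation idea described above (scene-level FC6 vector broadcast to the conv5 grid, followed by 1x1 convolutions). It is a schematic approximation under assumed shapes; in particular, a plain ReLU stands in for the tanh/CReS nonlinearities of the actual bottleneck.

```python
# Schematic sketch of the "convolutional bottleneck": the scene-level FC6 vector
# is tiled to the conv5 spatial grid, concatenated on the channel axis, and
# passed through 1x1 convolutions. Shapes and nonlinearities are illustrative.
import numpy as np

rng = np.random.default_rng(0)
H = W = 14
conv5 = rng.standard_normal((H, W, 512))   # spatial feature map
fc6 = rng.standard_normal(4096)            # scene-level vector from the memory buffer

# Tile the scene-level vector at every spatial position and concatenate channels.
fc6_tiled = np.broadcast_to(fc6, (H, W, fc6.size))
x = np.concatenate([conv5, fc6_tiled], axis=-1)        # (14, 14, 4608)

def conv1x1(x, out_channels, rng):
    """A 1x1 convolution is a per-pixel linear map over channels."""
    w = rng.standard_normal((x.shape[-1], out_channels)) * 0.01
    return np.einsum('hwc,co->hwo', x, w)

h = np.maximum(conv1x1(x, 128, rng), 0.0)  # bottleneck layer 1 (ReLU stand-in)
h = np.maximum(conv1x1(h, 128, rng), 0.0)  # bottleneck layer 2
features = h.reshape(-1)                   # vectorized, then concatenated with actions downstream
print(features.shape)                      # (14*14*128,)
```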
D EXHAUSTIVE ABLATION STUDY
In all, we investigated 23 distinct ablations on the EMS module, across all twelve task variants outlined in Sec. A (Fig. S1). Symmetry ablations replace CReS with the activation x ↦ ReLU(x) ⊕ x². Multiplicative ablations are denoted by specifying the nonlinearity used in place of CReS (where this is one of ReLU, tanh, sigmoid, eLU Clevert et al. (2015), or CReLU Shang et al. (2016)). This additionally includes one partial symmetry ablation (denoted “partial symm”) where only the visual bottleneck is symmetric, and one which ablates the ReLU from the “no symm” module (denoted “no symm/partial-mult”).
Table S2: Module learning rates
Tasks (columns, left to right): 2-way SR; 4-way double binary SR; 4-way quadrant SR; 2-way stationary MTS; 2-way vert-motion MTS; 2-way horiz-flip MTS; 2-way motion/flip MTS; 4-way 2-shown MTS; 4-way 2-shown vert-motion MTS; 4-way 4-shown stationary MTS; 4-way 4-shown permuted MTS; Localization.

EMS:                    10^-3, 10^-3, 10^-3, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 10^-4
Partial symm:           10^-3, 10^-3, 10^-3, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 5·10^-4, 10^-4
No symm:                10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 2·10^-4, 10^-4
No symm/partial mult:   10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 2·10^-4, 10^-4
No mult/symm ReLU:      10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4
No mult/symm tanh:      10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-4, 10^-4
No mult/symm sig:       10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-4, 10^-4
No mult/symm eLU:       10^-3, 10^-3, 10^-4, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4
No mult/symm CReLU:     10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4
None ReLU (small):      10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4
None ReLU (medium):     10^-3, 10^-3, 10^-3, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4
None ReLU (large):      10^-3, 10^-3, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4
None tanh (small):      10^-4, 10^-4, 10^-4, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-4, 10^-4
None tanh (medium):     10^-4, 10^-4, 10^-4, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-4, 10^-4
None tanh (large):      10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4
None sig (small):       10^-4, 10^-4, 10^-4, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-3, 10^-4
None sig (medium):      10^-4, 10^-4, 10^-4, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-4, 10^-4
None sig (large):       10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4
None eLU (small):       10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4
None eLU (medium):      10^-3, 10^-3, 10^-3, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-3, 10^-4, 10^-4
None eLU (large):       10^-4, 10^-4, 10^-3, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4
None CReLU (small):     10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4, 10^-3, 10^-3, 10^-3, 10^-3, 10^-3, 10^-4
None CReLU (medium):    10^-3, 10^-3, 10^-3, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-3, 10^-4, 10^-4
None CReLU (large):     10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4, 10^-4
D.1 HYPERPARAMETERS
Learning rates for the ADAM optimizer were chosen on a per-task basis through cross-validation on a grid spanning [10^-4, 10^-3] for each architecture. Values used in the present study may be seen in Table S2.
E ADDITIONAL MS-COCO REWARD MAPS
Five additional reward map examples for the MS-COCO MTS task are provided in Figure S2. Examples are plotted over the course of learning.
F ADDITIONAL LEARNING CURVES
F.1 SINGLE-TASK ABLATION EXPERIMENTS
Learning trajectories for seven additional tasks are provided in Figure S3. Modules capable of convergence on a task were run until this was achieved, but AUC values for a given task are calculated at the point in time when the majority of models converge.
F.2 DYNAMIC VOTING CONTROLLER AND EMS MODULE TASK-SWITCHING EXPERIMENTS
Additional trajectories for ten unshown switching curves are provided in Figure S4.
G DYNAMIC VOTING CONTROLLER AUGMENTATIONS
G.1 LEARNABLE PARAMETER INITIALIZATIONS
Here we describe the weight initialization scheme that was found to be optimal for use with the dynamic voting controller. For simplicity, consider the layer-voting mechanism, with learnable weight matrices W^i and biases b^i.
[Figure S2 panels: Sample and Match screens with predicted Reward Maps, shown at training episodes (in thousands) ranging from 64 to 8384.]
Figure S2: Examples of the emergence of decision interfaces in MSCOCO MTS Reward map predictions over the course of training for 5 different object classes.
[Figure S3 panels (reward vs. training episodes): 2-way SR; 4-way double binary SR; 2-way stationary MTS; 2-way horiz-flip MTS; 2-way vert-motion MTS; 4-way 2-shown MTS; 4-way 4-shown stationary MTS. Legend: EMS, No symm, No mult, No mult/symm, None (small), None (medium), None (large).]
Figure S3: Additional Single-task performance ablation Learning curves. Seven learning curves shown for task variants not seen in the main text body. Shown are the same ablations as the main text.
[Figure S4 panels (reward vs. training episodes): 2-way SR to 2-way SR new classes; 2-way SR to 4-way double binary SR; 2-way stationary MTS to 2-way stationary MTS new classes; 2-way SR to 2-way stationary MTS; 2-way stationary MTS to 2-way SR; 2-way stationary MTS to 2-way vert-motion horiz-flip MTS; 2-way vert-motion horiz-flip MTS to 4-way 2-shown vert-motion MTS; 4-way 2-shown vert-motion MTS to 4-way 4-shown permuted MTS; 4-way double binary SR to 4-way 4-shown stationary MTS; 4-way double binary SR to 4-way quadrant SR. Legend: EMS (from scratch), Single-Unit Voting, Layer Voting.]
Figure S4: Additional Switching curves. Ten additional learning curves for unshown task switches in main text. Shown are the both Single-Unit and Layer Voting implementations of the dynamic voting controller with the EMS module. “EMS” denotes a module trained on the second task from scratch.
The intended biasing scheme is achieved through initializing the elements of these parameters to:

W^i_ω ∼ |N(µ0, 0.001)| if i = 1, ω < Ω;  |N(µ1, 0.001)| if i > 1, ω < Ω;  |N(0.01, 0.001)| if ω = Ω    (10)

b^i_ω = b0 if i = 1, ω < Ω;  b1 if i > 1, ω < Ω;  0.1 if ω = Ω    (11)
This initialization technique was also generalized for use with the single-unit voting mechanism.
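As an illustration, the snippet below sketches one way to realize the initialization of Eqs. (10)-(11) in numpy. It assumes 0.001 is the standard deviation of the normal draws and that ω indexes task modules with Ω denoting the newest module; these details are interpretations of the text rather than a confirmed implementation.

```python
# A minimal sketch of the biased voting-weight initialization of Eqs. (10)-(11).
import numpy as np

def init_voting_params(num_layers, num_modules, mu0=0.01, mu1=0.01,
                       b0=0.1, b1=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    W = np.empty((num_layers, num_modules))
    b = np.empty((num_layers, num_modules))
    for i in range(num_layers):
        for w_idx in range(num_modules):
            newest = (w_idx == num_modules - 1)       # omega == Omega
            if newest:
                W[i, w_idx] = abs(rng.normal(0.01, 0.001))
                b[i, w_idx] = 0.1
            elif i == 0:                              # i == 1 in the paper's 1-based indexing
                W[i, w_idx] = abs(rng.normal(mu0, 0.001))
                b[i, w_idx] = b0
            else:                                     # i > 1
                W[i, w_idx] = abs(rng.normal(mu1, 0.001))
                b[i, w_idx] = b1
    return W, b

W, b = init_voting_params(num_layers=3, num_modules=2)
print(W.round(3), b, sep="\n")
```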
For the switching experiments presented in Section 4.2, we sweep the hyperparameters on a narrow band around the default scheme. The candidate values are: µ0 ∈ {0.01, 0.005}, b0 ∈ {0.1, 0.01}, µ1 ∈ {0.01, 0.02}, and b1 ∈ {0.1, 0.2, 0.5, 1.0}.
G.2 TARGETED TRANSFORMATIONS
Two additional switching mechanisms were added to the controller to augment its ability to switch between tasks which are remappings of the action space or reward policy of a preexisting module.
G.2.1 ACTION TRANSFORMATIONS
We note that efficient modules are those which can effectively produce a minimal representation of the interaction between the action space A and the observation xt. If the agent’s optimal action space shifts to A′ while the remainder of the task context remains fixed, the controller should allow for a rapid targeted remapping A ↦ A′. Since we formulate the modules as ReMaP Networks, and A is an input feature basis, we can achieve remappings of this form through a fully-connected transformation:

a′τ = f(Wa aτ + b)    (12)

where aτ = [ht−kb:t−1, at] is the vector of action histories, and Wa and b embed aτ into the new action space A′ using only a small number of learnable parameters.
Pseudo Identity-Preserving Transformation. In practice, we initialize the parameters in eq. (12) such that the transformation is pseudo identity-preserving, meaning that the representation learned at this level in the original module is not destroyed prior to transfer. This is done by initializing Wa to be an identity matrix I|aτ| with a small amount of Gaussian noise ∼ N(0.0, σ²) added to break symmetry; b is initialized to be a vector of ones of size |aτ|.
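A minimal numpy sketch of this pseudo identity-preserving action remapping is shown below; the action-history length and σ are illustrative, and f is taken to be the identity (the non-activated linear map reported to work best in Sec. G.3).

```python
# Sketch of the action-space remapping of Eq. (12) with pseudo identity-preserving
# initialization (W_a ~ I + small Gaussian noise, b a vector of ones).
import numpy as np

rng = np.random.default_rng(0)
a_tau = rng.uniform(-1, 1, size=6)            # concatenated action history [h_{t-k:t-1}, a_t]

sigma = 0.01
W_a = np.eye(a_tau.size) + rng.normal(0.0, sigma, size=(a_tau.size, a_tau.size))
b = np.ones(a_tau.size)

a_prime = W_a @ a_tau + b                     # f taken as the identity (plain linear map)
print(np.abs(a_prime - (a_tau + 1.0)).max())  # near 0: original structure preserved up to the bias shift
```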
G.2.2 REWARD MAP TRANSFORMATIONS
Each of the kf maps mt(x) reflects the agent’s uncertainty in the environment’s reward policy. If the task context remains stationary, but the environment transitions to a new reward schedule R′ that no longer aligns with the module’s policy π, the controller could respond to this transition by, e.g., containing a mechanism allowing for targeted transformation of m(x) and hence also π.
One complication that arises under ReMaP is that, since each task-module learns its optimal action space internally, the m(x) are in the basis of R rather than A. Therefore, transformations on the map distribution must also re-encode A before mapping to R′. In this work, we investigate a shallow “adapter” neural network that lives on top of the existing module and maps R ↦ R′. Its first and second layers are defined by

l1(x) = f(W1 [m(x) ⊙ g(aτ), aτ] + b1)    (13)
m(x)′ ∝ W2 l1 + b2    (14)

where g(aτ) is a similar transformation on A as above, ⊙ denotes elementwise multiplication, W1 is a learnable matrix embedding into a hidden state, and W2 ∈ R^{|l1|×|R′|} is a learnable matrix embedding into R′.
Pseudo Identity-Preserving Transformation. Similar to the transformation on the action space, we modify the reward-map transformation to be pseudo identity-preserving as well. This is done by modifying eq. (13) such that the original maps are concatenated onto the beginning of the transformation input vector:

l1(x) = f(W1 [m(x), m(x) ⊙ g(aτ), aτ] + b1)    (15)

The intended map-preserving transformation is accomplished by initializing W1 and W2 as:

W^{(i,j)} ∼ 1.0 + N(0.0, ε) if i = j, i < |R|;  N(0.0, ε) otherwise    (16)
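The following numpy sketch illustrates the adapter of Eqs. (13)-(16) with the identity-preserving initialization. The layer sizes, the use of ReLU in place of CReS, the form of g(aτ) (a small learned map from the action history to the reward-map dimension), and the symbol ε are all assumptions made for the sake of a runnable example.

```python
# Sketch of the reward-map adapter: maps are gated by a transformed action history,
# passed through a small hidden layer, and re-projected; the near-identity
# initialization makes the adapter initially reproduce the original maps.
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3
n_maps, n_act, n_hidden = 3, 6, 4              # |R|, |a_tau|, hidden units

m = rng.uniform(0, 1, n_maps)                  # current reward-map values at a point x
a_tau = rng.uniform(-1, 1, n_act)

def relu(x):
    return np.maximum(x, 0.0)

W_g = rng.normal(0, 0.1, (n_maps, n_act))
g = relu(W_g @ a_tau)                          # g(a_tau), mapped to the size of m(x)
inp = np.concatenate([m, m * g, a_tau])        # [m(x), m(x) ⊙ g(a_tau), a_tau]

def identity_preserving(rows, cols, lead):
    """~1.0 on the first `lead` diagonal entries, small Gaussian noise elsewhere (Eq. 16)."""
    W = rng.normal(0.0, eps, (rows, cols))
    for i in range(lead):
        W[i, i] += 1.0
    return W

W1 = identity_preserving(n_hidden, inp.size, min(n_hidden, n_maps))
b1 = np.full(n_hidden, 0.01)
W2 = identity_preserving(n_maps, n_hidden, n_maps)

l1 = relu(W1 @ inp + b1)
m_new = W2 @ l1                                # un-normalized new maps m'(x)
print(m, m_new.round(3), sep="\n")             # m_new starts close to m (up to the small bias)
```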
G.3 TARGETED TRANSFORMATION HYPERPARAMETERS
Both of the targeted transformations have several hyperparameters. We conducted a grid search to optimize these in a cross-validated fashion, on a set of test task switches designed to be solved by one of the targeted transformations (Fig. S5). Each was conducted independently of the dynamic voting controller, and independently of the other transformation. Optimal hyperparameters found in these experiments were fixed for use in the integrated dynamic voting controller, and were not further optimized afterwards.
Action Transformation Hyperparameters. We conducted three tests using the stimulus-response paradigm: class reversal (in which the left class becomes the right class and vice-versa), a horizontal rotation of the reward boundaries (such that right becomes up and left becomes down), and a “switch” to the original task (intended to test the identity-preserving component).
In this work, we find that a single, non-activated linear transformation (f in (12)) is optimal for this new state-space embedding, using 2·kb units, and initialized such that the identity-preserving transformation weights have σ = 0.01. The learning rate for this transformation was found to be optimal at 0.1.
Reward Map Transformation Hyperparameters. We conducted two tests using the stimulus-response paradigm: a “squeezing” task (where there is no longer any reward dispensed on the lower half of the screen), and a “switch” to the original task (intended to test the identity-preserving component).
In this work, we find the optimal activations in eq. (15) to be f(·) = CReS and g(·) = ReLU, with 4 units in the hidden layer. The ε in the weight initialization scheme was found to be optimal at 0.001, with an initial bias of 0.01. The optimal learning rate for this transformation was found to be 0.01.
G.4 TRANSFORM ABLATION
A study was conducted to assess the relative benefit of the targeted transformations (Fig. S6); it found that the primary contribution of the dynamic neural controller was in fact the voting mechanism (although the transformations did supplement this as well).
G.5 DEPLOYMENT SCHEME OF TASK MODULES
When cued that a task transition has occurred, the controller freezes the learnable parameters of the old task-module and deploys a new, uninitialized task-module. The controller then initializes the action and reward-map transformation networks described in G.2 on top of the old module. These transformations are also voted on inside the dynamic neural controller at every timestep.
H SWITCHING METRICS
Figure S7 graphically illustrates the metrics used inside the paper to quantify switching performance: RGain and TGain.
[Figure S5 panels: (a) Environment Reward Policy Reversal, (b) Environment Reward Policy Rotation, (c) Environment Reward Policy Squeeze; each panel shows the stimulus-response image, the reward map, and reward over training trials around the task switch.]
Figure S5: Action and reward map transformation switch examples. Three task switching experiments were performed to optimize the hyperparameters of the targeted transformations that augment the dynamic neural voting controller. These switches are also retested in the fully-integrated meta-controller and shown in the original switching result figure. a. Binary stimulus-response class reversals, where the left class becomes the right class, and vice-versa. b. Rotations of the binary stimulus-response reward boundaries. c. A “squeezing” of the binary stimulus-response reward boundaries, where no reward is given on the new task on the bottom half of the screen, regardless of class shown.
[Figure S6: Relative AUC Gain (roughly −0.1 to 0.5) across the 15 switching scenarios, with the targeted transformations ablated.]
Figure S6: Targeted controller transform ablation. Relative AUC Gain for the EMS module over the same switching scenarios in the paper, but with the targeted transformations ablated.
[Figure S7: reward vs. training episodes (in thousands), annotated with Δmax and tΔmax; the metrics are TGain = Δmax and RGain = AUC(Δt)/AUC(Bt).]
Figure S7: Illustration of switching performance metrics. We quantify the switching performance of the dynamic neural controller and task-modules by two metrics: “relative gain in AUC” (ratio of green to purple shaded regions), and “transfer” gain (difference of reward at T∆max). Relative AUC measures the overall gain relative to scratch, and the transfer gain measures the speed of transfer. Curve shown is the EMS module with Single-Unit voting method evaluated on a switch from a 4-way MTS task with two randomly moving class templates to a 4-way MTS task with four randomly moving templates. | 1. How do the various components of the proposed method work together?
2. What is the significance of the example given in Section 3, and how does it relate to the rest of the paper?
3. Could the authors provide more intuition or examples to help readers understand the paper's concepts better?
4. Are there any standard interpretations or notations for the elements described in the paper?
5. How sensitive is the performance of the overall system to the details of its construction? | Review | Review
Reading this paper feels like reading at least two closely-related papers compressed into one, with overflow into the appendix (e.g. one about the EMS module, one about the the recurrent voting, etc).
There were so many aspects/components, that I am not entirely confident I fully understood how they all work together, and in fact I am pretty confident there was at least some part of this that I definitely did not understand. Reading it 5-20 more times would most likely help.
For example, consider the opening example of Section 3. In principle, this kind of example is great, and more of these would be very useful in this paper. This particular one raises a few questions:
-Eq 5 makes it so that $(W \Psi)$ and $(a_x)$ need to be positive or negative together. Why use ReLu's here at all? Why not just $sign( (W \Psi) a_x) $? Multiplying them will do the same thing, and is much simpler. I am probably missing something here, would like to know what it is... (Or, if the point of the artificial complexity is to give an example of the 3 basic principles, then perhaps point this out, or point out why the simpler version I just suggested would not scale up, etc)
-what exactly, in this example, does $\Psi$ correspond to? In prev discussion, $\Psi$ is always written with subscripts to denote state history (I believe), so this is an opportunity to explain what is different here.
-Nitpick: why is a vector written as $W$? (or rather, what is the point of bold vs non-bold here?)
-a non-bold version of $Psi$, a few lines below, seems to correspond to the 4096 features of VGG's FC6, so I am still not sure what the bold version represents
-The defs/eqns at the beginning of section 3.1 (Sc, CReLu, etc) were slightly hard to follow and I wonder whether there were any typos, e.g. was CReS meant to refer directly to Sc, but used the notation ${ReLu}^2$ instead?
Each of these on its own would be easier to overlook, but there is a compounding effect here for me, as a reader, such that by further on in the paper, I am rather confused.
I also wonder whether any of the elements described, have more "standard" interpretations/notations. For example, my slight confusion propagated further: after above point, I then did not have a clear intuition about $l_i$ in the EMS module. I get that symmetry has been built in, e.g. by the definitions of CReS and CReLu, etc, but I still don't see how it all works together, e.g. are late bottleneck architectures *exactly* the same as MLPs, but where inputs have simply been symmetrized, squared, etc? Nor do I have intuition about multiplicative symmetric interactions between visual features and actions, although I do get the sense that if I were to spend several hours implementing/writing out toy examples, it would clarify it significantly (in fact, I wouldn't be too surprised if it turns out to be fairly straightforward, as in my above comment indicating a seeming equivalence to simply multiplying two terms and taking the resulting sign). If the paper didn't need to be quite as dense, then I would suggest providing more elucidation for the reader, either with intuitions or examples or clearer relationships to more familiar formulations.
Later, I did find that some of the info I *needed* in order to understand the results (e.g. exactly what is meant by a "symmetry ablation", how was that implemented?) was in fact in the appendices (of which there are over 8 pages).
I do wonder how sensitive the performance of the overall system is to some of the details, like, e.g. the low-temp Boltzmann sampling rather than identity function, as described at the end of S2.
My confidence in this review is somewhere between 2 and 3.
The problem is an interesting one, the overall approach makes sense, it is clear the authors have done a very substantial amount of work, and very diligently so (well-done!), some of the ideas are interesting and seem creative, but I am not sure I understand the glue of the details, and that might be very important here in order to assess it effectively. |
ICLR | Title
How noise affects the Hessian spectrum in overparameterized neural networks
Abstract
Stochastic gradient descent (SGD) forms the core optimization method for deep neural networks. While some theoretical progress has been made, it still remains unclear why SGD leads the learning dynamics in overparameterized networks to solutions that generalize well. Here we show that for overparameterized networks with a degenerate valley in their loss landscape, SGD on average decreases the trace of the Hessian of the loss. We also generalize this result to other noise structures and show that isotropic noise in the non-degenerate subspace of the Hessian decreases its determinant. In addition to explaining SGD's role in sculpting the Hessian spectrum, this opens the door to new optimization approaches that may confer better generalization performance. We test our results with experiments on toy models and deep neural networks.
1 INTRODUCTION
Deep neural networks have achieved remarkable success in the past decade on tasks that were out of reach prior to the era of deep learning. Yet fundamental questions remain regarding the strong performance of over-parameterized models and optimization schemes that typically involve only first-order information, such as stochastic gradient descent (SGD) and its variants.
Regarding generalization, it has been noted that flat minima with small curvature tend to generalize better than sharp minima (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017). This has been argued by nonvacuous PAC-Bayes bounds (Dziugaite & Roy, 2017) and Bayesian evidence (Smith & Le, 2018). But why does SGD bias learning towards flat minima? One possible explanation was proposed by Zhang et al. (2018) in terms of energy-entropy competition. Jastrzbski et al. (2018) also suggests that isotropic noise in SGD helps networks escape sharp minima with large Hessian determinant. However, previous theoretical analyses assume that minima are isolated and non-singular. In contrast, Sagun et al. (2017b) finds that most of the eigenvalues of the Hessian of the loss function at a minimum are close to zero, indicating highly degenerate minima. The degeneracy is further supported by results on mode connectedness (Garipov et al., 2018; Draxler et al., 2018). Furthermore, it is shown that minima found from different initializations and the same optimization scheme are connected with essentially no barriers between them. Nguyen (2019) also shows theoretically that for a class of deep overparameterized neural nets with piecewise linear activation functions, all of the global minima are connected within a unique and potentially very large global valley.
In this paper, we prove that for models whose loss has a "minimal valley" structure, defined below, optimization via SGD decreases the trace of the Hessian of the loss, by utilizing recent fluctuation-dissipation relations (Yaida, 2019). Furthermore, we derive the noise covariance matrix that would result in the reduction of other potentially desired quantities such as the Hessian determinant, leading towards the design of new optimization algorithms. We present experiments on toy models and deep neural networks to confirm our predictions.
2 MAIN THEOREM
In this section, we present our main theorem - how noise during optimization affects the Hessian of the loss function when the loss landscape locally takes the shape of a degenerate valley. More specifically, let N and n be the dimension of the total parameter space and non-degenerate space, respectively, and
consider a general loss function L(w) where w ∈ R^N are the model parameters.¹ Since the curvature around the minima can vary, we approximate the loss function in such a valley around a degenerate minimum with a modified quadratic form L(w) = L* + (1/2)(w − P(w))^T H(P(w)) (w − P(w)), where P is a function that projects a point w in parameter space to the nearest minimum, and H is the Hessian, a function of the location of the projected minimum. We have the following lemma:
Lemma 1. For an arbitrary point w and its neighborhood in the valley, there exists an orthogonal transformation Q and a translation vector v such that the loss function in the new coordinate system θ = Qw − v has the following form,

L(θ) = L* + (1/2) ∑_{i=1}^{n} θ_i^2 λ_i(θ_{n+1}, ..., θ_N) ,    (1)
where λis are the positive eigenvalues of the loss function Hessian for the non-degenerate space and depend on the position in the degenerate space. Also, note that the gradient descent equation is invariant under this transformation.
A detailed, constructive proof of this lemma can be found in Appendix A.1. Notice that this is a generalization of Morse’s Lemma (Callahan, 2010). In the rest of this section, we denote the nondegenerate and degenerate subspaces by θ̄ = (θ1, ..., θn)T and θ̂ = (θn+1, ..., θN )T, respectively. Similarly, the gradients of θ̄ and θ̂ are denoted by ∇̄ and ∇̂, respectively. Notice that at a minimum where θ̄ = 0, the λis are the only nonzero Hessian eigenvalues.
Next we provide a quick review of the relevant fluctuation-dissipation relation formalism for stochastic gradient descent (Yaida, 2019). We denote the loss for a random mini-batch B of training samples as L_B(θ). Clearly we have

[[∇L_B(θ)]]_{m.b.} = ∇L(θ) ,    (2)

where [[•]]_{m.b.} represents the average over all mini-batch realizations. If there exists a stationary state p_ss(θ̄) for θ̄, and the stationary average is defined as 〈O(θ̄)〉 ≡ ∫ dθ̄ p_ss(θ̄) O(θ̄), where O(θ̄) is any observable of θ̄, it is straightforward to derive the following master equation:

〈O(θ̄)〉 = 〈[[O[θ̄ − η∇̄L_B(θ)]]]_{m.b.}〉 .    (3)

We denote the two-point noise matrix as C̃_{i,j}(θ) ≡ [[∂_{θ_i}L_B(θ) ∂_{θ_j}L_B(θ)]]_{m.b.} and the noise covariance matrix as C_{i,j}(θ) ≡ C̃_{i,j}(θ) − ∂_{θ_i}L(θ) ∂_{θ_j}L(θ). Now we present our main theorem:
Theorem 1. When the loss can be locally approximated as in Equation 1, assuming that the non-degenerate space θ̄ is in a stationary state at time t and that the noise covariance matrix is aligned with the Hessian, we have 〈[[T_{t+1} − T_t]]_{m.b.}〉 ≤ 0, where T = ∑_{i=1}^{n} λ_i is the trace of the Hessian and T_t represents the trace at training step t. Equality holds when ∇L_B(θ) = ∇L(θ) or ∇̂T(θ̂) = 0.
Theorem 1 indicates that the change of the Hessian trace during SGD optimization is non-positive on average. A trivial condition for equality is optimization with gradient descent (GD) instead of its stochastic variant because the stationary state for noiseless optimization is a constant state with vanishing gradient. An alternate condition for equality is that the trace function of θ̄ reaches a minimum or saddle point.
Before proving Theorem 1, we will first analyze the assumptions made. We emphasize at the outset that we expect our assumptions to be valid only after an early phase of training corresponding to ∼ 10 epochs in our experiments.
2.1 EVIDENCE FOR THE ASSUMPTIONS MADE
In this subsection, we will analyze the assumptions in Theorem 1 and present some experiments that support their validity.
1 Here the non-degenerate space corresponds to the subspace of the Hessian with non-zero eigenvalues. Note that we do not require any specific relationship between n and N, e.g. n ≪ N.
2.1.1 MINIMAL VALLEY
Our theory relies on the existence of a degenerate valley in the loss function defined previously via Equation 1. In addition to degeneracy, we furthermore assume that there exists a region of connected degenerate minima in the landscape of the loss function, in particular for overparameterized models. SGD or its variants then selects one of the degenerate solutions. Such degeneracy has been observed empirically where most of the eigenvalues of the loss Hessian from various models and tasks are zero or close to zero (Sagun et al., 2017a;b). Furthermore, the connectedness is further supported by results of mode connectedness (Garipov et al., 2018; Draxler et al., 2018). Nguyen (2019) also shows theoretically that for a class of deep overparameterized neural nets with piecewise linear activation functions, all of the global minima are connected within a unique and potentially very large global valley.
Here we present additional experiments to support the existence of such minimal valleys, i.e. basins of attraction of connected, degenerate minima that can be locally described by Equation 1. These experiments will be also used to analyze and confirm our theory later. We consider classification of CIFAR10 with label-smoothing cross entropy (Szegedy et al., 2016)2, constant learning rate of 0.1, momentum 0.9, batch size 500 and total training epochs 150. The network architectures are Preact-ResNet18 (He et al., 2016) and VGG16 (Simonyan & Zisserman, 2015). The training and validation loss are shown in Figure 1 (inset). After 150 training epochs, the networks fluctuates with small training loss, and the minimal training loss for both case is ∼ 0.005. In order to confirm the existence of minimal valleys, we would like to project each point along the training path to the corresponding closest minimum 3 and check whether the projected path crosses any barriers. To determine whether two minima can be connected by a line segment in a minimal valley implying no barriers in between, we use line interpolation to connect these two minima and measure the training loss along the interpolating line. In practice, we call it line connectedness if we sample evenly 10 points in the line segment between consecutive minima and the training loss is smaller than a threshold for all 10 points.
To project a state to a closest minimum, the model is trained from that state with the same hyperparameters except using GD instead of SGD to remove noise. The stopping criterion for these auxiliary training tasks is that the training loss is less than the minimal training loss, which is ∼ 0.005. Ideally, we would project states after each iteration and check for line connectedness, but this involves large unnecessary calculations. In practice, if state B is obtained from n training iterations after state A, and A and B are not line connected, we insert C which is obtained from n/2 training iterations after A and check for line connectedness between A and C and between C and B. This process stops when there is no extra iteration between A and B or they are line connected. Starting from each pair of consecutive epochs as A and B, we obtain the projected path recursively. The training and validation losses on these two projected paths are shown in Figure 1 (main plot). The left path has 2700 projected points and the right has 4020. We see that after a few epochs, the training loss along the path remains close to zero (and hence minimal), which means that there exists a least one minimal valley connected to the model state found by regular SGD. Also SGD optimization happens within such a valley after the first few epochs. Furthermore, the validation loss varies along the valley.
More experiments can be found in Appendix A.2. Further discussion of violation of this minimal valley assumption can be found in Appendix A.6.
2.1.2 THE HESSIAN-NOISE COVARIANCE ALIGNMENT
Empirical observations have found that the noise covariance matrix is aligned with the Hessian (Zhang et al., 2018; Zhu et al., 2019; Jastrzbski et al., 2018). In fact, it was shown in Hoffer et al. (2017) that
C_{i,j} = (1/S)(1 − S/M) · (1/M) ∑_{k=1}^{M} ∂_{θ_i} l(x_k, θ) ∂_{θ_j} l(x_k, θ) ,
2The label-smoothing technique is used here to make a true minimum of the cost function exist. Regular cross entropy gives similar results.
3We use minimum here but specifically we mean a point with training loss less than a threshold.
where S is the mini-batch size and M is the total number of training samples. Considering negative log-likelihood loss functions, which are also the loss functions in our classification experiments, the loss L can be written as L = −(1/M) ∑_{k=1}^{M} log f(x_k, θ), where f is the output of the network and l(x_k, θ) = −log f(x_k, θ). Notice that

∂_{θ_i}∂_{θ_j}L(θ) = (1/M) ∑_{k=1}^{M} ∂_{θ_i} l(x_k, θ) ∂_{θ_j} l(x_k, θ) − (1/M) ∑_{k=1}^{M} (1/f(x_k, θ)) ∂_{θ_i}∂_{θ_j} f(x_k, θ) ,    (4)
where the left-hand side is the positive Hessian matrix for i, j ≤ n and the first term on the righthand side is the positive matrix proportional to the noise covariance matrix. Empirically, negative eigenvalues are exponentially small in magnitude compared to major positive ones when a model is close to a minimum (Sagun et al., 2017b) and negative eigenvalues can only come from the contribution of the second term on the right-hand side of equation (4). Therefore unless the second term is extremely asymmetrical, we can assume that its contribution is small compared to the first term. We will return to this in Section 4.1 and show that eigenvalues with small magnitude arise from higher-order curvature in the degenerate space even when the Hessian with respect to the bottom of the valley is characterized by the positive λis for i = 1, ..., n or zero directions. Ignoring the second term on the right hand side, Equation 4 becomes
C_{i,j} = (1/S)(1 − S/M) H_{i,j} .    (5)
We emphasize that this assumption will only be valid after the early phase of training, not from initialization. Further discussion and experimental support can be found in Appendix A.4.
2.1.3 TIMESCALE SEPARATION
The final assumption is that the dynamics relax to a stationary state in the non-degenerate space θ̄ but not in the degenerate space. This assumption is motivated by a timescale separation between the dynamics of θ̄, which relax quickly, and the dynamics of θ̂, which evolve much more slowly as the minimal valley is traversed.
Considering a simplified optimization problem where parameter directions undergo independent Levy-driven Ornstein-Uhlenbeck processes⁴, as for a quadratic loss with noise, the decay to a stationary state is exponential with a decay rate proportional to the corresponding eigenvalues (Abdelrazeq et al., 2014). Since the θ̄ correspond to the leading-order eigenvalues, their relaxation time is exponentially shorter than the evolution in the degenerate space. Therefore it is reasonable to assume a stationary state in the non-degenerate space θ̄ when studying the degenerate θ̂ dynamics. Further discussion and experimental support of this assumption using fluctuation-dissipation relations can be found in Appendix A.5.
4Simsekli et al. (2019) proposes to analyze SGD as an SDE driven by a Levy motion.
2.2 PROOF OF THEOREM 1
Proof. The stochastic gradient descent update for θ̂ is θ̂t+1 = θ̂t − η∇̂LB(θt). If we consider the average evolution of the eigenvalues λi for i = 1, ..., n from step t to t + 1, we have
〈[[λ_{i,t+1} − λ_{i,t}]]_{m.b.}〉 = 〈[[λ_i(θ̂_{t+1}) − λ_i(θ̂_t)]]_{m.b.}〉
  = 〈[[λ_i(θ̂_t − η∇̂L_B(θ_t)) − λ_i(θ̂_t)]]_{m.b.}〉
  = −η〈[[∇̂λ_i(θ̂_t)^T ∇̂L_B(θ_t)]]_{m.b.}〉 + O(η^2)
  = −η〈∇̂λ_i(θ̂_t)^T ∇̂L(θ_t)〉 + O(η^2) ,    (6)
where we performed a Taylor expansion from the second to the third equality and the fourth equality holds from equation (2).
Notice that ∇̂L(θ) = (1/2) ∑_{i=1}^{n} θ_i^2 ∇̂λ_i(θ̂). By keeping the leading-order term, we have

〈[[λ_{i,t+1} − λ_{i,t}]]_{m.b.}〉 ≃ −(η/2) ∑_{j=1}^{n} 〈θ_j^2〉 ∇̂λ_i(θ̂_t)^T ∇̂λ_j(θ̂_t) .    (7)
Considering the observables O(θ̄) ≡ θ_i^2 for i = 1, ..., n, the master equation (3) becomes

〈θ_i ∂_{θ_i}L〉 = (η/2) 〈C̃_{i,i}〉 .

Noticing that ∂_{θ_i}L = θ_i λ_i(θ̂), we thus have

〈θ_i^2〉 = η〈C̃_{i,i}〉 / (2λ_i(θ̂)) .    (8)
By the definition of the two-point noise matrix C̃ and the noise covariance matrix C, we also have

C̃_{i,i} = C_{i,i} + ∂_{θ_i}L(θ) ∂_{θ_i}L(θ) = C_{i,i} + θ_i^2 λ_i^2(θ̂) .    (9)
Based on the assumption that the noise covariance matrix is aligned with the Hessian and scales as (1/S)(1 − S/M), where S is the mini-batch size and M is the total number of training samples (Hoffer et al., 2017)⁵, we have

C_{i,i} = (1/S)(1 − S/M) λ_i(θ̂) .    (10)
Together with Eqs. 8, 9 and 10, we have, for i ≤ n,

〈θ_i^2〉 = η/(S(2 − ηλ_i)) · (1 − S/M) = η/(2S) · (1 − S/M) + O(η^2) .    (11)
Now Eq. 7 becomes

〈[[λ_{i,t+1} − λ_{i,t}]]_{m.b.}〉 = −η^2/(4S) · (1 − S/M) ∑_{j=1}^{n} ∇̂λ_i(θ̂_t)^T ∇̂λ_j(θ̂_t) + O(η^3) .
Next we consider the evolution of the trace of the Hessian, T = ∑_{i=1}^{n} λ_i. We have

〈[[T_{t+1} − T_t]]_{m.b.}〉 = −η^2/(4S) · (1 − S/M) ∑_{i,j=1}^{n} ∇̂λ_i(θ̂_t)^T ∇̂λ_j(θ̂_t) + O(η^3)
  ≈ −η^2/(4S) · (1 − S/M) ‖∑_{i=1}^{n} ∇̂λ_i(θ̂)‖^2
  ≤ 0 .

5 Refer to Assumption 2.1.2 for details.
Thus we have shown that the trace of Hessian decreases on average during SGD optimization. Notice that equality holds when either one of the following conditions is satisfied: first, if S = M so that SGD becomes full-batch GD, and second that ∑n i=1 ∇̂λi(θ̂) = 0, which implies ∇̂T (θ̂) = 0.
3 OPTIMIZATION DYNAMICS WITH NOISE BEYOND SGD
In Section 2, we concluded that, to leading order in the learning rate η, SGD dynamics reduces the Hessian trace. This result originates from the noise introduced by SGD with covariance given by (10). Interestingly, other forms of noise can be introduced into gradient descent in order to decrease other desired quantities instead of the trace. Here we present a theorem that relates the expected decrease of certain functions of the Hessian spectrum to a corresponding noise covariance. Below, 〈•〉 represents an average over the stationary state and [[•]] averages the quantity over the corresponding noise.
Theorem 2. In a minimal valley with the loss approximation in Equation 1, assuming that there exists a stationary state in the non-degenerate space θ̄, for any quantity f(λ) satisfying ∂f(λ)/∂λ_i ≥ 0 for i = 1, ..., n, where λ ≡ (λ_1, ..., λ_n)^T, if the loss is optimized with gradient descent along with external noise with diagonal covariance matrix

C_{i,i} = λ_i ∂f(λ)/∂λ_i ,    (12)

we have 〈[[f_{t+1} − f_t]]〉 ≤ 0, where f_t represents f(λ) evaluated at training step t. Equality holds when ∇̂f(λ(θ̂)) = 0.
Proof. Similar to the proof of Theorem 1, we have
〈[[f_{t+1} − f_t]]〉 = −(η/2) ∑_{i=1}^{n} (∂f/∂λ_i) ∑_{j=1}^{n} 〈θ_j^2〉 ∇̂λ_i^T ∇̂λ_j + O(η^2) .    (13)
With the imposed noise with covariance matrix (12), the master equation (3) and the relation between noise covariance matrix and two-point noise matrix in equation (9), we have
〈θ_i^2〉 = η/(2 − ηλ_i) · ∂f/∂λ_i = (η/2) ∂f/∂λ_i + O(η^2) .    (14)
Plugging in Equation 13 we have
〈[[f_{t+1} − f_t]]〉 = −(η^2/4) ‖∑_{i=1}^{n} (∂f/∂λ_i) ∇̂λ_i(θ̂)‖^2 + O(η^3) ≈ −(η^2/4) ‖∇̂f(λ(θ̂))‖^2 ≤ 0 .    (15)
Thus we have shown that f decreases on average during optimization. Equality holds when ∇̂f(λ(θ̂)) = 0. Notice that Theorem 1 is a special case with f(λ) = ∑n i=1 λi.
Furthermore, we have the following corollary:
Corollary 2.1 (Determinant-Decreasing Noise). With the same assumptions as in Theorem 2, let f(λ) = log ∏_{i=1}^{n} λ_i and C_{i,i} = C, where C is an arbitrary positive constant. Then
〈[[Det_{t+1} − Det_t]]〉 ≤ 0 , where Det ≡ ∏_{i=1}^{n} λ_i is the non-degenerate determinant.
Corollary 2.1 indicates that the non-degenerate determinant will decrease on average during optimization if we introduce isotropic noise in the non-degenerate space. It is known (Smith & Le, 2018) that the Bayesian evidence, minimization of which minimizes a PAC-Bayes generalization bound, has a contribution from the non-degenerate determinant. This contribution is also called the Occam factor. Our result thus leads to a new algorithm where one introduces isotropic noise into the non-degenerate space. Theorem 2 will allow future work to add designed noise that applies an entropic force on other properties of the Hessian yet to be determined.
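As a sketch of what such an algorithm might look like, the snippet below adds isotropic Gaussian noise only along a supplied set of non-degenerate Hessian eigendirections on top of a plain gradient-descent step; the toy quadratic, noise scale, and function names are illustrative assumptions rather than the authors' procedure.

```python
# Sketch of an optimizer step in the spirit of Corollary 2.1: gradient descent
# plus isotropic noise restricted to the non-degenerate (top-curvature) subspace.
import numpy as np

def gd_step_with_subspace_noise(theta, grad_fn, top_eigvecs, lr=0.1,
                                noise_std=0.05, rng=None):
    """top_eigvecs: (n, dim) orthonormal rows spanning the non-degenerate subspace."""
    rng = np.random.default_rng() if rng is None else rng
    coeffs = rng.normal(0.0, noise_std, size=top_eigvecs.shape[0])
    noise = coeffs @ top_eigvecs        # isotropic noise restricted to that subspace
    return theta - lr * (grad_fn(theta) + noise)

# Toy usage: quadratic loss 0.5 * theta^T diag(4, 1, 0, 0) theta, whose
# non-degenerate subspace is spanned by the first two coordinate axes.
h = np.array([4.0, 1.0, 0.0, 0.0])
grad_fn = lambda th: h * th
vecs = np.eye(4)[:2]
rng = np.random.default_rng(0)
theta = np.ones(4)
for _ in range(200):
    theta = gd_step_with_subspace_noise(theta, grad_fn, vecs, rng=rng)
print(theta.round(3))   # first two coordinates fluctuate near 0; the flat directions stay at 1
```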
4 EXTENDED ANALYSIS AND EXPERIMENTS
4.1 NEGATIVE EIGENVALUES OF THE HESSIAN
Empirically, small negative eigenvalues arise even with small training loss, which seems to indicate that the model is at a saddle point instead of a minimum (Sagun et al., 2017b; Ghorbani et al., 2019). This however is consistent with the loss in Equation 1. The loss is minimal when θi = 0 for i ≤ n. However, when we use a trained instance to estimate the eigenvalues of the Hessian, we only guarantee that θi is close to zero (and thus nonzero training loss) for i ≤ n. Therefore, there could exist negative eigenvalues which originate from the second derivative of θi for i > n. They are originally zero at the minimum with θi = 0 for i ≤ n, and their magnitude is suppressed by θ2i for i ≤ n which is related to the small training loss. In Appendix A.3, we predict that the negative eigenvalues on average are proportional to the training loss. To confirm this, we set up a simple experiment to classify MNIST data with a small training set of 500 randomly selected samples, with a single hidden layer fully-connected network with 6220 parameters and loss given by label-smoothed cross entropy, again to make a genuine minimum. We first obtain a minimum with gradient descent optimization at learning rate 0.1 and 10k training epochs and use this as initialization for later. The model is then further trained from this initialization with different batch sizes from 5 to 250 and learning rates from 0.001 to 0.1 in order to equilibrate to various training losses. Then the negative eigenvalues are calculated and the result is shown in Fig. 2(a). The result shows a linear relationship between training loss and the sum of negative eigenvalues, as predicted.
4.2 THE EVOLUTION OF TRACE AND DETERMINANT WITH A TOY MODEL
In this section, we use two toy models to test our previous theoretical analysis. In Section 2, we showed that the trace evolution rate is proportional to η². Notice that the two factors of η have different origins. One is from the expansion of λ and the gradient descent step as in Eq. 6. The other contribution is the equilibrium statistics in the non-degenerate directions as in Eq. 8. Furthermore, the coefficient of the proportionality relation also depends on ∇̂λ. Therefore, to test this prediction for how the learning rate affects the trace evolution, we train a model multiple times with different η. When doing this, the initialization for each run is in equilibrium in the non-degenerate space, and we choose a loss function for which ∇̂λ is constant. We design a simple four-dimensional toy model with the loss L(θ) = |θ_3 + θ_4| θ_1^2 + |θ_3 + 2θ_4| θ_2^2 to test this effect. We initialize the model to make sure that |θ_3 + θ_4| and |θ_3 + 2θ_4| don’t change sign during a few iterations of training. For each η, we first train for 1000 iterations with SGD noise to equilibrate the model and calculate the trace. Then the model is trained for another 1000 iterations and we calculate the trace evolution. The result is shown in Fig 2(b), which shows a linear relation as predicted.
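A rough numerical sketch of this first toy model is given below: gradient descent on L(θ) = |θ_3 + θ_4| θ_1² + |θ_3 + 2θ_4| θ_2² with Gaussian noise on (θ_1, θ_2) whose variance is aligned with the corresponding Hessian eigenvalues, as a stand-in for SGD noise. The step size, noise scale, and iteration count are guesses, but the valley-bottom trace 2(|θ_3 + θ_4| + |θ_3 + 2θ_4|) should drift downward on average, in line with Theorem 1.

```python
# Toy model: Hessian-aligned noise on the non-degenerate coordinates (t1, t2)
# decreases the valley-bottom Hessian trace on average.
import numpy as np

def grad(t):
    t1, t2, t3, t4 = t
    s1, s2 = np.sign(t3 + t4), np.sign(t3 + 2 * t4)
    return np.array([
        2 * abs(t3 + t4) * t1,
        2 * abs(t3 + 2 * t4) * t2,
        s1 * t1**2 + s2 * t2**2,
        s1 * t1**2 + 2 * s2 * t2**2,
    ])

def trace(t):
    return 2 * abs(t[2] + t[3]) + 2 * abs(t[2] + 2 * t[3])

rng = np.random.default_rng(0)
eta, noise = 0.05, 0.3
t = np.array([0.1, 0.1, 1.0, 1.0])
traces = [trace(t)]
for _ in range(2000):
    lam = 2 * np.array([abs(t[2] + t[3]), abs(t[2] + 2 * t[3])])
    g = grad(t)
    g[:2] += rng.normal(0.0, noise * np.sqrt(lam))   # noise variance aligned with the Hessian
    t = t - eta * g
    traces.append(trace(t))
print(traces[0], np.mean(traces[-100:]))             # the trace is smaller late in training
```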
Next, we measure the evolution of the Hessian trace and determinant after introducing noise with different covariance structures. Our theory predicts that SGD noise decreases the trace while isotropic noise in the non-degenerate subspace decreases the determinant. To test this, we design a toy model where the behavior of these two quantities is anti-correlated. The loss can be written as L(θ) = (4 − 2e^{−θ_3^2−θ_4^2}) θ_1^2 + e^{−(θ_3−θ_4)^2} θ_2^2. We train the model with the same initialization and with the two different noise structures. The result is shown in Fig. 2(c)-(d), consistent with our analysis.
4.3 TRACE EVOLUTION ALONG PROJECTED PATH
In Section 2, we presented two experiments to support the existence of a minimal valley. Recall that the projected paths follow along the bottom of the valley associated with the SGD dynamics. Our theory predicts that the Hessian trace along the projected paths should decrease during optimization. To confirm this, we use the method in Bai et al. (1996) to estimate the Hessian trace, and the results are shown in Figure 3, in which the values and error bars are calculated by the mean and standard deviation of 10 different initializations. As seen in the figure, the Hessian trace indeed decreases on average in both cases, which is consistent with our theoretical prediction. Notice that test performance does not monotonically improve during optimization. Indeed the trace continues decreasing slightly even when validation loss goes up, indicating that the trace itself is not sufficient to describe generalization.
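The trace estimates above rely on a stochastic estimator; a minimal sketch in the spirit of Bai et al. (1996) (Hutchinson-style probing with Hessian-vector products) is shown below. The hvp callable and probe count are placeholders; in practice the Hessian-vector product would come from automatic differentiation on the training loss.

```python
# Stochastic trace estimation: E[v^T H v] = tr(H) for random v with identity
# covariance, so the trace can be estimated from Hessian-vector products alone.
import numpy as np

def estimate_trace(hvp, dim, num_probes=100, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=dim)      # Rademacher probe
        total += v @ hvp(v)
    return total / num_probes

# Toy check against an explicit symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T
print(estimate_trace(lambda v: H @ v, 50, num_probes=500, rng=rng), np.trace(H))
```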
5 CONCLUSION
In this paper, we showed that within a minimal valley, the trace of the loss function Hessian decreases on average. We also demonstrated how to design the covariance structure of added noise during optimization to affect other properties of the Hessian’s evolution, opening the door to new algorithms. The analysis in this paper can be extended and generalized to models trained on much larger data sets.
A APPENDIX
A.1 PROOF OF LEMMA 1
To study the dynamics of SGD, we investigate the model behavior from θ_t to θ_{t+1} under one training step and assume that θ_{t+1} is close to θ_t. In this subsection, we derive the properties of the loss function in a small neighbourhood, B(θ_t), of θ_t, which includes θ_{t+1}. We consider the model in a minimal valley with a set M of minima and define a projection P that maps any point θ in B(θ_t) to the minimum in M that is closest to θ, i.e.

P(θ) ≡ argmin_{θ* ∈ M} ‖θ − θ*‖_2 .    (16)
Because M is connected by the definition of a minimal valley, we assume that P is also continuous in B(θ_t). We denote the dimensions of the whole parameter space and of M by N and N − n, respectively. We can then make a quadratic approximation to the loss function in B(θ_t),

L(θ) = L* + (1/2) (θ − P(θ))^T H(P(θ)) (θ − P(θ)) ,    (17)
where we expand the loss function at the minimum that is closest to θ, and H(P(θ)) is the Hessian depending on the position P(θ) in the valley. L∗ is a constant by definition of a minimal valley and will be ignored in the following derivations.
We define a minimal path to be a smooth path in a minimal valley where every point on the path is a minimum. For any such path passing through a minimum θ*, the tangent of the minimal path at θ* lies in the null space of the Hessian at θ* by definition. Precisely, for an arbitrary minimal path θ*(µ) : [0, 1] → M, we have

H(θ*(µ)) dθ*(µ)/dµ = 0 .    (18)
Next we have the following lemma,
Lemma 2. Let J(θ) be the N × N Jacobian matrix of the function P at θ, defined as J_{ij} = dP(θ)_i / dθ_j. We have

H(P(θ)) J(θ) = 0 .    (19)
Proof. Notice that θ∗(µ) ≡ P(θ + µ∆θ) forms a minimal path through θ where µ ∈ [0, 1] and ∆θ is an arbitrary direction in the parameter space. We have
dθ*(µ)/dµ |_{µ=0} = J(θ)∆θ .    (20)

By Eq. 18, we have

H(P(θ)) J(θ) ∆θ = 0 .    (21)

Since ∆θ is arbitrary, we have Eq. 19.
Next we consider the diagonalization of the Hessian,

H(P(θ)) = V(P(θ))^T S(P(θ)) V(P(θ)) ,    (22)

where S(P(θ)) = diag(λ_1(P(θ)), ..., λ_n(P(θ)), 0, ..., 0) and λ_i > 0 for 1 ≤ i ≤ n. As suggested by Gur-Ari et al. (2018), we assume that the change of the eigenspace is negligible in the small neighbourhood B(θ_t) and denote the constant orthogonal matrix V ≡ V(P(θ_t)) ≈ V(P(θ)). We then perform the coordinate transformation

φ = V θ ,    (23)

and the loss function becomes
L(φ) = (1/2) ∑_{i=1}^{n} (φ_i − P̃(φ)_i)^2 λ_i(P̃(φ)) ,    (24)

where P̃(φ) is the same projection map as P(θ) but in the φ coordinates. Notice here that P̃(φ) is the projection to the minimum with the least L-2 distance to φ, because the coordinate transformation is orthogonal and distance-preserving. We have
P̃(φ) = V P(θ) .    (25)

We have the following result.
Proposition 3. P̃(φ)_i is constant for i ≤ n.
Proof. In the φ coordinates, by Eqs. 23 and 25, the Jacobian of P̃(φ), J̃(φ)_{ij} ≡ dP̃(φ)_i / dφ_j, is related to J(θ) as

J̃(φ) = V J(θ) V^T .    (26)

Therefore Lemma 2 becomes

S(P(θ)) J̃(φ) = 0 .    (27)

Noticing that [S(P(θ)) J̃(φ)]_{ij} = ∑_{k=1}^{n} λ_k δ_{ik} J̃(φ)_{kj}, where δ_{ik} is the Kronecker delta, we have

[S(P(θ)) J̃(φ)]_{ij} = λ_i J̃(φ)_{ij} , i ≤ n .    (28)

From Eq. 27 and λ_i > 0, we have

J̃(φ)_{ij} = 0 , i ≤ n .    (29)

Since J̃(φ)_{ij} ≡ dP̃(φ)_i / dφ_j = 0 for i ≤ n and all j, P̃(φ)_i is constant for i ≤ n.
We denote P̃(φ)_i as φ*_i for i ≤ n. Therefore we can define a global translation vector φ* = (φ*_1, ..., φ*_n, 0, ..., 0)^T, consider the new coordinates ψ = φ − φ*, and obtain

L(ψ) = (1/2) ∑_{i=1}^{n} ψ_i^2 λ_i(P̄(ψ)) ,    (30)
where P̄(ψ) is also the projection to the minimum with the least L-2 distance to ψ. Clearly we have the set of minima of the loss function 30 as {ψ|ψi = 0, i ≤ n} and the closest minimum to ψ in the L-2 distance is ψ∗ = (0, ..., 0, ψn+1, ..., ψN )T. Therefore we have
P̄(ψ) = (0, ..., 0, ψ_{n+1}, ..., ψ_N)^T ,    (31)

and

L(ψ) = (1/2) ∑_{i=1}^{n} ψ_i^2 λ_i(ψ_{n+1}, ..., ψ_N) .    (32)
Notice that, since the coordinate transformations from θ to ψ only involve an orthogonal rotation and a translation, the SGD update rule keeps its form. To see this, we first have

∇_θ = V^T ∇_φ = V^T ∇_ψ .    (33)

The SGD update on θ then becomes

θ_{t+1} = θ_t − η V^T ∇_ψ L(ψ)  ⇒  φ_{t+1} = φ_t − η ∇_ψ L(ψ)  ⇒  ψ_{t+1} = ψ_t − η ∇_ψ L(ψ) .    (34)
In the main text, we replace the notation ψ with θ for symbol simplicity.
A.2 ADDITIONAL EXPERIMENTS
To further support our assumption of connected minima, we conducted extra experiments including DenseNet (Huang et al., 2017) on CIFAR100, and explored the landscape by introducing randomized labels on a subset of the training data to initialize subsequent training on purely clean data (details in Section A.2.2).
A.2.1 DENSENET ON CIFAR100
We trained a DenseNet with batch normalization (number of blocks = [6,12,24,16] and growth rate=12) on CIFAR100 and followed the same pipeline as described in the main text experiments. The learning rate is 0.1 and the batch size is 512. The result is shown in Figure 4(a) which supports the connectedness assumption as well as the result of the Hessian trace decreasing under SGD.
A.2.2 4-LAYER CONVNET ON MNIST AND RESNET18 ON CIFAR10 WITH RANDOMIZED LABELS
The setup of these experiments is as follows: we randomly split the training set into two sets, A and B. For MNIST and CIFAR10 we let the number of samples in A be 40000. Then we shuffle the labels in set B. For the rest of this section, we call A the training set and B the shuffle set. The loss minimum, Hessian trace, etc. are all with respect to the training set A.
First we will obtain minima with worse generalization than the minima found by regular SGD. To do so, we train the model with the union of the training set A and shuffle set B until convergence. Then we further train the model with only A using gradient descent so the model reaches a minimum. This minimum corresponds to the left end point of the path in Figure 4 (b) and (c).
We then train the model from this minimum with training set A alone using SGD. We find that the model starts to diffuse and the validation loss becomes smaller even though it is already at a minimum. Using the same method described in the main text, we project this path to minima, and the results are shown in Figure 4 (b) and (c).
In both cases, we found that the minima found by SGD are connected, and we further estimated the trace along both paths and found that it decreases on average. These results support the connectedness and degeneracy assumptions and our result that SGD decreases the Hessian trace on average. They also illustrate how SGD helps the model explore the loss landscape and escape bad minima.
A.3 THE SUM OF NEGATIVE EIGENVALUES
In fact, if we consider an equilibrium of θ_i for i ≤ n, we have 〈L〉 = L* + C ∑_{i=1}^{n} λ_i and 〈∂_j∂_k L〉 = C ∑_{i=1}^{n} ∂_j∂_k λ_i for j, k > n from Eq. 11, where C = η/(4S) · (1 − S/M). Thus, as we change the learning rate or batch size, and hence C, we have 〈∂_j∂_k L〉 ∝ 〈L〉 − L* for states nearby in the minimal valley, because both are proportional to C while ∑_{i=1}^{n} λ_i and ∂_j∂_k λ_i are approximately constant for such states. Usually in deep neural networks the training loss can go to nearly zero with a large number of training iterations and a small learning rate (Zhang et al., 2017; Du et al., 2019), so we ignore L* and obtain 〈∂_j∂_k L〉 ∝ 〈L〉. This result is tested and verified in Figure 2(a).
A.4 EXPERIMENTS ON THE HESSIAN-NOISE COVARIANCE ALIGNMENT ASSUMPTION
To further support our assumption of the Hessian-noise covariance alignment, we trained 2-layer neural networks on the CIFAR10 dataset with label-smoothed cross-entropy loss and mean-squared loss (MSE) and calculated the cosine similarity between the Hessian of the loss (H) and the noise covariance (C).
The cosine similarity between two matrices A and B is defined as
CS(A,B) = Tr(AB^T) / √(Tr(AA^T) Tr(BB^T)) ,    (35)
where Tr is the trace operator. It is easy to show that A and B are aligned as A = kB, k > 0 if and only if CS(A,B) = 1.
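Equation 35 is straightforward to implement; the snippet below is a direct numpy translation, checked on random stand-in matrices rather than actual Hessians or noise covariances.

```python
# Matrix cosine similarity of Eq. (35): 1.0 iff B = k*A with k > 0.
import numpy as np

def cosine_similarity(A, B):
    return np.trace(A @ B.T) / np.sqrt(np.trace(A @ A.T) * np.trace(B @ B.T))

rng = np.random.default_rng(0)
H = rng.standard_normal((70, 70))
print(cosine_similarity(H, 3.0 * H))                         # exactly 1.0: aligned
print(cosine_similarity(H, rng.standard_normal((70, 70))))   # near 0 for unrelated random matrices
```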
In order to make calculation of the Hessian and covariance tractable, the networks were chosen to have one hidden layer with 70 nodes, and the dataset is transformed into grey scale, resized to 7×7 pixels, and normalized. The network with label-smoothed cross-entropy loss is trained with SGD optimization with learning rate 0.001 and batch size 512, and the network with mean-squared loss is trained with SGD optimization with learning rate 0.1 and batch size 512.
As shown in Figure 5, in both cases the cosine similarity between the Hessian and noise covariance is larger than 0.9 after several epochs of initial training. Notice that two random matrices with the same size as H and C in these experiments are almost orthogonal with the cosine similarity of 0. These experiments are consistent with our assumption of Hessian-noise covariance alignment–after the early phase of training–in the main text.
A.5 EXPERIMENT ON THE TIMESCALE SEPARATION ASSUMPTION
In this section, we provide support for our assumption of timescale separation through experiments with a small convolutional neural network trained on CIFAR10. In order to observe the relaxation to steady-state, we adopt the method in Yaida (2019), which shows that
〈θ · ∇L〉 = (η(1 + µ) / (2(1 − ν))) 〈v²〉    (FDR)

is satisfied when the model is at stationarity, where µ and ν are the momentum and dampening in SGD optimization, respectively, and v is the velocity. It is easy to show that

〈P(θ) · P(∇L)〉 = (η(1 + µ) / (2(1 − ν))) 〈P(v)²〉 ,    (FDR’)
when there exists a stationary state in a subspace S and P is the projection operator to S. We use OL and OR to denote the quantities of the left and right side respectively of FDR and FDR’. Therefore the quantity |ŌL/ŌR − 1| can be used as a criterion to verify how close the model is to a stationary state in a specific subspace.
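To make the criterion concrete, the sketch below monitors |ŌL/ŌR − 1| for plain SGD (µ = ν = 0) on a toy quadratic loss with additive gradient noise; the loss, noise scale, and number of steps are illustrative assumptions, not the paper's CNN experiment.

```python
# FDR-based stationarity check: the ratio of the running averages of the two
# sides of the relation approaches 1 once the dynamics equilibrate.
import numpy as np

rng = np.random.default_rng(0)
eta, lam, sigma = 0.05, np.array([4.0, 1.0]), 0.5
theta = np.zeros(2)
OL, OR = [], []
for _ in range(20000):
    v = lam * theta + rng.normal(0.0, sigma, size=2)   # stochastic gradient (= velocity when mu = nu = 0)
    OL.append(theta @ (lam * theta))                    # theta . full-batch gradient
    OR.append(0.5 * eta * (v @ v))
    theta = theta - eta * v
criterion = abs(np.mean(OL) / np.mean(OR) - 1.0)
print(criterion)        # small once the running averages approach stationarity
```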
To show the existence of timescale separation, we train a small CNN on CIFAR10. This network has 3 convolutional layers with kernel size of 3 and output channel number of 15, 20, 10 respectively. Throughout training, we estimate |ŌL/ŌR − 1| on the raw parameter space and top-1 eigenspace of the Hessian. The result is shown in Figure 6.
Figure 6 shows that the criterion in the top-1 eigenspace decays much faster than in the raw parameter space. After several epochs, |ŌL/ŌR − 1| in the top-1 subspace is already less than 10^-2, which is a threshold set in Yaida (2019) to indicate sufficient closeness to a stationary state, while the state in the whole parameter space is still far from equilibrium after 50 epochs of training. This experiment fully supports our assumption of timescale separation in the main text.
A.6 EXTENSION TO NON-DEGENERATE LANDSCAPE
This section explores the situation where the minimal valley assumption is slightly violated, in which case the bottom of the landscape is not completely degenerate but has a small slope. The loss function 1 can then be written as
$$L(\theta) = L^* + \frac{1}{2}\sum_{i=1}^{n}\theta_i^2\,\lambda_i(\theta_{n+1},\ldots,\theta_N) + g(\theta_{n+1},\ldots,\theta_N)\,, \qquad (36)$$
where the function g describes how the bottom of the valley varies in the previously degenerate directions. Following the same proof as in Section 2.2, it is easy to obtain
$$\langle[[T_{t+1}-T_t]]_{\mathrm{m.b.}}\rangle \approx -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\Big\|\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\Big\|^2 - \eta\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)^{\mathsf T}\hat\nabla g(\hat\theta)\,. \qquad (37)$$
At the end of training, the model reaches equilibrium, where
〈[[Tt+1 − Tt]]m.b.〉 = 0 .
Thus, from Equation 37 we have
$$\hat\nabla g(\hat\theta)^{\mathsf T}\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta) = -\frac{\eta}{4S}\left(1-\frac{S}{M}\right)\Big\|\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\Big\|^2 \qquad (38)$$
This reveals explicitly the effect of SGD noise as an entropic force during training. Gradient descent and its variants drive the model toward states with θ̄ = 0 and ∇̂g(θ̂) = 0 according to the energy function 36; this exerts an energetic force moving the model toward the lowest energy (smallest loss). When training with noise, however, ∇̂g(θ̂) = 0 may not be satisfied when the locations with the smallest trace and those with ∇̂g(θ̂) = 0 do not coincide, because the right-hand side of Equation 38 is nonzero whenever $\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\neq 0$. That is, the model is pushed away from ∇̂g(θ̂) = 0 by an entropic competitor to the energetic force.
We use a toy model to visualize this effect. Consider the loss function $L(\theta_1,\theta_2) = \frac{1}{2}(\theta_2^2+1)\theta_1^2 + \alpha\theta_2$. We train this function by GD and SGD with $\alpha = 10^{-4}$, learning rate $\eta = 0.01$, noise strength $\frac{1}{S}-\frac{1}{M} = 1$ and initialization $(\theta_1,\theta_2) = (0.1, 0.1)$. The result is shown in Figure 7. The left column corresponds to training with gradient descent, where θ1 drops to zero very fast and θ2 continues decreasing toward negative infinity. When noise is introduced, θ2 stops decreasing once the entropic and energetic forces reach equilibrium. From Equation 38 it is easy to show that 〈θ2〉 = −0.02, which is consistent with the right column of Figure 7. | 1. What is the focus of the paper regarding noise in gradient-based updates during neural network training?
2. What are the interesting claims and observations made in the paper, particularly regarding the impact of noise on the geometry of the Hessian matrix?
3. What is the main concern of the reviewer regarding the paper's clarity?
4. Where does the reviewer suggest the timescale separation is used in the paper, and for what purpose?
5. Are there any missing citations or references in the paper that the reviewer has identified? | Review | Review
This paper considers the problem of how noise coming from the gradient-based update affects the geometry of the hessian matrix when training a neural network.
The paper makes an interesting claim that around a local minimum, if the noise in SGD is aligned with the hessian matrix of the network, then doing SGD update is implicitly minimizing the trace of the hessian matrix, biasing the current point towards a wide valley.
The paper also makes a very interesting observation that isotropic noise will decrease the determinant of the hessian while the SGD noise will decrease the trace of the hessian matrix.
I find the theorem in the paper quite interesting, especially Lemma 1 stating that the loss function can be locally approximated by a quadratic function whose variables are the non-degenerate directions and the coefficients only depends on the degenerate directions. This Lemma appears to be quite novel to me. With this Lemma, it is then easy to see that as long as the noise is aligned with the non-degenerate directions, then the trace of the Hessian is decreasing in expectation at every step.
The main question I have about this paper is the clarity: First of all, the main concept "noise covariance matrix aligned with the hessian" is not mathematically defined anywhere. I can intuitively understand the term from the explanation in section 2.1.2 but I can not formally justify the correctness. Second, where is the timescale separation used? Is it for justifying the assumption of the local stationary point approximation or for Lemma 1 or something else? At the current level of writing in the paper, I can not formally verify the theorems.
Missing citation: "An Alternative View: When Does SGD Escape Local Minima?"
After Rebuttal: I have read the authors' responses and acknowledge the sensibility of the statement. |
ICLR | Title
How noise affects the Hessian spectrum in overparameterized neural networks
Abstract
Stochastic gradient descent (SGD) forms the core optimization method for deep neural networks. While some theoretical progress has been made, it still remains unclear why SGD leads the learning dynamics in overparameterized networks to solutions that generalize well. Here we show that for overparameterized networks with a degenerate valley in their loss landscape, SGD on average decreases the trace of the Hessian of the loss. We also generalize this result to other noise structures and show that isotropic noise in the non-degenerate subspace of the Hessian decreases its determinant. In addition to explaining SGDs role in sculpting the Hessian spectrum, this opens the door to new optimization approaches that may confer better generalization performance. We test our results with experiments on toy models and deep neural networks.
1 INTRODUCTION
Deep neural networks have achieved remarkable success in the past decade on tasks that were out of reach prior to the era of deep learning. Yet fundamental questions remain regarding the strong performance of over-parameterized models and optimization schemes that typically involve only first-order information, such as stochastic gradient descent (SGD) and its variants.
Regarding generalization, it has been noted that flat minima with small curvature tend to generalize better than sharp minima (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017). This has been argued by nonvacuous PAC-Bayes bounds (Dziugaite & Roy, 2017) and Bayesian evidence (Smith & Le, 2018). But why does SGD bias learning towards flat minima? One possible explanation was proposed by Zhang et al. (2018) in terms of energy-entropy competition. Jastrzbski et al. (2018) also suggests that isotropic noise in SGD helps networks escape sharp minima with large Hessian determinant. However, previous theoretical analyses assume that minima are isolated and non-singular. In contrast, Sagun et al. (2017b) finds that most of the eigenvalues of the Hessian of the loss function at a minimum are close to zero, indicating highly degenerate minima. The degeneracy is further supported by results on mode connectedness (Garipov et al., 2018; Draxler et al., 2018). Furthermore, it is shown that minima found from different initializations and the same optimization scheme are connected with essentially no barriers between them. Nguyen (2019) also shows theoretically that for a class of deep overparameterized neural nets with piecewise linear activation functions, all of the global minima are connected within a unique and potentially very large global valley.
In this paper, we prove that for models whose loss has a ”minimal valley” structure, defined below, optimization via SGD decreases the trace of the Hessian of the loss, by utilizing recent fluctuation-dissipation relations (Yaida, 2019). Furthermore, we derive the noise covariance matrix that would result in the reduction of other potentially desired quantities such as the Hessian determinant, leading towards the design of new optimization algorithms. We present experiments on toy models and deep neural networks to confirm our predictions.
2 MAIN THEOREM
In this section, we present our main theorem - how noise during optimization affects the Hessian of the loss function when the loss landscape locally takes the shape of a degenerate valley. More specifically, let N and n be the dimension of the total parameter space and non-degenerate space, respectively, and
consider a general loss function L(w) where w ∈ R^N are the model parameters.1 Since the curvature around the minima can vary, we approximate the loss function in such a valley around a degenerate minimum with a modified quadratic form
$$L(w) = L^* + \tfrac{1}{2}\,(w-\mathcal P(w))^{\mathsf T} H(\mathcal P(w))\,(w-\mathcal P(w))\,,$$
where P is a function that projects a point w in parameter space to the nearest minimum, and H is the Hessian, a function of the location of the projected minimum. We have the following lemma:
Lemma 1. For an arbitrary point w and its neighborhood in the valley, there exists an orthogonal transformation Q and a translation vector v such that the loss function in the new coordinate system θ = Qw − v has the following form,
$$L(\theta) = L^* + \frac{1}{2}\sum_{i=1}^{n}\theta_i^2\,\lambda_i(\theta_{n+1},\ldots,\theta_N)\,, \qquad (1)$$
where λis are the positive eigenvalues of the loss function Hessian for the non-degenerate space and depend on the position in the degenerate space. Also, note that the gradient descent equation is invariant under this transformation.
A detailed, constructive proof of this lemma can be found in Appendix A.1. Notice that this is a generalization of Morse’s Lemma (Callahan, 2010). In the rest of this section, we denote the nondegenerate and degenerate subspaces by θ̄ = (θ1, ..., θn)T and θ̂ = (θn+1, ..., θN )T, respectively. Similarly, the gradients of θ̄ and θ̂ are denoted by ∇̄ and ∇̂, respectively. Notice that at a minimum where θ̄ = 0, the λis are the only nonzero Hessian eigenvalues.
Next we provide a quick review of the relevant fluctuation-dissipation relation formalism for stochastic gradient descent (Yaida, 2019). We denote the loss for a random batch B of training samples as L_B(θ). Clearly we have
$$[[\nabla L_B(\theta)]]_{\mathrm{m.b.}} = \nabla L(\theta)\,, \qquad (2)$$
where [[•]]m.b. represents the average over all mini-batch realizations. If there exists a stationary state for θ̄, p_ss(θ̄), and the stationary average is defined as $\langle O(\bar\theta)\rangle \equiv \int d\bar\theta\, p_{\mathrm{ss}}(\bar\theta)\, O(\bar\theta)$ where O(θ̄) is any observable of θ̄, it is straightforward to derive the following master equation:
〈O(θ̄)〉 = 〈[[O[θ̄ − η∇̄LB(θ)]]]m.b.〉 . (3)
We denote the two-point noise matrix as C̃i,j(θ) ≡ [[∂θiLB(θ)∂θjLB(θ)]]m.b. and the noise covariance matrix Ci,j(θ) ≡ C̃i,j(θ)− ∂θiL(θ)∂θjL(θ). Now we present our main theorem: Theorem 1. When the loss can be locally approximated as in Equation 1, assuming that the nondegenerate space θ̄ is in a stationary state at time t and that the noise covariance matrix is aligned with the Hessian, we have 〈[[Tt+1 − Tt]]m.b.〉 ≤ 0 , where T = ∑n i=1 λi is the trace of the Hessian and Tt represents the trace at training step t. Equality holds when∇LB(θ) = ∇L(θ) or ∇̂T (θ̂) = 0.
Theorem 1 indicates that the change of the Hessian trace during SGD optimization is non-positive on average. A trivial condition for equality is optimization with gradient descent (GD) instead of its stochastic variant because the stationary state for noiseless optimization is a constant state with vanishing gradient. An alternate condition for equality is that the trace function of θ̄ reaches a minimum or saddle point.
Before proving Theorem 1, we will first analyze the assumptions made. We emphasize at the outset that we expect our assumptions to be valid only after an early phase of training corresponding to ∼ 10 epochs in our experiments.
2.1 EVIDENCE FOR THE ASSUMPTIONS MADE
In this subsection, we will analyze the assumptions in Theorem 1 and present some experiments that support their validity.
1Here the non-degenerate space corresponds to the subspace of the Hessian with non-zero eigenvalues. Note that we do not require any specific relationship between n and N, e.g. n ≪ N.
2.1.1 MINIMAL VALLEY
Our theory relies on the existence of a degenerate valley in the loss function defined previously via Equation 1. In addition to degeneracy, we furthermore assume that there exists a region of connected degenerate minima in the landscape of the loss function, in particular for overparameterized models. SGD or its variants then selects one of the degenerate solutions. Such degeneracy has been observed empirically where most of the eigenvalues of the loss Hessian from various models and tasks are zero or close to zero (Sagun et al., 2017a;b). Furthermore, the connectedness is further supported by results of mode connectedness (Garipov et al., 2018; Draxler et al., 2018). Nguyen (2019) also shows theoretically that for a class of deep overparameterized neural nets with piecewise linear activation functions, all of the global minima are connected within a unique and potentially very large global valley.
Here we present additional experiments to support the existence of such minimal valleys, i.e. basins of attraction of connected, degenerate minima that can be locally described by Equation 1. These experiments will also be used to analyze and confirm our theory later. We consider classification of CIFAR10 with label-smoothing cross entropy (Szegedy et al., 2016)2, constant learning rate of 0.1, momentum 0.9, batch size 500 and total training epochs 150. The network architectures are Preact-ResNet18 (He et al., 2016) and VGG16 (Simonyan & Zisserman, 2015). The training and validation loss are shown in Figure 1 (inset). After 150 training epochs, the networks fluctuate with small training loss, and the minimal training loss for both cases is ∼ 0.005. In order to confirm the existence of minimal valleys, we would like to project each point along the training path to the corresponding closest minimum 3 and check whether the projected path crosses any barriers. To determine whether two minima can be connected by a line segment in a minimal valley, implying no barriers in between, we use line interpolation to connect these two minima and measure the training loss along the interpolating line. In practice, we call it line connectedness if we sample 10 evenly spaced points in the line segment between consecutive minima and the training loss is smaller than a threshold for all 10 points.
To project a state to a closest minimum, the model is trained from that state with the same hyperparameters except using GD instead of SGD to remove noise. The stopping criterion for these auxiliary training tasks is that the training loss is less than the minimal training loss, which is ∼ 0.005. Ideally, we would project states after each iteration and check for line connectedness, but this involves large unnecessary calculations. In practice, if state B is obtained from n training iterations after state A, and A and B are not line connected, we insert C, which is obtained from n/2 training iterations after A, and check for line connectedness between A and C and between C and B. This process stops when there is no extra iteration between A and B or they are line connected. Starting from each pair of consecutive epochs as A and B, we obtain the projected path recursively. The training and validation losses on these two projected paths are shown in Figure 1 (main plot). The left path has 2700 projected points and the right has 4020. We see that after a few epochs, the training loss along the path remains close to zero (and hence minimal), which means that there exists at least one minimal valley connected to the model state found by regular SGD. Also, SGD optimization happens within such a valley after the first few epochs. Furthermore, the validation loss varies along the valley.
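The line-connectedness test between two trained states can be written as a short routine; the sketch below is a hedged illustration in PyTorch in which the model objects, loss function, data loader, and the loss threshold are placeholders supplied by the caller rather than the exact setup above.

```python
import copy
import torch

def line_connected(model_a, model_b, loss_fn, data_loader, threshold=0.01, n_points=10):
    """Return True if the training loss stays below `threshold` at n_points evenly
    spaced points on the segment between the parameters of model_a and model_b."""
    params_a = [p.detach().clone() for p in model_a.parameters()]
    params_b = [p.detach().clone() for p in model_b.parameters()]
    probe = copy.deepcopy(model_a)
    for alpha in torch.linspace(0.0, 1.0, n_points):
        with torch.no_grad():
            for p, pa, pb in zip(probe.parameters(), params_a, params_b):
                p.copy_((1.0 - alpha) * pa + alpha * pb)
            total, count = 0.0, 0
            for x, y in data_loader:
                total += loss_fn(probe(x), y).item() * x.size(0)
                count += x.size(0)
        if total / count > threshold:
            return False
    return True
```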
More experiments can be found in Appendix A.2. Further discussion of violation of this minimal valley assumption can be found in Appendix A.6.
2.1.2 THE HESSIAN-NOISE COVARIANCE ALIGNMENT
Empirical observations have found that the noise covariance matrix is aligned with the Hessian (Zhang et al., 2018; Zhu et al., 2019; Jastrzbski et al., 2018). In fact, it was shown in Hoffer et al. (2017) that
$$C_{i,j} = \frac{1}{S}\left(1-\frac{S}{M}\right)\frac{1}{M}\sum_{k=1}^{M}\partial_{\theta_i} l(x_k,\theta)\,\partial_{\theta_j} l(x_k,\theta)\,,$$
2The label-smoothing technique is used here to make a true minimum of the cost function exist. Regular cross entropy gives similar results.
3We use minimum here but specifically we mean a point with training loss less than a threshold.
where S is the mini-batch size and M is the total number of training samples. Considering negative log-likelihood loss functions, which is also the loss function in our experiments for classification, the loss L can be written as $L = -\frac{1}{M}\sum_{k=1}^{M}\log f(x_k,\theta)$, where f is the output of the network and $l(x_k,\theta) = -\log f(x_k,\theta)$. Notice that
$$\partial_{\theta_i}\partial_{\theta_j} L(\theta) = \frac{1}{M}\sum_{k=1}^{M}\partial_{\theta_i} l(x_k,\theta)\,\partial_{\theta_j} l(x_k,\theta) - \frac{1}{M}\sum_{k=1}^{M}\frac{1}{f(x_k,\theta)}\,\partial_{\theta_i}\partial_{\theta_j} f(x_k,\theta)\,, \qquad (4)$$
where the left-hand side is the positive Hessian matrix for i, j ≤ n and the first term on the righthand side is the positive matrix proportional to the noise covariance matrix. Empirically, negative eigenvalues are exponentially small in magnitude compared to major positive ones when a model is close to a minimum (Sagun et al., 2017b) and negative eigenvalues can only come from the contribution of the second term on the right-hand side of equation (4). Therefore unless the second term is extremely asymmetrical, we can assume that its contribution is small compared to the first term. We will return to this in Section 4.1 and show that eigenvalues with small magnitude arise from higher-order curvature in the degenerate space even when the Hessian with respect to the bottom of the valley is characterized by the positive λis for i = 1, ..., n or zero directions. Ignoring the second term on the right hand side, Equation 4 becomes
$$C_{i,j} = \frac{1}{S}\left(1-\frac{S}{M}\right) H_{i,j}\,. \qquad (5)$$
We emphasize that this assumption will only be valid after the early phase of training, not from initialization. Further discussion and experimental support can be found in Appendix A.4.
2.1.3 TIMESCALE SEPARATION
The final assumption is that the dynamics relaxes to a stationary state in the non-degenerate space θ̄ but not in the degenerate space. This assumption is because there is a timescale separation between the dynamics of θ̄, which relax quickly and the dynamics of θ̂, which evolve much more slowly as the minimal valley is traversed.
Considering a simplified optimization problem where parameter directions undergo independent Levy-driven Ornstein-Uhlenbeck processes4, as for a quadratic loss with noise, the decay to a stationary state is exponential with decay rate proportional to the corresponding eigenvalues (Abdelrazeq et al., 2014). Since the θ̄ correspond to the leading order eigenvalues, their relaxation time is exponentially shorter than the evolution in the degenerate space. Therefore it is reasonable to assume a stationary state in the non-degenerate space θ̄ when studying the degenerate θ̂ dynamics. Further discussion and experimental support of this assumption using fluctuation-dissipation relations can be found in Appendix A.5.
4Simsekli et al. (2019) proposes to analyze SGD as an SDE driven by a Levy motion.
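The curvature dependence of the relaxation described above can be illustrated with a one-dimensional noisy quadratic; this is a toy sketch with made-up curvatures and noise scale, not the Levy-driven analysis cited in the footnote.

```python
import numpy as np

eta, sigma, steps = 0.01, 0.1, 2000
rng = np.random.default_rng(0)

for lam in (10.0, 0.1):                        # sharp vs. nearly flat direction
    theta = np.full(5000, 1.0)                 # many independent runs from the same start
    for _ in range(steps):
        theta -= eta * (lam * theta + sigma * rng.standard_normal(theta.shape))
    # The mean offset decays as (1 - eta*lam)**steps: fast for large lam, slow for small lam.
    print(f"lambda={lam:5.1f}  empirical mean={theta.mean():+.3f}  "
          f"predicted={(1 - eta * lam) ** steps:+.3f}")
```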
2.2 PROOF OF THEOREM 1
Proof. The stochastic gradient descent update for θ̂ is θ̂t+1 = θ̂t − η∇̂LB(θt). If we consider the average evolution of the eigenvalues λi for i = 1, ..., n from step t to t + 1, we have
$$\begin{aligned}\langle[[\lambda_{i,t+1}-\lambda_{i,t}]]_{\mathrm{m.b.}}\rangle &= \langle[[\lambda_i(\hat\theta_{t+1})-\lambda_i(\hat\theta_t)]]_{\mathrm{m.b.}}\rangle = \langle[[\lambda_i(\hat\theta_t-\eta\hat\nabla L_B(\theta_t))-\lambda_i(\hat\theta_t)]]_{\mathrm{m.b.}}\rangle \\ &= -\eta\,\langle[[\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla L_B(\theta_t)]]_{\mathrm{m.b.}}\rangle + O(\eta^2) = -\eta\,\langle\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla L(\theta_t)\rangle + O(\eta^2)\,,\end{aligned} \qquad (6)$$
where we performed a Taylor expansion from the second to the third equality and the fourth equality holds from equation (2).
Notice that $\hat\nabla L(\theta) = \frac{1}{2}\sum_{i=1}^{n}\theta_i^2\,\hat\nabla\lambda_i(\hat\theta)$. By keeping the leading order term, we have
$$\langle[[\lambda_{i,t+1}-\lambda_{i,t}]]_{\mathrm{m.b.}}\rangle \simeq -\frac{\eta}{2}\sum_{j=1}^{n}\langle\theta_j^2\rangle\,\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla\lambda_j(\hat\theta_t)\,. \qquad (7)$$
Considering the observables O(θ̄) ≡ θ2i for i = 1, ..., n, the master equation (3) becomes
$$\langle\theta_i\,\partial_{\theta_i}L\rangle = \frac{\eta}{2}\,\langle\tilde C_{i,i}\rangle\,.$$
Notice that ∂θiL = θiλi(θ̂), we thus have
$$\langle\theta_i^2\rangle = \frac{\eta\,\langle\tilde C_{i,i}\rangle}{2\lambda_i(\hat\theta)}\,. \qquad (8)$$
By the definition of the two-point noise matrix C̃ and the noise covariance matrix C, we also have
$$\tilde C_{i,i} = C_{i,i} + \partial_{\theta_i}L(\theta)\,\partial_{\theta_i}L(\theta) = C_{i,i} + \theta_i^2\lambda_i^2(\hat\theta)\,. \qquad (9)$$
Based on the assumption that the noise covariance matrix is aligned with the Hessian and scales as $\frac{1}{S}\left(1-\frac{S}{M}\right)$, where S is the mini-batch size and M is the total number of training samples (Hoffer et al., 2017)5, we have
$$C_{i,i} = \frac{1}{S}\left(1-\frac{S}{M}\right)\lambda_i(\hat\theta)\,. \qquad (10)$$
Together with Eq. 8, 9 and 10, we have for i ≤ n,
$$\langle\theta_i^2\rangle = \frac{\eta}{S(2-\eta\lambda_i)}\left(1-\frac{S}{M}\right) = \frac{\eta}{2S}\left(1-\frac{S}{M}\right) + O(\eta^2)\,. \qquad (11)$$
Now Eq. 7 becomes
$$\langle[[\lambda_{i,t+1}-\lambda_{i,t}]]_{\mathrm{m.b.}}\rangle = -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\sum_{j=1}^{n}\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla\lambda_j(\hat\theta_t) + O(\eta^3)\,.$$
Next we consider the evolution of the trace of the Hessian $T=\sum_{i=1}^{n}\lambda_i$. We have
$$\langle[[T_{t+1}-T_t]]_{\mathrm{m.b.}}\rangle = -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\sum_{i,j=1}^{n}\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla\lambda_j(\hat\theta_t) + O(\eta^3) \approx -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\Big\|\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\Big\|^2 \le 0\,.$$
5Refer to Assumption 2.1.2 for details.
Thus we have shown that the trace of the Hessian decreases on average during SGD optimization. Notice that equality holds when either one of the following conditions is satisfied: first, S = M, so that SGD becomes full-batch GD; second, $\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta) = 0$, which implies ∇̂T(θ̂) = 0.
3 OPTIMIZATION DYNAMICS WITH NOISE BEYOND SGD
In Section 2, we concluded that, to leading order in the learning rate η, SGD dynamics reduces the Hessian trace. This result originates from the noise introduced by SGD with covariance given by (10). Interestingly, other forms of noise can be introduced into gradient descent in order to decrease other desired quantities instead of the trace. Here we present a theorem that relates the expected decrease of certain functions of the Hessian spectrum to a corresponding noise covariance. Here, 〈•〉 represents an average over the stationary state and [[•]] averages the quantity over the corresponding noise. Theorem 2. In a minimal valley with the loss approximation in Equation 1, assuming that there exists a stationary state in the non-degenerate space θ̄, for any quantity f(λ) satisfying $\partial f(\lambda)/\partial\lambda_i \ge 0$ for i = 1, ..., n, where $\lambda \equiv (\lambda_1, \ldots, \lambda_n)^{\mathsf T}$, if the loss is optimized with gradient descent along with external noise with diagonal covariance matrix
$$C_{i,i} = \lambda_i\,\frac{\partial f(\lambda)}{\partial\lambda_i}\,, \qquad (12)$$
we have 〈[[ft+1 − ft]]〉 ≤ 0 ,
where ft represents f(λ) evaluated at training step t. Equality holds when ∇̂f(λ(θ̂)) = 0.
Proof. Similar to the proof of Theorem 1, we have
$$\langle[[f_{t+1}-f_t]]\rangle = -\frac{\eta}{2}\sum_{i=1}^{n}\frac{\partial f}{\partial\lambda_i}\sum_{j=1}^{n}\langle\theta_j^2\rangle\,\hat\nabla\lambda_i^{\mathsf T}\hat\nabla\lambda_j + O(\eta^2)\,. \qquad (13)$$
With the imposed noise with covariance matrix (12), the master equation (3) and the relation between noise covariance matrix and two-point noise matrix in equation (9), we have
$$\langle\theta_i^2\rangle = \frac{\eta}{2-\eta\lambda_i}\,\frac{\partial f}{\partial\lambda_i} = \frac{\eta}{2}\,\frac{\partial f}{\partial\lambda_i} + O(\eta^2)\,. \qquad (14)$$
Plugging in Equation 13 we have
$$\langle[[f_{t+1}-f_t]]\rangle = -\frac{\eta^2}{4}\Big\|\sum_{i=1}^{n}\frac{\partial f}{\partial\lambda_i}\,\hat\nabla\lambda_i(\hat\theta)\Big\|^2 + O(\eta^3) \approx -\frac{\eta^2}{4}\,\big\|\hat\nabla f(\lambda(\hat\theta))\big\|^2 \le 0\,. \qquad (15)$$
Thus we have shown that f decreases on average during optimization. Equality holds when ∇̂f(λ(θ̂)) = 0. Notice that Theorem 1 is a special case with f(λ) = ∑n i=1 λi.
Furthermore, we have the following corollary: Corollary 2.1 (Determinant-Decreasing Noise). With the same assumptions as in Theorem 2, let $f(\lambda) = \log\prod_{i=1}^{n}\lambda_i$ and $C_{i,i} = C$, where C is an arbitrary positive constant; we have
$$\langle[[\mathrm{Det}_{t+1}-\mathrm{Det}_t]]\rangle \le 0\,,$$ where $\mathrm{Det} \equiv \prod_{i=1}^{n}\lambda_i$ is the non-degenerate determinant.
Corollary 2.1 indicates that the non-degenerate determinant will decrease on average during optimization if we introduce isotropic noise in the non-degenerate space. It is known (Smith & Le, 2018) that the Bayesian evidence, minimization of which minimizes a PAC-Bayes generalization bound, has a contribution from the non-degenerate determinant. This contribution is also called the Occam factor. Our result thus leads to a new algorithm where one introduces isotropic noise into the non-degenerate space. Theorem 2 will allow future work to add designed noise that applies an entropic force on other properties of the Hessian yet to be determined.
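A sketch of the resulting algorithm is shown below: full-batch gradient descent plus isotropic noise injected only into the top-k Hessian eigendirections, used here as a stand-in for the non-degenerate subspace. The dense-Hessian computation, the value of k, the learning rate, and the noise scale are assumptions for illustration and are only practical for very small models.

```python
import torch

def gd_step_with_subspace_noise(params, loss, k=10, lr=0.1, noise_scale=1e-2):
    """One full-batch GD step plus isotropic noise restricted to the top-k
    eigendirections of the Hessian (stand-in for the non-degenerate subspace)."""
    flat = torch.cat([p.reshape(-1) for p in params])
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])

    # Dense Hessian, row by row; only tractable for tiny models.
    hessian = torch.stack([
        torch.cat([h.reshape(-1)
                   for h in torch.autograd.grad(flat_grad[i], params, retain_graph=True)])
        for i in range(flat.numel())
    ]).detach()
    _, eigvecs = torch.linalg.eigh(hessian)
    top = eigvecs[:, -k:]                                 # non-degenerate directions
    noise = top @ (noise_scale * torch.randn(k))          # isotropic noise in that subspace

    new_flat = flat - lr * flat_grad.detach() + noise
    # Write the updated flat vector back into tensors shaped like the parameters.
    out, offset = [], 0
    for p in params:
        n = p.numel()
        out.append(new_flat[offset:offset + n].view_as(p).detach())
        offset += n
    return out
```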
4 EXTENDED ANALYSIS AND EXPERIMENTS
4.1 NEGATIVE EIGENVALUES OF THE HESSIAN
Empirically, small negative eigenvalues arise even with small training loss, which seems to indicate that the model is at a saddle point instead of a minimum (Sagun et al., 2017b; Ghorbani et al., 2019). This however is consistent with the loss in Equation 1. The loss is minimal when θi = 0 for i ≤ n. However, when we use a trained instance to estimate the eigenvalues of the Hessian, we only guarantee that θi is close to zero (and thus nonzero training loss) for i ≤ n. Therefore, there could exist negative eigenvalues which originate from the second derivative of θi for i > n. They are originally zero at the minimum with θi = 0 for i ≤ n, and their magnitude is suppressed by θ2i for i ≤ n which is related to the small training loss. In Appendix A.3, we predict that the negative eigenvalues on average are proportional to the training loss. To confirm this, we set up a simple experiment to classify MNIST data with a small training set of 500 randomly selected samples, with a single hidden layer fully-connected network with 6220 parameters and loss given by label-smoothed cross entropy, again to make a genuine minimum. We first obtain a minimum with gradient descent optimization at learning rate 0.1 and 10k training epochs and use this as initialization for later. The model is then further trained from this initialization with different batch sizes from 5 to 250 and learning rates from 0.001 to 0.1 in order to equilibrate to various training losses. Then the negative eigenvalues are calculated and the result is shown in Fig. 2(a). The result shows a linear relationship between training loss and the sum of negative eigenvalues, as predicted.
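The quantity tracked in Figure 2(a) can be computed directly when the model is small enough for a dense Hessian. The sketch below uses a tiny one-hidden-layer network on synthetic stand-in data (the input size and widths are chosen for illustration, not the 6220-parameter setup above) and sums the negative Hessian eigenvalues.

```python
import torch

def mlp_loss(theta, x, y, n_in=49, n_hidden=16, n_out=10):
    """Tiny one-hidden-layer net written as an explicit function of a flat parameter
    vector, so the dense Hessian can be taken directly with autograd."""
    i = 0
    w1 = theta[i:i + n_in * n_hidden].view(n_hidden, n_in); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    w2 = theta[i:i + n_hidden * n_out].view(n_out, n_hidden); i += n_hidden * n_out
    b2 = theta[i:i + n_out]
    logits = torch.relu(x @ w1.T + b1) @ w2.T + b2
    return torch.nn.functional.cross_entropy(logits, y, label_smoothing=0.1)

n_params = 49 * 16 + 16 + 16 * 10 + 10
theta = 0.1 * torch.randn(n_params)
x, y = torch.randn(500, 49), torch.randint(0, 10, (500,))   # stand-in for a small data subset

h = torch.autograd.functional.hessian(lambda t: mlp_loss(t, x, y), theta)
eigvals = torch.linalg.eigvalsh(0.5 * (h + h.T))             # symmetrize for numerical safety
print("sum of negative eigenvalues:", eigvals[eigvals < 0].sum().item())
```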
4.2 THE EVOLUTION OF TRACE AND DETERMINANT WITH A TOY MODEL
In this section, we use two toy models to test our previous theoretical analysis. In Section 2, we showed that the trace evolution rate is proportional to η2. Notice that the two factors of η have different origins. One is from the expansion of λ and the gradient descent step as in Eq. 6. The other contribution is the equilibrium statistics in the non-degenerate directions as in Eq. 8. Furthermore, the coefficient of the proportionality relation also depends on ∇̂λ. Therefore to test this prediction for how learning rate affects the trace evolution, we train a model multiple times with different ηs. When doing this, the initialization for each run is in equilibrium in the non-degenerate space, and we choose a loss function for which ∇̂λ is constant. We design a simple four-dimensional toy model with the loss L(θ) = |θ3 + θ4|θ21 + |θ3 + 2θ4|θ22 to test this effect. We initialize the model to make sure that |θ3 + θ4| and |θ3 + 2θ4| don’t change sign during a few iterations of training. For each η, we first train for 1000 iterations with SGD noise to equilibrate the model and calculate the trace. Then the model is trained for another 1000 iterations and we calculate the trace evolution. The result is shown in Fig 2(b), which shows a linear relation as predicted.
Next, we measure the evolution of the Hessian trace and determinant after introducing noise with different covariance structure. Our theory predicts that SGD noise decreases the trace while isotropic noise in the non-degenerate subspace decreases the determinant. To test this, we design a toy model where the behavior of these two quantities is anti-correlated. The loss can be written as
$$L(\theta) = \big(4 - 2e^{-\theta_3^2-\theta_4^2}\big)\,\theta_1^2 + e^{-(\theta_3-\theta_4)^2}\,\theta_2^2\,.$$
We train the model with the same initialization and with the two different noise structures. The result is shown in Fig. 2(c)-(d), consistent with our analysis.
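This second toy experiment can be reproduced in a few lines. The sketch below compares Hessian-aligned (SGD-like) noise with isotropic noise restricted to the non-degenerate directions (θ1, θ2) on the loss above; the step count, step size, noise scale, and initialization are illustrative choices rather than the exact settings used for Fig. 2(c)-(d).

```python
import torch

def loss_fn(t):
    return (4 - 2 * torch.exp(-t[2]**2 - t[3]**2)) * t[0]**2 + torch.exp(-(t[2] - t[3])**2) * t[1]**2

def curvatures(t):
    """Hessian eigenvalues of the two non-degenerate directions (theta1, theta2) at the valley bottom."""
    return torch.stack([2 * (4 - 2 * torch.exp(-t[2]**2 - t[3]**2)),
                        2 * torch.exp(-(t[2] - t[3])**2)]).detach()

def run(noise_mode, steps=20000, eta=0.01, scale=0.02, seed=0):
    torch.manual_seed(seed)
    theta = torch.tensor([0.1, 0.1, 0.2, -0.2], requires_grad=True)
    for _ in range(steps):
        grad, = torch.autograd.grad(loss_fn(theta), theta)
        lam = curvatures(theta)
        noise = torch.zeros(4)
        if noise_mode == "sgd_like":           # noise covariance aligned with the Hessian
            noise[:2] = scale * torch.sqrt(lam) * torch.randn(2)
        else:                                   # isotropic noise in the non-degenerate subspace
            noise[:2] = scale * torch.randn(2)
        with torch.no_grad():
            theta -= eta * grad + noise
    lam = curvatures(theta)
    return lam.sum().item(), lam.prod().item()  # trace and determinant of the 2x2 non-degenerate Hessian

for mode in ("sgd_like", "isotropic"):
    trace, det = run(mode)
    print(mode, "trace:", round(trace, 3), "det:", round(det, 3))
```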
4.3 TRACE EVOLUTION ALONG PROJECTED PATH
In Section 2, we presented two experiments to support the existence of a minimal valley. Recall that the projected paths follow along the bottom of the valley associated with the SGD dynamics. Our theory predicts that the Hessian trace along the projected paths should decrease during optimization. To confirm this, we use method in Bai et al. (1996) to estimate the Hessian trace and the results are shown in Figure 3, in which the values and error bars are calculated by the mean and standard deviation of 10 different initializations. As seen in the figure, the Hessian trace indeed decreases on average in both cases, which is consistent with our theoretical prediction. Notice that test performance does not monotonically improve during optimization. Indeed the trace continues decreasing slightly even when validation loss goes up, indicating that the trace itself is not sufficient to describe generalization.
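For larger models, the Hessian trace can be estimated without forming the Hessian, using Hessian-vector products with random probe vectors in the spirit of the estimator of Bai et al. (1996); the sketch below is a Hutchinson-style version with an illustrative number of probes, and assumes a scalar loss already evaluated on a fixed batch.

```python
import torch

def estimate_hessian_trace(loss, params, n_probes=50):
    """Hutchinson estimator: E[v^T H v] = Tr(H) for Rademacher probe vectors v."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimates = []
    for _ in range(n_probes):
        vs = [torch.randint(0, 2, g.shape, device=g.device).float() * 2 - 1 for g in grads]
        hv = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        estimates.append(sum((v * h).sum() for v, h in zip(vs, hv)).item())
    return sum(estimates) / len(estimates)
```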
5 CONCLUSION
In this paper, we showed that within a minimal valley, the trace of the loss function Hessian decreases on average. We also demonstrated how to design the covariance structure of added noise during optimization to affect other properties of the Hessian’s evolution, opening the door to new algorithms. The analysis in this paper can be extended and generalized to models trained on much larger data sets.
A APPENDIX
A.1 PROOF OF LEMMA 1
To study the dynamics of SGD, we investigate the model behavior starting from θt to θt+1 by one-step training and assume that θt+1 is close to θt. In this subsection, we derive the property of the loss function in a small neighbourhood, B(θt), of θt, which includes θt+1. We consider the model in a minimal valley with a set M of minima and define a projection P that maps any point θ in B(θt) to the minimum in M that is closest to θ, i.e.
$$\mathcal P(\theta) \equiv \operatorname*{argmin}_{\theta^*\in\mathcal M}\ \|\theta-\theta^*\|^2\,. \qquad (16)$$
Because M is connected by the definition of a minimal valley, we assume that P is also continuous in B(θt). We denote the dimensions of the whole parameter space and of M by N and N − n, respectively. We can then make a quadratic approximation to the loss function in B(θt),
$$L(\theta) = L^* + \frac{1}{2}\,(\theta-\mathcal P(\theta))^{\mathsf T} H(\mathcal P(\theta))\,(\theta-\mathcal P(\theta))\,, \qquad (17)$$
where we expand the loss function at the minimum that is closest to θ, and H(P(θ)) is the Hessian depending on the position P(θ) in the valley. L∗ is a constant by definition of a minimal valley and will be ignored in the following derivations.
We define a minimal path to be a smooth path in a minimal valley where every point on the path is a minimum. For any path passing through a minimum θ∗, we have the tangent line of the minimal path at θ∗ being in the null space of the Hessian at θ∗ by definition. Precisely, for an arbitrary minimal path θ∗(µ) : [0, 1]→M, we have
$$H(\theta^*(\mu))\,\frac{d\theta^*(\mu)}{d\mu} = 0\,. \qquad (18)$$
Next we have the following lemma,
Lemma 2. Let J(θ) be the N × N Jacobian matrix of the function P at θ, defined as $J_{ij} = \frac{d\mathcal P(\theta)_i}{d\theta_j}$. We have
H(P(θ))J(θ) = 0 . (19)
Proof. Notice that θ∗(µ) ≡ P(θ + µ∆θ) forms a minimal path through θ where µ ∈ [0, 1] and ∆θ is an arbitrary direction in the parameter space. We have
$$\left.\frac{d\theta^*(\mu)}{d\mu}\right|_{\mu=0} = J(\theta)\,\Delta\theta\,. \qquad (20)$$
By Eq. 18, we have H(P(θ))J(θ)∆θ = 0 . (21)
Since ∆θ is arbitrary, we have Eq. 19.
Next we consider the diagonalization of the Hessian as H(P(θ)) = V (P(θ))TS(P(θ))V (P(θ)) , (22)
where S(P(θ)) = diag(λ1(P(θ)), ..., λn(P(θ)), 0, ..., 0) and λi > 0 for 1 ≤ i ≤ n. As suggested by Gur-Ari et al. (2018), we assume that the eigenspace change is negligible in the small neighbourhood B(θt) and denote constant orthogonal matrix V ≡ V (P(θt)) ≈ V (P(θ)). We then perform the coordinate transformation
φ = V θ , (23) and the loss function becomes
$$L(\phi) = \frac{1}{2}\sum_{i=1}^{n}\big(\phi_i - \tilde{\mathcal P}(\phi)_i\big)^2\,\lambda_i(\tilde{\mathcal P}(\phi))\,, \qquad (24)$$
where P̃(φ) is the same projection map as P(θ) but in the coordinate of φ. Notice here that P̃(φ) is the projection to the minimum with the least L-2 distance to φ, because the coordinate transformation is orthogonal and distance preserved. We have
P̃(φ) = V P(θ) . (25) We have the following theorem, Proposition 3. P̃(φ)i is constant for i ≤ n.
Proof. In φ-coordinate, by Eq. 23 and 25 the Jacobian of P̃(φ), J̃(φ)ij ≡ dP̃(φ)idφj , is related to J(θ) as J̃(φ) = V J(θ)V T . (26) Therefore Lemma 2 becomes S(P(θ))J̃(φ) = 0 . (27) Notice that [S(P(θ))J̃(φ)]ij = ∑n k=1 λkδikJ̃(φ)kj where δik is the Kronecker delta function, we have [S(P(θ))J̃(φ)]ij = λiJ̃(φ)ij , i ≤ n . (28) From Eq. 27 and λi > 0, we have J̃(φ)ij = 0 , i ≤ n . (29)
Since $\tilde J(\phi)_{ij} \equiv \frac{d\tilde{\mathcal P}(\phi)_i}{d\phi_j} = 0$ for i ≤ n and all j, we have that $\tilde{\mathcal P}(\phi)_i$ is constant for i ≤ n.
We denote $\tilde{\mathcal P}(\phi)_i$ as $\phi^*_i$ for i ≤ n. Therefore we can define a global translation vector $\phi^* = (\phi^*_1, \ldots, \phi^*_n, 0, \ldots, 0)^{\mathsf T}$, consider a new set of coordinates ψ = φ − φ∗, and have
$$L(\psi) = \frac{1}{2}\sum_{i=1}^{n}\psi_i^2\,\lambda_i(\bar{\mathcal P}(\psi))\,, \qquad (30)$$
where P̄(ψ) is also the projection to the minimum with the least L-2 distance to ψ. Clearly we have the set of minima of the loss function 30 as {ψ|ψi = 0, i ≤ n} and the closest minimum to ψ in the L-2 distance is ψ∗ = (0, ..., 0, ψn+1, ..., ψN )T. Therefore we have
P̄(ψ) = (0, ..., 0, ψn+1, ..., ψN )T , (31) and
$$L(\psi) = \frac{1}{2}\sum_{i=1}^{n}\psi_i^2\,\lambda_i(\psi_{n+1},\ldots,\psi_N)\,, \qquad (32)$$
Notice that the coordinate transformations from θ to ψ involve only an orthogonal rotation and a translation, so the SGD update rule is preserved. To see this, we first have
$$\nabla_\theta = V^{\mathsf T}\nabla_\phi = V^{\mathsf T}\nabla_\psi\,. \qquad (33)$$
And the SGD on θ becomes
$$\theta_{t+1} = \theta_t - \eta V^{\mathsf T}\nabla_\psi L(\psi) \;\Rightarrow\; \phi_{t+1} = \phi_t - \eta\nabla_\psi L(\psi) \;\Rightarrow\; \psi_{t+1} = \psi_t - \eta\nabla_\psi L(\psi)\,. \qquad (34)$$
In the main text, we replace the notation ψ with θ for notational simplicity.
A.2 ADDITIONAL EXPERIMENTS
To further support our assumption of connected minima, we conducted extra experiments including DenseNet (Huang et al., 2017) on CIFAR100, and explored the landscape by introducing randomized labels on a subset of the training data to initialize subsequent training on purely clean data (details in Section A.2.2).
A.2.1 DENSENET ON CIFAR100
We trained a DenseNet with batch normalization (number of blocks = [6,12,24,16] and growth rate=12) on CIFAR100 and followed the same pipeline as described in the main text experiments. The learning rate is 0.1 and the batch size is 512. The result is shown in Figure 4(a) which supports the connectedness assumption as well as the result of the Hessian trace decreasing under SGD.
A.2.2 4-LAYER CONVNET ON MNIST AND RESNET18 ON CIFAR10 WITH RANDOMIZED LABELS
The setup of these experiments is as follows: we randomly split the training set into two sets, A and B. For MNIST and CIFAR10 we let the number of samples in A be 40000. Then we shuffle the labels in set B. For the rest of this section, we call A the training set and B the shuffle set. The loss minimum, Hessian trace, etc. are all with respect to the training set A.
First we will obtain minima with worse generalization than the minima found by regular SGD. To do so, we train the model with the union of the training set A and shuffle set B until convergence. Then we further train the model with only A using gradient descent so the model reaches a minimum. This minimum corresponds to the left end point of the path in Figure 4 (b) and (c).
We then train the model from this minimum with training set A alone using SGD. We find that the model starts to diffuse and validation loss becomes smaller even though it is at minimum already. Using the same method described in main text, we project this path to minimal and the results are shown in Figure 4 (b) and (c).
In both cases, we found that the minima found by SGD are connected, and we further estimated the trace along both paths and found that it is decreasing on average. These results support the connectedness and degeneracy assumptions and our result that SGD decreases the Hessian trace on average. They also illustrate how SGD helps the model explore the loss landscape and escape bad minima.
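A minimal sketch of the split-and-shuffle data preparation described above is given below for CIFAR10 with torchvision; the 40000/10000 split follows the text, while the seed and the within-B label shuffling are illustrative assumptions.

```python
import numpy as np
from torchvision import datasets

train = datasets.CIFAR10(root="./data", train=True, download=True)
rng = np.random.default_rng(0)
perm = rng.permutation(len(train))                  # 50000 training images
idx_a, idx_b = perm[:40000], perm[40000:]           # clean set A and shuffle set B

targets = np.asarray(train.targets)
targets[idx_b] = targets[rng.permutation(idx_b)]    # shuffle labels within B only
train.targets = targets.tolist()
```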
A.3 THE SUM OF NEGATIVE EIGENVALUES
In fact, if we consider an equilibrium of $\theta_i$ for $i \le n$, we have $\langle L\rangle = L^* + C\sum_{i=1}^{n}\lambda_i$ and
$\langle \partial_j\partial_k L\rangle = C\sum_{i=1}^{n}\partial_j\partial_k\lambda_i$ for $j,k>n$ from Eq. 11, where $C=\frac{\eta}{4S}\left(1-\frac{S}{M}\right)$. Thus, as we change the learning rate or batch size, and hence $C$, we have $\langle\partial_j\partial_k L\rangle \propto \langle L\rangle - L^*$ for states nearby in the minimal valley, because both quantities are proportional to $C$ while $\sum_{i=1}^{n}\lambda_i$ and $\partial_j\partial_k\lambda_i$ are approximately constant for such states. Usually in deep neural networks the training loss can go to nearly zero with a large number of training iterations and a small learning rate (Zhang et al., 2017; Du et al., 2019), so we ignore $L^*$ and have that $\langle\partial_j\partial_k L\rangle \propto \langle L\rangle$. This result is tested and verified in Figure 2(a).
A.4 EXPERIMENTS ON THE HESSIAN-NOISE COVARIANCE ALIGNMENT ASSUMPTION
To further support our assumption of the Hessian-noise covariance alignment, we trained 2-layer neural networks on the CIFAR10 dataset with label-smoothed cross-entropy loss and mean-squared loss (MSE) and calculated the cosine similarity between the Hessian of the loss (H) and the noise covariance (C).
The cosine similarity between two matrices A and B is defined as
$$\mathrm{CS}(A,B) = \frac{\operatorname{Tr}(AB^{\mathsf T})}{\sqrt{\operatorname{Tr}(AA^{\mathsf T})\operatorname{Tr}(BB^{\mathsf T})}}\,, \qquad (35)$$
where Tr is the trace operator. It is easy to show that A and B are aligned as A = kB, k > 0 if and only if CS(A,B) = 1.
To make the calculation of the Hessian and covariance tractable, the networks were chosen to have one hidden layer with 70 nodes, and the dataset is transformed into grey scale, resized to 7× 7 pixels, and normalized. The network with label-smoothed cross-entropy loss is trained with SGD with learning rate 0.001 and batch size 512, and the network with mean-squared loss is trained with SGD with learning rate 0.1 and batch size 512.
As shown in Figure 5, in both cases the cosine similarity between the Hessian and noise covariance is larger than 0.9 after several epochs of initial training. Notice that two random matrices with the same size as H and C in these experiments are almost orthogonal with the cosine similarity of 0. These experiments are consistent with our assumption of Hessian-noise covariance alignment–after the early phase of training–in the main text.
A.5 EXPERIMENT ON THE TIMESCALE SEPARATION ASSUMPTION
In this section, we provide support for our assumption of timescale separation through experiments with a small convolutional neural network trained on CIFAR10. In order to observe the relaxation to steady-state, we adopt the method in Yaida (2019), which shows that
$$\langle \theta\cdot\nabla L\rangle = \frac{\eta(1+\mu)}{2(1-\nu)}\,\langle v^2\rangle \qquad \text{(FDR)}$$
is satisfied when the model is at stationarity, where µ and ν are the momentum and dampening in SGD optimization, respectively, and v is the velocity. It is easy to show that
$$\langle \mathcal{P}(\theta)\cdot \mathcal{P}(\nabla L)\rangle = \frac{\eta(1+\mu)}{2(1-\nu)}\,\langle \mathcal{P}(v)^2\rangle\,, \qquad \text{(FDR')}$$
when there exists a stationary state in a subspace S and P is the projection operator to S. We use OL and OR to denote the quantities of the left and right side respectively of FDR and FDR’. Therefore the quantity |ŌL/ŌR − 1| can be used as a criterion to verify how close the model is to a stationary state in a specific subspace.
To show the existence of timescale separation, we train a small CNN on CIFAR10. This network has 3 convolutional layers with kernel size of 3 and output channel number of 15, 20, 10 respectively. Throughout training, we estimate |ŌL/ŌR − 1| on the raw parameter space and top-1 eigenspace of the Hessian. The result is shown in Figure 6.
Figure 6 shows that the criterion in the top-1 eigenspace decays much faster than in the raw parameter space. After several epochs, |ŌL/ŌR − 1| in the top-1 subspace is already less than $10^{-2}$, the threshold set in Yaida (2019) to indicate sufficient closeness to a stationary state, while the state in the whole parameter space is still far from equilibrium after 50 epochs of training. This experiment fully supports our assumption of timescale separation in the main text.
A.6 EXTENSION TO NON-DEGENERATE LANDSCAPE
This section explores the situation where the minimal valley assumption is slightly violated, in which case the bottom of the landscape is not completely degenerate but has a small slope. The loss function 1 can then be written as
$$L(\theta) = L^* + \frac{1}{2}\sum_{i=1}^{n}\theta_i^2\,\lambda_i(\theta_{n+1},\ldots,\theta_N) + g(\theta_{n+1},\ldots,\theta_N)\,, \qquad (36)$$
where the function g describes how the bottom of the valley varies in the previously degenerate directions. Following the same proof as in Section 2.2, it is easy to obtain
$$\langle[[T_{t+1}-T_t]]_{\mathrm{m.b.}}\rangle \approx -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\Big\|\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\Big\|^2 - \eta\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)^{\mathsf T}\hat\nabla g(\hat\theta)\,. \qquad (37)$$
At the end of training, the model reaches equilibrium, where
〈[[Tt+1 − Tt]]m.b.〉 = 0 .
Thus, from Equation 37 we have
$$\hat\nabla g(\hat\theta)^{\mathsf T}\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta) = -\frac{\eta}{4S}\left(1-\frac{S}{M}\right)\Big\|\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\Big\|^2 \qquad (38)$$
This reveals explicitly the effect of SGD noise as an entropic force during training. Gradient descent and its variants drive the model toward states with θ̄ = 0 and ∇̂g(θ̂) = 0 according to the energy function 36; this exerts an energetic force moving the model toward the lowest energy (smallest loss). When training with noise, however, ∇̂g(θ̂) = 0 may not be satisfied when the locations with the smallest trace and those with ∇̂g(θ̂) = 0 do not coincide, because the right-hand side of Equation 38 is nonzero whenever $\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\neq 0$. That is, the model is pushed away from ∇̂g(θ̂) = 0 by an entropic competitor to the energetic force.
We use a toy model to visualize this effect. Consider the loss function $L(\theta_1,\theta_2) = \frac{1}{2}(\theta_2^2+1)\theta_1^2 + \alpha\theta_2$. We train this function by GD and SGD with $\alpha = 10^{-4}$, learning rate $\eta = 0.01$, noise strength $\frac{1}{S}-\frac{1}{M} = 1$ and initialization $(\theta_1,\theta_2) = (0.1, 0.1)$. The result is shown in Figure 7. The left column corresponds to training with gradient descent, where θ1 drops to zero very fast and θ2 continues decreasing toward negative infinity. When noise is introduced, θ2 stops decreasing once the entropic and energetic forces reach equilibrium. From Equation 38 it is easy to show that 〈θ2〉 = −0.02, which is consistent with the right column of Figure 7. | 1. What is the main contribution of the paper regarding SGD's ability to decrease the trace of the Hessian of the loss?
2. What are the concerns regarding the theoretical part of the paper, specifically with regards to assumptions and lack of proper backing up of claims?
3. How does the reviewer assess the experimental results in the paper, and what kind of empirical evidence would they like to see to support the claims?
4. Are there any specific references or citations that the reviewer would like the authors to include to strengthen their arguments?
5. What is the significance of the topic addressed in the paper in machine learning, and how does it relate to other works in the field? | Review | Review
This paper proves that under certain conditions, SGD on average decreases the
trace of the Hessian of the loss. The paper is overall clear and the topic addressed in the paper is relevant in machine learning but I don’t think this paper is ready for publication. There are numerous assumptions that are made in the paper, but not properly stated or backed up. In my view, the experimental results are also not sufficient to compensate for the shortcomings of the theoretical part.
Lemma 1
This is a corollary of Morse lemma, which is also used in Dauphin et al: https://arxiv.org/pdf/1406.2572.pdf (see Equation 1). I don’t see why you would need to re-derive it in your paper and most importantly this should be clearly stated in your paper.
Page 2: existence stationary state?
What are the conditions for the existence of a stationary state? Is bounded noise sufficient?
Equation 4 and local approximation
Regarding the standard decomposition of the Hessian in eq 4 (sometimes referred to as Gauss-Newton decomposition), the discussion is not precise. The authors claim “Empirically, negative eigenvalues are exponentially small in magnitude compared to major positive ones when a model is close to a minimum“
But clearly from the formula itself, one needs to differentiate between minima with low and high function values. For local minima that are high in the energy landscape, the second term might have a large function value. It of course all **depends on the size of the approximation region**. This is extremely important and it is not properly characterized in the paper. The authors simply claim that the loss can be locally approximated while I think these assumptions should be clearly stated. Note that similar types of analyses are usually done in dynamical systems where the behavior of a system is linearized around a critical point. In some cases, one can however characterize the size of a basin of attraction, see e.g. the book Nonlinear Systems (3rd Edition): Hassan K. Khalil.
Assumption on timescale separation
This section makes numerous claims that are not properly backed up.
1) “This assumption is because there is a timescale separation between
the dynamics of \theta_bar, which relax quickly and the dynamics of \theta_hat, which evolve much more slowly as the minimal valley is traversed.“
Are you claiming the absolute value of the eigenvalue of the non-degenerate space are much smaller than the degenerate space? Why would that be so?
2) Ornstein-Uhlenbeck: OU processes require the noise to be Brownian motion. This assumption needs to be clearly stated. Note that there is actually evidence that the noise of SGD is heavy-tailed:
Simsekli, Umut, Levent Sagun, and Mert Gurbuzbalaban. "A tail-index analysis of stochastic gradient noise in deep neural networks." arXiv preprint arXiv:1901.06053 (2019).
Take away
I am unsure what the added value of this paper is. Second-order methods can already be shown to decrease the maximum eigenvalue, see e.g. convergence results derived in Cubic regularization of Newton method and its global performance by Nesterov and Polyak. These methods have also been analyzed in stochastic settings where similar convergence results hold as well. What particular insight do we gain from the results of Theorem 2? The authors claim this could “potentially improve generalization” but this is not justified and no reference is cited.
“This indicates that the trace itself is not sufficient to describe generalization (Neyshabur et al., 2017).“
I do not see what aspect of Neyshabur et al. justifies your claim, please explain.
Experiments
All the experiments performed in the paper are on very small models. Given the rather strong assumptions made in the paper, I feel that the paper should provide stronger empirical evidence to back up their claims. |
ICLR | Title
How noise affects the Hessian spectrum in overparameterized neural networks
Abstract
Stochastic gradient descent (SGD) forms the core optimization method for deep neural networks. While some theoretical progress has been made, it still remains unclear why SGD leads the learning dynamics in overparameterized networks to solutions that generalize well. Here we show that for overparameterized networks with a degenerate valley in their loss landscape, SGD on average decreases the trace of the Hessian of the loss. We also generalize this result to other noise structures and show that isotropic noise in the non-degenerate subspace of the Hessian decreases its determinant. In addition to explaining SGDs role in sculpting the Hessian spectrum, this opens the door to new optimization approaches that may confer better generalization performance. We test our results with experiments on toy models and deep neural networks.
1 INTRODUCTION
Deep neural networks have achieved remarkable success in the past decade on tasks that were out of reach prior to the era of deep learning. Yet fundamental questions remain regarding the strong performance of over-parameterized models and optimization schemes that typically involve only first-order information, such as stochastic gradient descent (SGD) and its variants.
Regarding generalization, it has been noted that flat minima with small curvature tend to generalize better than sharp minima (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017). This has been argued by nonvacuous PAC-Bayes bounds (Dziugaite & Roy, 2017) and Bayesian evidence (Smith & Le, 2018). But why does SGD bias learning towards flat minima? One possible explanation was proposed by Zhang et al. (2018) in terms of energy-entropy competition. Jastrzbski et al. (2018) also suggests that isotropic noise in SGD helps networks escape sharp minima with large Hessian determinant. However, previous theoretical analyses assume that minima are isolated and non-singular. In contrast, Sagun et al. (2017b) finds that most of the eigenvalues of the Hessian of the loss function at a minimum are close to zero, indicating highly degenerate minima. The degeneracy is further supported by results on mode connectedness (Garipov et al., 2018; Draxler et al., 2018). Furthermore, it is shown that minima found from different initializations and the same optimization scheme are connected with essentially no barriers between them. Nguyen (2019) also shows theoretically that for a class of deep overparameterized neural nets with piecewise linear activation functions, all of the global minima are connected within a unique and potentially very large global valley.
In this paper, we prove that for models whose loss has a ”minimal valley” structure, defined below, optimization via SGD decreases the trace of the Hessian of the loss, by utilizing recent fluctuation-dissipation relations (Yaida, 2019). Furthermore, we derive the noise covariance matrix that would result in the reduction of other potentially desired quantities such as the Hessian determinant, leading towards the design of new optimization algorithms. We present experiments on toy models and deep neural networks to confirm our predictions.
2 MAIN THEOREM
In this section, we present our main theorem - how noise during optimization affects the Hessian of the loss function when the loss landscape locally takes the shape of a degenerate valley. More specifically, let N and n be the dimension of the total parameter space and non-degenerate space, respectively, and
consider a general loss function L(w) where w ∈ R^N are the model parameters.1 Since the curvature around the minima can vary, we approximate the loss function in such a valley around a degenerate minimum with a modified quadratic form
$$L(w) = L^* + \tfrac{1}{2}\,(w-\mathcal P(w))^{\mathsf T} H(\mathcal P(w))\,(w-\mathcal P(w))\,,$$
where P is a function that projects a point w in parameter space to the nearest minimum, and H is the Hessian, a function of the location of the projected minimum. We have the following lemma:
Lemma 1. For an arbitrary point w and its neighborhood in the valley, there exists an orthogonal transformation Q and a translation vector v such that the loss function in the new coordinate system θ = Qw − v has the following form,
$$L(\theta) = L^* + \frac{1}{2}\sum_{i=1}^{n}\theta_i^2\,\lambda_i(\theta_{n+1},\ldots,\theta_N)\,, \qquad (1)$$
where λis are the positive eigenvalues of the loss function Hessian for the non-degenerate space and depend on the position in the degenerate space. Also, note that the gradient descent equation is invariant under this transformation.
A detailed, constructive proof of this lemma can be found in Appendix A.1. Notice that this is a generalization of Morse’s Lemma (Callahan, 2010). In the rest of this section, we denote the nondegenerate and degenerate subspaces by θ̄ = (θ1, ..., θn)T and θ̂ = (θn+1, ..., θN )T, respectively. Similarly, the gradients of θ̄ and θ̂ are denoted by ∇̄ and ∇̂, respectively. Notice that at a minimum where θ̄ = 0, the λis are the only nonzero Hessian eigenvalues.
Next we provide a quick review of the relevant fluctuation-dissipation relation formalism for stochastic gradient descent (Yaida, 2019). We denote the loss for a random batchB of training sample as LB(θ). Clearly we have [[∇LB(θ)]]m.b. = ∇L(θ) , (2) where [[•]]m.b. represents the average over all mini-batch realizations. If there exists a stationary state for θ̄, pss(θ̄) and the stationary-average is defined as 〈O(θ̄)〉 ≡ ∫ dθ̄pss(θ̄)O(θ̄) where O(θ̄) is any observable of θ̄, it is straightforward to derive the following master equation:
〈O(θ̄)〉 = 〈[[O[θ̄ − η∇̄LB(θ)]]]m.b.〉 . (3)
We denote the two-point noise matrix as C̃i,j(θ) ≡ [[∂θiLB(θ)∂θjLB(θ)]]m.b. and the noise covariance matrix Ci,j(θ) ≡ C̃i,j(θ)− ∂θiL(θ)∂θjL(θ). Now we present our main theorem: Theorem 1. When the loss can be locally approximated as in Equation 1, assuming that the nondegenerate space θ̄ is in a stationary state at time t and that the noise covariance matrix is aligned with the Hessian, we have 〈[[Tt+1 − Tt]]m.b.〉 ≤ 0 , where T = ∑n i=1 λi is the trace of the Hessian and Tt represents the trace at training step t. Equality holds when∇LB(θ) = ∇L(θ) or ∇̂T (θ̂) = 0.
Theorem 1 indicates that the change of the Hessian trace during SGD optimization is non-positive on average. A trivial condition for equality is optimization with gradient descent (GD) instead of its stochastic variant because the stationary state for noiseless optimization is a constant state with vanishing gradient. An alternate condition for equality is that the trace function of θ̄ reaches a minimum or saddle point.
Before proving Theorem 1, we will first analyze the assumptions made. We emphasize at the outset that we expect our assumptions to be valid only after an early phase of training corresponding to ∼ 10 epochs in our experiments.
2.1 EVIDENCE FOR THE ASSUMPTIONS MADE
In this subsection, we will analyze the assumptions in Theorem 1 and present some experiments that support their validity.
1Here the non-degenerate space corresponds to the subspace of the Hessian with non-zero eigenvalues. Note that we do not require any specific relationship between n and N , e.g. n N .
2.1.1 MINIMAL VALLEY
Our theory relies on the existence of a degenerate valley in the loss function defined previously via Equation 1. In addition to degeneracy, we furthermore assume that there exists a region of connected degenerate minima in the landscape of the loss function, in particular for overparameterized models. SGD or its variants then selects one of the degenerate solutions. Such degeneracy has been observed empirically where most of the eigenvalues of the loss Hessian from various models and tasks are zero or close to zero (Sagun et al., 2017a;b). Furthermore, the connectedness is further supported by results of mode connectedness (Garipov et al., 2018; Draxler et al., 2018). Nguyen (2019) also shows theoretically that for a class of deep overparameterized neural nets with piecewise linear activation functions, all of the global minima are connected within a unique and potentially very large global valley.
Here we present additional experiments to support the existence of such minimal valleys, i.e. basins of attraction of connected, degenerate minima that can be locally described by Equation 1. These experiments will be also used to analyze and confirm our theory later. We consider classification of CIFAR10 with label-smoothing cross entropy (Szegedy et al., 2016)2, constant learning rate of 0.1, momentum 0.9, batch size 500 and total training epochs 150. The network architectures are Preact-ResNet18 (He et al., 2016) and VGG16 (Simonyan & Zisserman, 2015). The training and validation loss are shown in Figure 1 (inset). After 150 training epochs, the networks fluctuates with small training loss, and the minimal training loss for both case is ∼ 0.005. In order to confirm the existence of minimal valleys, we would like to project each point along the training path to the corresponding closest minimum 3 and check whether the projected path crosses any barriers. To determine whether two minima can be connected by a line segment in a minimal valley implying no barriers in between, we use line interpolation to connect these two minima and measure the training loss along the interpolating line. In practice, we call it line connectedness if we sample evenly 10 points in the line segment between consecutive minima and the training loss is smaller than a threshold for all 10 points.
To project a state to a closest minimum, the model is trained from that state with the same hyperparameters except using GD instead of SGD to remove noise. The stopping criterion for these auxiliary training tasks is that the training loss is less than the minimal training loss, which is ∼ 0.005. Ideally, we would project states after each iteration and check for line connectedness, but this involves large unnecessary calculations. In practice, if state B is obtained from n training iterations after state A, and A and B are not line connected, we insert C which is obtained from n/2 training iterations after A and check for line connectedness between A and C and between C and B. This process stops when there is no extra iteration between A and B or they are line connected. Starting from each pair of consecutive epochs as A and B, we obtain the projected path recursively. The training and validation losses on these two projected paths are shown in Figure 1 (main plot). The left path has 2700 projected points and the right has 4020. We see that after a few epochs, the training loss along the path remains close to zero (and hence minimal), which means that there exists a least one minimal valley connected to the model state found by regular SGD. Also SGD optimization happens within such a valley after the first few epochs. Furthermore, the validation loss varies along the valley.
More experiments can be found in Appendix A.2. Further discussion of violation of this minimal valley assumption can be found in Appendix A.6.
2.1.2 THE HESSIAN-NOISE COVARIANCE ALIGNMENT
Empirical observations have found that the noise covariance matrix is aligned with the Hessian (Zhang et al., 2018; Zhu et al., 2019; Jastrzbski et al., 2018). In fact, it was shown in Hoffer et al. (2017) that
$$C_{i,j} = \frac{1}{S}\left(1-\frac{S}{M}\right)\frac{1}{M}\sum_{k=1}^{M}\partial_{\theta_i} l(x_k,\theta)\,\partial_{\theta_j} l(x_k,\theta)\,,$$
2The label-smoothing technique is used here to make a true minimum of the cost function exist. Regular cross entropy gives similar results.
3We use minimum here but specifically we mean a point with training loss less than a threshold.
where S is the mini-batch size and M is the total number of training samples. Considering negative log-likelihood loss functions, which is also the loss function in our classification experiments, the loss can be written as $L = -\frac{1}{M}\sum_{k=1}^{M}\log f(x_k,\theta)$, where f is the output of the network and $l(x_k,\theta) = -\log f(x_k,\theta)$. Notice that
$$\partial_{\theta_i}\partial_{\theta_j}L(\theta) = \frac{1}{M}\sum_{k=1}^{M}\partial_{\theta_i} l(x_k,\theta)\,\partial_{\theta_j} l(x_k,\theta) - \frac{1}{M}\sum_{k=1}^{M}\frac{1}{f(x_k,\theta)}\,\partial_{\theta_i}\partial_{\theta_j}f(x_k,\theta)\,, \tag{4}$$
where the left-hand side is the (positive) Hessian matrix for i, j ≤ n and the first term on the right-hand side is a positive matrix proportional to the noise covariance matrix. Empirically, negative eigenvalues are exponentially small in magnitude compared to the dominant positive ones when a model is close to a minimum (Sagun et al., 2017b), and negative eigenvalues can only come from the contribution of the second term on the right-hand side of Equation 4. Therefore, unless the second term is extremely asymmetrical, we can assume that its contribution is small compared to the first term. We will return to this in Section 4.1 and show that eigenvalues of small magnitude arise from higher-order curvature in the degenerate space, even though the Hessian at the bottom of the valley is characterized by the positive λ_i for i = 1, ..., n and the zero directions. Ignoring the second term on the right-hand side, Equation 4 becomes
$$C_{i,j} = \frac{1}{S}\left(1-\frac{S}{M}\right)H_{i,j}\,. \tag{5}$$
We emphasize that this assumption will only be valid after the early phase of training, not from initialization. Further discussion and experimental support can be found in Appendix A.4.
2.1.3 TIMESCALE SEPARATION
The final assumption is that the dynamics relax to a stationary state in the non-degenerate space θ̄ but not in the degenerate space. This is justified by a timescale separation between the dynamics of θ̄, which relax quickly, and the dynamics of θ̂, which evolve much more slowly as the minimal valley is traversed.
Considering a simplified optimization problem where parameter directions undergo independent Levy-driven Ornstein-Uhlenbeck processes⁴, as for a quadratic loss with noise, the decay to a stationary state is exponential with decay rate proportional to the corresponding eigenvalues (Abdelrazeq et al., 2014). Since the θ̄ correspond to the leading-order eigenvalues, their relaxation time is exponentially shorter than the evolution in the degenerate space. Therefore it is reasonable to assume a stationary state in the non-degenerate space θ̄ when studying the dynamics of the degenerate θ̂. Further discussion and experimental support of this assumption using fluctuation-dissipation relations can be found in Appendix A.5.
4Simsekli et al. (2019) proposes to analyze SGD as an SDE driven by a Levy motion.
2.2 PROOF OF THEOREM 1
Proof. The stochastic gradient descent update for θ̂ is $\hat\theta_{t+1} = \hat\theta_t - \eta\hat\nabla L_B(\theta_t)$. If we consider the average evolution of the eigenvalues λ_i for i = 1, ..., n from step t to t + 1, we have
$$\begin{aligned}
\langle[[\lambda_{i,t+1} - \lambda_{i,t}]]_{\mathrm{m.b.}}\rangle &= \langle[[\lambda_i(\hat\theta_{t+1}) - \lambda_i(\hat\theta_t)]]_{\mathrm{m.b.}}\rangle = \langle[[\lambda_i(\hat\theta_t - \eta\hat\nabla L_B(\theta_t)) - \lambda_i(\hat\theta_t)]]_{\mathrm{m.b.}}\rangle \\
&= -\eta\,\langle[[\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla L_B(\theta_t)]]_{\mathrm{m.b.}}\rangle + O(\eta^2) = -\eta\,\langle\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla L(\theta_t)\rangle + O(\eta^2)\,,
\end{aligned} \tag{6}$$
where we performed a Taylor expansion from the second to the third equality and the fourth equality holds from equation (2).
Notice that $\hat\nabla L(\theta) = \frac{1}{2}\sum_{i=1}^{n}\theta_i^2\,\hat\nabla\lambda_i(\hat\theta)$. Keeping the leading-order term, we have
$$\langle[[\lambda_{i,t+1} - \lambda_{i,t}]]_{\mathrm{m.b.}}\rangle \simeq -\frac{\eta}{2}\sum_{j=1}^{n}\langle\theta_j^2\rangle\,\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla\lambda_j(\hat\theta_t)\,. \tag{7}$$
Considering the observables $O(\bar\theta) \equiv \theta_i^2$ for i = 1, ..., n, the master equation (3) becomes
$$\langle\theta_i\,\partial_{\theta_i}L\rangle = \frac{\eta}{2}\langle\tilde C_{i,i}\rangle\,.$$
Noticing that $\partial_{\theta_i}L = \theta_i\lambda_i(\hat\theta)$, we thus have
$$\langle\theta_i^2\rangle = \frac{\eta\,\langle\tilde C_{i,i}\rangle}{2\lambda_i(\hat\theta)}\,. \tag{8}$$
By the definition of the two-point noise matrix C̃ and the noise covariance matrix C, we also have
$$\tilde C_{i,i} = C_{i,i} + \partial_{\theta_i}L(\theta)\,\partial_{\theta_i}L(\theta) = C_{i,i} + \theta_i^2\lambda_i^2(\hat\theta)\,. \tag{9}$$
Based on the assumption that the noise covariance matrix is aligned with the Hessian and scales as $\frac{1}{S}\left(1-\frac{S}{M}\right)$, where S is the mini-batch size and M is the total number of training samples (Hoffer et al., 2017)⁵, we have
$$C_{i,i} = \frac{1}{S}\left(1-\frac{S}{M}\right)\lambda_i(\hat\theta)\,. \tag{10}$$
Together with Eq. 8, 9 and 10, we have for i ≤ n,
$$\langle\theta_i^2\rangle = \frac{\eta}{S(2-\eta\lambda_i)}\left(1-\frac{S}{M}\right) = \frac{\eta}{2S}\left(1-\frac{S}{M}\right) + O(\eta^2)\,. \tag{11}$$
Now Eq. 7 becomes
$$\langle[[\lambda_{i,t+1} - \lambda_{i,t}]]_{\mathrm{m.b.}}\rangle = -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\sum_{j=1}^{n}\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla\lambda_j(\hat\theta_t) + O(\eta^3)\,.$$
Next we consider the evolution of the trace of the Hessian, $T = \sum_{i=1}^{n}\lambda_i$. We have
$$\begin{aligned}
\langle[[T_{t+1} - T_t]]_{\mathrm{m.b.}}\rangle &= -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\sum_{i,j=1}^{n}\hat\nabla\lambda_i(\hat\theta_t)^{\mathsf T}\hat\nabla\lambda_j(\hat\theta_t) + O(\eta^3) \\
&\approx -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\Big\|\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\Big\|^2 \\
&\le 0\,.
\end{aligned}$$
⁵Refer to Assumption 2.1.2 for details.
Thus we have shown that the trace of the Hessian decreases on average during SGD optimization. Equality holds when either of the following conditions is satisfied: first, S = M, so that SGD becomes full-batch GD; second, $\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta) = 0$, which implies $\hat\nabla T(\hat\theta) = 0$.
3 OPTIMIZATION DYNAMICS WITH NOISE BEYOND SGD
In Section 2, we concluded that, to leading order in the learning rate η, the SGD dynamics reduce the Hessian trace. This result originates from the noise introduced by SGD, with covariance given by (10). Interestingly, other forms of noise can be introduced into gradient descent in order to decrease other desired quantities instead of the trace. Here we present a theorem that relates the expected decrease of certain functions of the Hessian spectrum to a corresponding noise covariance. As before, 〈•〉 represents an average over the stationary state and [[•]] averages the quantity over the corresponding noise.

Theorem 2. In a minimal valley with the loss approximation of Equation 1, assuming that there exists a stationary state in the non-degenerate space θ̄, for any quantity f(λ) satisfying $\partial f(\lambda)/\partial\lambda_i \ge 0$ for i = 1, ..., n, where $\lambda \equiv (\lambda_1, ..., \lambda_n)^{\mathsf T}$, if the loss is optimized with gradient descent along with external noise with diagonal covariance matrix
$$C_{i,i} = \lambda_i\,\frac{\partial f(\lambda)}{\partial\lambda_i}\,, \tag{12}$$
we have
$$\langle[[f_{t+1} - f_t]]\rangle \le 0\,,$$
where ft represents f(λ) evaluated at training step t. Equality holds when ∇̂f(λ(θ̂)) = 0.
Proof. Similar to the proof of Theorem 1, we have
$$\langle[[f_{t+1} - f_t]]\rangle = -\frac{\eta}{2}\sum_{i=1}^{n}\frac{\partial f}{\partial\lambda_i}\sum_{j=1}^{n}\langle\theta_j^2\rangle\,\hat\nabla\lambda_i^{\mathsf T}\hat\nabla\lambda_j + O(\eta^2)\,. \tag{13}$$
With the imposed noise with covariance matrix (12), the master equation (3), and the relation between the noise covariance matrix and the two-point noise matrix in equation (9), we have
$$\langle\theta_i^2\rangle = \frac{\eta}{2-\eta\lambda_i}\,\frac{\partial f}{\partial\lambda_i} = \frac{\eta}{2}\,\frac{\partial f}{\partial\lambda_i} + O(\eta^2)\,. \tag{14}$$
Plugging this into Equation 13, we have
$$\begin{aligned}
\langle[[f_{t+1} - f_t]]\rangle &= -\frac{\eta^2}{4}\Big\|\sum_{i=1}^{n}\frac{\partial f}{\partial\lambda_i}\hat\nabla\lambda_i(\hat\theta)\Big\|^2 + O(\eta^3) \approx -\frac{\eta^2}{4}\big\|\hat\nabla f(\lambda(\hat\theta))\big\|^2 \\
&\le 0\,.
\end{aligned} \tag{15}$$
Thus we have shown that f decreases on average during optimization. Equality holds when $\hat\nabla f(\lambda(\hat\theta)) = 0$. Notice that Theorem 1 is the special case $f(\lambda) = \sum_{i=1}^{n}\lambda_i$.
Furthermore, we have the following corollary.

Corollary 2.1 (Determinant-Decreasing Noise). Under the same assumptions as Theorem 2, let $f(\lambda) = \log\prod_{i=1}^{n}\lambda_i$ and $C_{i,i} = C$, where C is an arbitrary positive constant. Then
$$\langle[[\mathrm{Det}_{t+1} - \mathrm{Det}_t]]\rangle \le 0\,,$$
where $\mathrm{Det} \equiv \prod_{i=1}^{n}\lambda_i$ is the non-degenerate determinant.
Corollary 2.1 indicates that the non-degenerate determinant will decrease on average during optimization if we introduce isotropic noise in the non-degenerate space. It is known (Smith & Le, 2018) that the Bayesian evidence, minimization of which minimizes a PAC-Bayes generalization bound, has a contribution from the non-degenerate determinant. This contribution is also called the Occam factor. Our result thus leads to a new algorithm where one introduces isotropic noise into the non-degenerate space. Theorem 2 will allow future work to add designed noise that applies an entropic force on other properties of the Hessian yet to be determined.
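As a sketch of the algorithm suggested by Corollary 2.1, one can inject isotropic noise restricted to the non-degenerate subspace (spanned by the top Hessian eigenvectors) at each gradient step. The helper names and the way the eigenvectors are obtained are assumptions for illustration, not part of the original text; in practice the top eigenvectors could be estimated with a few steps of power iteration or Lanczos on Hessian-vector products.

```python
import torch

def gd_step_with_subspace_noise(theta, grad, top_eigvecs, eta, noise_var):
    """One gradient step with isotropic noise of variance noise_var injected only in
    the subspace spanned by the columns of top_eigvecs (the non-degenerate directions)."""
    xi = noise_var ** 0.5 * torch.randn(top_eigvecs.shape[1])
    noise = top_eigvecs @ xi          # lift the low-dimensional noise back to parameter space
    return theta - eta * (grad + noise)
```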
4 EXTENDED ANALYSIS AND EXPERIMENTS
4.1 NEGATIVE EIGENVALUES OF THE HESSIAN
Empirically, small negative eigenvalues arise even with small training loss, which seems to indicate that the model is at a saddle point instead of a minimum (Sagun et al., 2017b; Ghorbani et al., 2019). This, however, is consistent with the loss in Equation 1. The loss is minimal when θ_i = 0 for i ≤ n. However, when we use a trained instance to estimate the eigenvalues of the Hessian, we only guarantee that θ_i is close to zero (and thus the training loss is nonzero) for i ≤ n. Therefore, there can exist negative eigenvalues originating from the second derivatives with respect to θ_i for i > n. They are exactly zero at a minimum with θ_i = 0 for i ≤ n, and their magnitude is suppressed by $\theta_i^2$ for i ≤ n, which is related to the small training loss. In Appendix A.3, we predict that the negative eigenvalues on average are proportional to the training loss. To confirm this, we set up a simple experiment classifying MNIST with a small training set of 500 randomly selected samples, a single-hidden-layer fully-connected network with 6220 parameters, and a label-smoothed cross-entropy loss, again to make a genuine minimum exist. We first obtain a minimum with gradient descent at learning rate 0.1 for 10k training epochs and use this as the initialization for later runs. The model is then further trained from this initialization with different batch sizes from 5 to 250 and learning rates from 0.001 to 0.1 in order to equilibrate at various training losses. Then the negative eigenvalues are calculated and the result is shown in Fig. 2(a). The result shows a linear relationship between the training loss and the sum of negative eigenvalues, as predicted.
4.2 THE EVOLUTION OF TRACE AND DETERMINANT WITH A TOY MODEL
In this section, we use two toy models to test our previous theoretical analysis. In Section 2, we showed that the trace evolution rate is proportional to η². Notice that the two factors of η have different origins. One is from the expansion of λ and the gradient descent step as in Eq. 6. The other contribution is the equilibrium statistics in the non-degenerate directions as in Eq. 8. Furthermore, the coefficient of the proportionality relation also depends on ∇̂λ. Therefore, to test this prediction for how the learning rate affects the trace evolution, we train a model multiple times with different values of η. When doing this, the initialization for each run is in equilibrium in the non-degenerate space, and we choose a loss function for which ∇̂λ is constant. We design a simple four-dimensional toy model with the loss $L(\theta) = |\theta_3+\theta_4|\,\theta_1^2 + |\theta_3+2\theta_4|\,\theta_2^2$ to test this effect. We initialize the model so that $|\theta_3+\theta_4|$ and $|\theta_3+2\theta_4|$ do not change sign during a few iterations of training. For each η, we first train for 1000 iterations with SGD noise to equilibrate the model and calculate the trace. Then the model is trained for another 1000 iterations and we calculate the trace evolution. The result is shown in Fig. 2(b), which shows a linear relation as predicted.
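A minimal numerical sketch of this toy experiment is given below. The noise is drawn with per-coordinate variance proportional to the curvature of the non-degenerate directions, mimicking Eq. 10, with a constant c playing the role of (1/S)(1 − S/M); the specific values of η, c, the step count, and the initialization are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(t):
    """Gradient of L = |t3 + t4| t1^2 + |t3 + 2 t4| t2^2."""
    t1, t2, t3, t4 = t
    s1, s2 = np.sign(t3 + t4), np.sign(t3 + 2 * t4)
    return np.array([
        2 * abs(t3 + t4) * t1,
        2 * abs(t3 + 2 * t4) * t2,
        s1 * t1 ** 2 + s2 * t2 ** 2,
        s1 * t1 ** 2 + 2 * s2 * t2 ** 2,
    ])

def nondegenerate_trace(t):
    """Trace of the non-degenerate Hessian block: 2|t3 + t4| + 2|t3 + 2 t4|."""
    return 2 * abs(t[2] + t[3]) + 2 * abs(t[2] + 2 * t[3])

def sgd_like_step(t, eta=0.05, c=0.5):
    """Gradient step plus noise whose covariance is proportional to the curvature
    of the non-degenerate directions (theta_1, theta_2), as in Eq. 10."""
    lam = np.array([2 * abs(t[2] + t[3]), 2 * abs(t[2] + 2 * t[3]), 0.0, 0.0])
    noise = np.sqrt(c * lam) * rng.standard_normal(4)
    return t - eta * (grad(t) + noise)

theta = np.array([0.1, 0.1, 1.0, 1.5])
traces = []
for _ in range(500):
    theta = sgd_like_step(theta)
    traces.append(nondegenerate_trace(theta))
# The recorded trace drifts downward on average, in line with the predicted eta^2 scaling.
```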
Next, we measure the evolution of the Hessian trace and determinant after introducing noise with different covariance structure. Our theory predicts that SGD noise decreases the trace while isotropic noise in the non-degenerate subspace decreases the determinant. To test this, we design a toy model where the behavior of these two quantities is anti-correlated. The loss can be written as $L(\theta) = \left(4 - 2e^{-\theta_3^2-\theta_4^2}\right)\theta_1^2 + e^{-(\theta_3-\theta_4)^2}\theta_2^2$. We train the model with the same initialization and with the two different noise structures. The result is shown in Fig. 2(c)-(d), consistent with our analysis.
4.3 TRACE EVOLUTION ALONG PROJECTED PATH
In Section 2, we presented two experiments to support the existence of a minimal valley. Recall that the projected paths follow along the bottom of the valley associated with the SGD dynamics. Our theory predicts that the Hessian trace along the projected paths should decrease during optimization. To confirm this, we use the method in Bai et al. (1996) to estimate the Hessian trace; the results are shown in Figure 3, in which the values and error bars are calculated from the mean and standard deviation over 10 different initializations. As seen in the figure, the Hessian trace indeed decreases on average in both cases, which is consistent with our theoretical prediction. Notice that test performance does not monotonically improve during optimization. Indeed, the trace continues decreasing slightly even when the validation loss goes up, indicating that the trace by itself is not sufficient to describe generalization.
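A standard way to estimate the Hessian trace at this scale uses random probe vectors and Hessian-vector products. The sketch below is illustrative of this kind of randomized estimator (it assumes a scalar training loss computed on a subsample of the training set) rather than the exact routine used in the paper.

```python
import torch

def estimate_hessian_trace(loss, params, n_probes=10):
    """Randomized trace estimate: E[v^T H v] over Rademacher vectors v equals tr(H)."""
    params = [p for p in params if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = 0.0
    for _ in range(n_probes):
        vs = [torch.randint_like(g, high=2) * 2.0 - 1.0 for g in grads]   # Rademacher +-1 probes
        grad_v = sum((g * v).sum() for g, v in zip(grads, vs))
        hvps = torch.autograd.grad(grad_v, params, retain_graph=True)     # Hessian-vector products
        estimate += sum((h * v).sum() for h, v in zip(hvps, vs)).item()
    return estimate / n_probes
```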
5 CONCLUSION
In this paper, we showed that within a minimal valley, the trace of the loss function Hessian decreases on average. We also demonstrated how to design the covariance structure of added noise during optimization to affect other properties of the Hessian’s evolution, opening the door to new algorithms. The analysis in this paper can be extended and generalized to models trained on much larger data sets.
A APPENDIX
A.1 PROOF OF LEMMA 1
To study the dynamics of SGD, we investigate the model behavior from θ_t to θ_{t+1} under a single training step and assume that θ_{t+1} is close to θ_t. In this subsection, we derive the properties of the loss function in a small neighbourhood, B(θ_t), of θ_t, which includes θ_{t+1}. We consider the model in a minimal valley with a set M of minima and define a projection P that maps any point θ in B(θ_t) to the minimum in M that is closest to θ, i.e.,
$$\mathcal{P}(\theta) \equiv \operatorname*{argmin}_{\theta^*\in\mathcal{M}}\|\theta - \theta^*\|_2\,. \tag{16}$$
Because M is connected by the definition of a minimal valley, we assume that P is also continuous in B(θ_t). We denote the dimensions of the whole parameter space and of M by N and N − n, respectively. We can then make a quadratic approximation to the loss function in B(θ_t),
$$L(\theta) = L^* + \frac{1}{2}(\theta - \mathcal{P}(\theta))^{\mathsf T} H(\mathcal{P}(\theta))\,(\theta - \mathcal{P}(\theta))\,, \tag{17}$$
where we expand the loss function at the minimum that is closest to θ, and H(P(θ)) is the Hessian depending on the position P(θ) in the valley. L∗ is a constant by definition of a minimal valley and will be ignored in the following derivations.
We define a minimal path to be a smooth path in a minimal valley where every point on the path is a minimum. For any path passing through a minimum θ∗, we have the tangent line of the minimal path at θ∗ being in the null space of the Hessian at θ∗ by definition. Precisely, for an arbitrary minimal path θ∗(µ) : [0, 1]→M, we have
$$H(\theta^*(\mu))\,\frac{d\theta^*(\mu)}{d\mu} = 0\,. \tag{18}$$
Next we have the following lemma,
Lemma 2. Let J(θ) be the N × N Jacobian matrix of the map P at θ, defined as $J_{ij} = \frac{d\mathcal{P}(\theta)_i}{d\theta_j}$. We have
$$H(\mathcal{P}(\theta))\,J(\theta) = 0\,. \tag{19}$$
Proof. Notice that $\theta^*(\mu) \equiv \mathcal{P}(\theta + \mu\Delta\theta)$ forms a minimal path through θ, where µ ∈ [0, 1] and ∆θ is an arbitrary direction in the parameter space. We have
$$\frac{d\theta^*(\mu)}{d\mu}\Big|_{\mu=0} = J(\theta)\,\Delta\theta\,. \tag{20}$$
By Eq. 18, we have
$$H(\mathcal{P}(\theta))\,J(\theta)\,\Delta\theta = 0\,. \tag{21}$$
Since ∆θ is arbitrary, we have Eq. 19.
Next we consider the diagonalization of the Hessian,
$$H(\mathcal{P}(\theta)) = V(\mathcal{P}(\theta))^{\mathsf T} S(\mathcal{P}(\theta))\,V(\mathcal{P}(\theta))\,, \tag{22}$$
where $S(\mathcal{P}(\theta)) = \mathrm{diag}(\lambda_1(\mathcal{P}(\theta)), ..., \lambda_n(\mathcal{P}(\theta)), 0, ..., 0)$ and λ_i > 0 for 1 ≤ i ≤ n. As suggested by Gur-Ari et al. (2018), we assume that the change of the eigenspace is negligible in the small neighbourhood B(θ_t) and denote the constant orthogonal matrix $V \equiv V(\mathcal{P}(\theta_t)) \approx V(\mathcal{P}(\theta))$. We then perform the coordinate transformation
$$\varphi = V\theta\,, \tag{23}$$
and the loss function becomes
$$L(\varphi) = \frac{1}{2}\sum_{i=1}^{n}\left(\varphi_i - \tilde{\mathcal{P}}(\varphi)_i\right)^2\lambda_i(\tilde{\mathcal{P}}(\varphi))\,, \tag{24}$$
where $\tilde{\mathcal{P}}(\varphi)$ is the same projection map as P(θ) but in the φ coordinates. Notice that $\tilde{\mathcal{P}}(\varphi)$ is still the projection to the minimum with the least L-2 distance to φ, because the coordinate transformation is orthogonal and therefore distance-preserving. We have
$$\tilde{\mathcal{P}}(\varphi) = V\,\mathcal{P}(\theta)\,. \tag{25}$$
We have the following result.

Proposition 3. $\tilde{\mathcal{P}}(\varphi)_i$ is constant for i ≤ n.
Proof. In the φ coordinates, by Eqs. 23 and 25, the Jacobian of $\tilde{\mathcal{P}}(\varphi)$, $\tilde J(\varphi)_{ij} \equiv \frac{d\tilde{\mathcal{P}}(\varphi)_i}{d\varphi_j}$, is related to J(θ) by
$$\tilde J(\varphi) = V J(\theta) V^{\mathsf T}\,. \tag{26}$$
Therefore Lemma 2 becomes
$$S(\mathcal{P}(\theta))\,\tilde J(\varphi) = 0\,. \tag{27}$$
Noticing that $[S(\mathcal{P}(\theta))\tilde J(\varphi)]_{ij} = \sum_{k=1}^{n}\lambda_k\delta_{ik}\tilde J(\varphi)_{kj}$, where $\delta_{ik}$ is the Kronecker delta, we have
$$[S(\mathcal{P}(\theta))\tilde J(\varphi)]_{ij} = \lambda_i\tilde J(\varphi)_{ij}\,, \quad i \le n\,. \tag{28}$$
From Eq. 27 and λ_i > 0, we have
$$\tilde J(\varphi)_{ij} = 0\,, \quad i \le n\,. \tag{29}$$
Since $\tilde J(\varphi)_{ij} \equiv \frac{d\tilde{\mathcal{P}}(\varphi)_i}{d\varphi_j} = 0$ for i ≤ n and all j, it follows that $\tilde{\mathcal{P}}(\varphi)_i$ is constant for i ≤ n.
We denote $\tilde{\mathcal{P}}(\varphi)_i$ by $\varphi_i^*$ for i ≤ n. We can therefore define a global translation vector $\varphi^* = (\varphi_1^*, ..., \varphi_n^*, 0, ..., 0)^{\mathsf T}$, consider the new coordinates $\psi = \varphi - \varphi^*$, and obtain
$$L(\psi) = \frac{1}{2}\sum_{i=1}^{n}\psi_i^2\,\lambda_i(\bar{\mathcal{P}}(\psi))\,, \tag{30}$$
where $\bar{\mathcal{P}}(\psi)$ is again the projection to the minimum with the least L-2 distance to ψ. Clearly the set of minima of the loss function (30) is $\{\psi \mid \psi_i = 0,\ i \le n\}$, and the closest minimum to ψ in the L-2 distance is $\psi^* = (0, ..., 0, \psi_{n+1}, ..., \psi_N)^{\mathsf T}$. Therefore we have
$$\bar{\mathcal{P}}(\psi) = (0, ..., 0, \psi_{n+1}, ..., \psi_N)^{\mathsf T}\,, \tag{31}$$
and
$$L(\psi) = \frac{1}{2}\sum_{i=1}^{n}\psi_i^2\,\lambda_i(\psi_{n+1}, ..., \psi_N)\,. \tag{32}$$
Since the coordinate transformations from θ to ψ only involve an orthogonal rotation and a translation, the SGD update rule keeps the same form. To see this, we first have
$$\nabla_\theta = V^{\mathsf T}\nabla_\varphi = V^{\mathsf T}\nabla_\psi\,. \tag{33}$$
The SGD update on θ then becomes
$$\theta_{t+1} = \theta_t - \eta V^{\mathsf T}\nabla_\psi L(\psi) \;\Rightarrow\; \varphi_{t+1} = \varphi_t - \eta\nabla_\psi L(\psi) \;\Rightarrow\; \psi_{t+1} = \psi_t - \eta\nabla_\psi L(\psi)\,. \tag{34}$$
In the main text, we replace the notation ψ with θ for symbol simplicity.
A.2 ADDITIONAL EXPERIMENTS
To further support our assumption of connected minima, we conducted extra experiments including DenseNet (Huang et al., 2017) on CIFAR100, and explored the landscape by introducing randomized labels on a subset of the training data to initialize subsequent training on purely clean data (details in Section A.2.2).
A.2.1 DENSENET ON CIFAR100
We trained a DenseNet with batch normalization (number of blocks = [6,12,24,16] and growth rate=12) on CIFAR100 and followed the same pipeline as described in the main text experiments. The learning rate is 0.1 and the batch size is 512. The result is shown in Figure 4(a) which supports the connectedness assumption as well as the result of the Hessian trace decreasing under SGD.
A.2.2 4-LAYER CONVNET ON MNIST AND RESNET18 ON CIFAR10 WITH RANDOMIZED LABELS
The setup of these experiments is as follows: we randomly split the training set into two sets, A and B. For MNIST and CIFAR10 we let the number of samples in A be 40000. Then we shuffle the labels in set B. For the rest of this section, we call A the training set and B the shuffle set. The loss minimum, Hessian trace, etc. are all with respect to the training set A.
First we will obtain minima with worse generalization than the minima found by regular SGD. To do so, we train the model with the union of the training set A and shuffle set B until convergence. Then we further train the model with only A using gradient descent so the model reaches a minimum. This minimum corresponds to the left end point of the path in Figure 4 (b) and (c).
We then train the model from this minimum with training set A alone using SGD. We find that the model starts to diffuse and the validation loss becomes smaller even though the model is already at a minimum. Using the same method described in the main text, we project this path onto the minima, and the results are shown in Figure 4 (b) and (c).
In both cases, we found that the minima found by SGD are connected, and we further estimated the trace along both paths and found that it decreases on average. These results support the connectedness and degeneracy assumptions and our result that SGD decreases the Hessian trace on average. They also illustrate how SGD helps the model explore the loss landscape and escape bad minima.
A.3 THE SUM OF NEGATIVE EIGENVALUES
In fact, if we consider an equilibrium of θ_i for i ≤ n, from Eq. 11 we have $\langle L\rangle = L^* + C\sum_{i=1}^{n}\lambda_i$ and $\langle\partial_j\partial_k L\rangle = C\sum_{i=1}^{n}\partial_j\partial_k\lambda_i$ for j, k > n, where $C = \frac{\eta}{4S}\left(1-\frac{S}{M}\right)$. Thus, as we change the learning rate or batch size, and hence C, we have $\langle\partial_j\partial_k L\rangle \propto \langle L\rangle - L^*$ for nearby states in the minimal valley, because both are proportional to C while $\sum_{i=1}^{n}\lambda_i$ and $\partial_j\partial_k\lambda_i$ are approximately constant for nearby states in the minimal valley. Usually in deep neural networks the training loss can go to nearly zero with a large number of training iterations and a small learning rate (Zhang et al., 2017; Du et al., 2019), so we ignore L* and obtain $\langle\partial_j\partial_k L\rangle \propto \langle L\rangle$. This result is tested and verified in Figure 2(a).
A.4 EXPERIMENTS ON THE HESSIAN-NOISE COVARIANCE ALIGNMENT ASSUMPTION
To further support our assumption of the Hessian-noise covariance alignment, we trained 2-layer neural networks on the CIFAR10 dataset with label-smoothed cross-entropy loss and mean-squared loss (MSE) and calculated the cosine similarity between the Hessian of the loss (H) and the noise covariance (C).
The cosine similarity between two matrices A and B is defined as
$$\mathrm{CS}(A,B) = \frac{\operatorname{Tr}(AB^{\mathsf T})}{\sqrt{\operatorname{Tr}(AA^{\mathsf T})\operatorname{Tr}(BB^{\mathsf T})}}\,, \tag{35}$$
where Tr is the trace operator. It is easy to show that A and B are aligned as A = kB, k > 0 if and only if CS(A,B) = 1.
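Once estimates of H and C are available, this quantity is straightforward to compute; a direct sketch:

```python
import torch

def matrix_cosine_similarity(A, B):
    """CS(A, B) = Tr(A B^T) / sqrt(Tr(A A^T) Tr(B B^T)); equals 1 iff A = kB with k > 0."""
    num = torch.trace(A @ B.T)
    den = torch.sqrt(torch.trace(A @ A.T) * torch.trace(B @ B.T))
    return num / den
```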
In order to make calculation of the Hessian and covariance tractable, the networks were chosen to have one hidden layer with 70 nodes, and the dataset is transformed into grey scale, resized to 7×7 pixels, and normalized. The network with label-smoothed cross-entropy loss is trained with SGD with learning rate 0.001 and batch size 512, and the network with mean-squared loss is trained with SGD with learning rate 0.1 and batch size 512.
As shown in Figure 5, in both cases the cosine similarity between the Hessian and the noise covariance is larger than 0.9 after several epochs of initial training. Notice that two random matrices of the same size as H and C in these experiments are almost orthogonal, with a cosine similarity of 0. These experiments are consistent with our assumption in the main text that the Hessian and noise covariance are aligned after the early phase of training.
A.5 EXPERIMENT ON THE TIMESCALE SEPARATION ASSUMPTION
In this section, we provide support for our assumption of timescale separation through experiments with a small convolutional neural network trained on CIFAR10. In order to observe the relaxation to steady-state, we adopt the method in Yaida (2019), which shows that
$$\langle\theta\cdot\nabla L\rangle = \frac{\eta(1+\mu)}{2(1-\nu)}\langle v^2\rangle \qquad \text{(FDR)}$$
is satisfied when the model is at stationarity, where µ and ν are the momentum and dampening in SGD optimization, respectively, and v is the velocity. It is easy to show that
$$\langle\mathcal{P}(\theta)\cdot\mathcal{P}(\nabla L)\rangle = \frac{\eta(1+\mu)}{2(1-\nu)}\langle\mathcal{P}(v)^2\rangle\,, \qquad \text{(FDR')}$$
when there exists a stationary state in a subspace S and P is the projection operator onto S. We use O_L and O_R to denote the quantities on the left- and right-hand sides of FDR and FDR', respectively. The quantity |ŌL/ŌR − 1| can therefore be used as a criterion to verify how close the model is to a stationary state in a specific subspace.
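A sketch of how this criterion can be tracked during training is given below; the running averages and the projection onto the chosen subspace (e.g., the top Hessian eigenvector) are illustrative assumptions about implementation details, not code from the paper. Inputs are flattened parameter, gradient, and velocity vectors recorded at each SGD step.

```python
import torch

class FDRMonitor:
    """Accumulate running averages of O_L = P(theta).P(grad L) and
    O_R = eta(1+mu)/(2(1-nu)) * |P(v)|^2, then report |O_L / O_R - 1|."""
    def __init__(self, eta, mu=0.9, nu=0.0):
        self.coef = eta * (1 + mu) / (2 * (1 - nu))
        self.sum_left, self.sum_right = 0.0, 0.0

    def update(self, theta, grad, velocity, project=lambda x: x):
        p_theta, p_grad, p_v = project(theta), project(grad), project(velocity)
        self.sum_left += torch.dot(p_theta, p_grad).item()
        self.sum_right += self.coef * torch.dot(p_v, p_v).item()

    def criterion(self):
        return abs(self.sum_left / self.sum_right - 1.0)
```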
To show the existence of timescale separation, we train a small CNN on CIFAR10. This network has 3 convolutional layers with kernel size of 3 and output channel number of 15, 20, 10 respectively. Throughout training, we estimate |ŌL/ŌR − 1| on the raw parameter space and top-1 eigenspace of the Hessian. The result is shown in Figure 6.
Figure 6 shows that the criterion in the top-1 eigenspace decays much faster than in the raw parameter space. After several epochs, |ŌL/ŌR − 1| in the top-1 subspace is already less than 10⁻², the threshold set in Yaida (2019) to indicate sufficient closeness to a stationary state, while the state in the whole parameter space is still far from equilibrium after 50 epochs of training. This experiment fully supports our assumption of timescale separation in the main text.
A.6 EXTENSION TO NON-DEGENERATE LANDSCAPE
This section explores the situation where the minimal valley assumption is slightly violated, in which case the bottom of the landscape is not completely degenerate but has a small slope. The loss function 1 can then be written as
$$L(\theta) = L^* + \frac{1}{2}\sum_{i=1}^{n}\theta_i^2\,\lambda_i(\theta_{n+1}, ..., \theta_N) + g(\theta_{n+1}, ..., \theta_N)\,, \tag{36}$$
where the function g describes how the bottom of the valley varies in the previously degenerate directions. Following the same proof as in Section 2.2, it is easy to obtain
$$\langle[[T_{t+1} - T_t]]_{\mathrm{m.b.}}\rangle \approx -\frac{\eta^2}{4S}\left(1-\frac{S}{M}\right)\Big\|\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\Big\|^2 - \eta\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\cdot\hat\nabla g(\hat\theta)\,. \tag{37}$$
At the end of training, the model reaches equilibrium, where
〈[[Tt+1 − Tt]]m.b.〉 = 0 .
Thus, from Equation 37 we have
$$\hat\nabla g(\hat\theta)\cdot\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta) = -\frac{\eta}{4S}\left(1-\frac{S}{M}\right)\Big\|\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta)\Big\|^2\,. \tag{38}$$
This reveals explicitly the effect of SGD noise as an entropic force during training. Gradient descent and its variants drive the model toward states with θ̄ = 0 and $\hat\nabla g(\hat\theta) = 0$ according to the energy function (36); this is the energetic force pushing the model toward the lowest energy (smallest loss). However, when training with noise, $\hat\nabla g(\hat\theta) = 0$ may not be satisfied if the locations with the smallest trace and those with $\hat\nabla g(\hat\theta) = 0$ do not coincide, because the right-hand side of Equation 38 is nonzero whenever $\sum_{i=1}^{n}\hat\nabla\lambda_i(\hat\theta) \ne 0$. In other words, the model is pushed away from $\hat\nabla g(\hat\theta) = 0$ by an entropic force competing with the energetic one.
We use a toy model to visualize this effect. Consider the loss function $L(\theta_1, \theta_2) = \frac{1}{2}(\theta_2^2+1)\theta_1^2 + \alpha\theta_2$. We train this function with GD and SGD with α = 10⁻⁴, learning rate η = 0.01, noise strength 1/S − 1/M = 1, and initialization (θ_1, θ_2) = (0.1, 0.1). The result is shown in Figure 7. The left column corresponds to training with gradient descent, where θ_1 drops to zero very quickly and θ_2 keeps decreasing toward negative infinity. When noise is introduced, θ_2 stops decreasing once the entropic and energetic forces reach equilibrium. From Eq. 38 it is easy to show that ⟨θ_2⟩ = −0.02, which is consistent with the right column of Figure 7. | 1. What is the focus of the paper regarding SGD's behavior after reaching a minimal valley?
2. What are the strengths and weaknesses of the paper's theoretical analysis and empirical examples?
3. Do you have any questions or comments regarding the paper's assumptions, experimental designs, and conclusions? | Review | Review
This paper studies the behavior of SGD after it has reached a steady state and settled in a minimal valley. The authors show that even if SGD has reached a minimal valley, the noisy updates provided by a minibatch of samples result in the trace of the Hessians reducing with further SGD iterations. The authors also show that by changing the type of noise that is added to GD, one can get different functions of the Hessian to reduce with SGD iterations. The theoretical conclusions are then verified empirically with synthetic examples and real deep learning examples
In my opinion this is an interesting paper and research direction that helps us understand how SGD biases solutions towards "flatter" minima as measured by the trace norm of the Hessian of the loss function. I have a few concerns though:
1. Is the steady state assumption valid for neural network training? Especially when training on exponential losses (like cross entropy) which drives the parameters towards large norm solutions? This seems to be an important yet unsupported assumption in the analysis.
2. Does the Hessian-noise covariance alignment exist for squared loss functions as well? Is this specific to log-likelihood models?
3. In the experiments to delineate how different types of noise can result in regularization of different quantities, is there a reason only a synthetic example is used? Can this be replicated for deep networks, or are the two quantities (trace vs determinant) closely related? It would be nice to see how this behavior extends to deep networks.
Additional questions/comments:
1. While overparameterization seems to be important to the analysis (the parameters move in the degenerate subspace but not in the non-degenerate one), is there any way one can amend this framework to analyze different levels of overparameterization? Is there any difference in the rates of decrease in the trace norms for highly overparameterized models vs lightly overparameterized ones?
2. This paper talks about how SGD has a bias towards flatter minima. Is there a reason to prefer flatter solutions over sharper ones to get better generalization? Can one prove this connection?
3. In the Densenet experiments in Figure 4a, it seems as though this relationship between flatness and generalization might not hold? There are points along the optimization trajectory where the validation loss seems to be better than at the last few points along the optimization path. However at the former set of points the trace norm of the hessian is larger than at the latter points. Is this from a stray experiment, or are the plots averaged over a number of different runs? Please add these details as well as other details such as learning rates, criteria to decide when steady state was reached, whether or not batch norm was used, etc.
4. The Fluctuation-Dissipation Relations need to be explained more clearly. For people unfamiliar with statistical physics (and Yaida 2019) the notation and formulation is not immediately clear and that makes the paper harder to read.
Overall I believe this paper explains an important phenomenon. I am willing to update my score if my concerns are addressed/if there is any misunderstanding. |
ICLR | Title
Practical Integration via Separable Bijective Networks
Abstract
Neural networks have enabled learning over examples that contain thousands of dimensions. However, most of these models are limited to training and evaluating on a finite collection of points and do not consider the hypervolume in which the data resides. Any analysis of the model’s local or global behavior is therefore limited to very expensive or imprecise estimators. We propose to formulate neural networks as a composition of a bijective (flow) network followed by a learnable, separable network. This construction allows for learning (or assessing) over full hypervolumes with precise estimators at tractable computational cost via integration over the input space. We develop the necessary machinery, propose several practical integrals to use during training, and demonstrate their utility.
1 INTRODUCTION
Most supervised learning problems operate by training a model on a finite collection, T , of N (typically paired) examples, (x, y). The model is updated by comparing its predicted output to the expected output and performing some flavor of stochastic gradient descent based on the comparison and various regularizers. The process is repeated by reexamining the elements of T in a random order, possibly with augmentation, until the model parameters converge or an iteration budget is exceeded. This relatively simple procedure has proven to be remarkably effective in a variety of domains and these models have begun to permeate every aspect of modern science and everyday life (He et al., 2016; Silver et al., 2017; Brown et al., 2020).
The deep learning revolution has also resulted in highly effective generative models such as VAEs (Kingma & Welling, 2014), GANs (Goodfellow et al., 2014), and tractable likelihood models (Dinh et al., 2017; Oliva et al., 2018; Grathwohl et al., 2019). These models are largely used to create novel samples of impressive quality. In addition to sampling, likelihood models provide an estimate of the probability density function of the data which can be used for additional, downstream processes.
We augment the training process by constructing neural networks that allow for tractable integration over the input domain. This differs from implicit layers which utilize integration over a parameterized variable (Chen et al., 2018; Grathwohl et al., 2019). Access to fast and differentiable integrals allows us to regularize a model’s average behavior using metrics that may not be available otherwise. Integration over the input space also allows us to supervise how the model behaves in continuous regions that are not directly observed in T and may even be out-of-distribution (OOD). Alternative methods attempt to supervise examples outside of T by performing random perturbations (Gutmann & Hyvärinen, 2010), along the line between known examples (Zhang et al., 2018), or via a generative process (Zenati et al., 2018; Akcay et al., 2018). However, these methods are only capable of observing a small quantity of the total space. By integrating over entire regions, it is possible to observe a large portion of the space based on statistical relevance.
Main Contributions We propose a general architecture that enables tractable integration over the input space, enabling supervision and custom regularization over continuous regions. We demonstrate how to construct this network and how it allows for a reduction in the computation cost required for dense numeric integration from exponential in the number of dimensions to linear. We derive several useful integrated formulations over continuous regions. Finally, we explore the impact of this architecture and regularizers on the standard accuracy and robustness to OOD examples on several standard classification datasets. The code utilized in this paper can be found at https://github.com/lupalab/sep_bij_nets.
Notation Throughout this work, we consider the M-dimensional input features, x = [x_1, ..., x_M] ∈ R^M; the latent features, z ∈ Z ⊆ R^M; and the K-wise classification probability, y. The input features x in the training set, T_x, are drawn from the in-distribution data, D ⊆ R^M. Subscripts represent a particular dimension, e.g., D_m corresponds to the m-th dimension of the space. Parenthetical superscripts represent the subspace corresponding to a particular class, e.g., D^(c) is the subset of D where the data belong to class c. The bijective network is given as h such that h : D → Z. Probability distributions over D and Z are given by p with the corresponding subscript. Classification networks are given as f and g. Variables with a “hat,” ŷ, are predictions of the true quantity, y.
2 MOTIVATION
Neural networks are highly effective function approximators between two (typically) continuous spaces: f : X → Y . However, networks are typically trained and evaluated using a finite collection of points without any explicit assessment of the complete hypervolume spanned by the data. This omission is understandable from an implementation perspective as the number of samples required to obtain a reasonable estimate over a volume scales exponentially with data dimensionality. However, human beings often have an understanding of how a process should behave on average. Ideally, we would like to embed this intuition into the model but currently cannot assess the average performance of a trained model outside of the held-out test set. Specifically, we would like to regularize the model by estimating the expected behavior of some metric, Ω, produced by the model over the training data
$$\mathbb{E}_{x\sim p(x)}\left[\Omega(\hat y(x))\right] = \int_{\mathcal X}\Omega(\hat y(x))\,p(x)\,dx. \tag{1}$$
There are many useful choices of Ω across a variety of applications. If it is known what the model output should be on average (ȳ), we can construct Ω to encourage that behavior, e.g., Ω(ŷ) = (ȳ − ŷ)². Minimizing consistency metrics (Xie et al., 2020) is a common method to improve learning in label-starved problems. These metrics encourage the model to produce similar outputs over neighborhoods around (labeled) examples from T_x, where neighborhoods are created by random or domain-specific augmentations. This process can be viewed as an approximation to an integral over the neighborhood,
$$\mathbb{E}_{\epsilon\sim p(\epsilon)}\left[\mathcal{L}(y, \hat y(x + \epsilon))\right] = \int \mathcal{L}(y, \hat y(x + \epsilon))\,p(\epsilon)\,d\epsilon \tag{2}$$
where L is a distance-like metric and ε is the perturbation defining the neighborhood. Equation 2 can be generalized to other neighborhoods. We can recast the standard classification problem as a discrete approximation to an integral. Typically, we minimize the cross-entropy between ŷ(x; θ) and y over the model parameters, θ, for all (x, y) ∈ T, which becomes an integral over class-conditioned distributions, p(x|c),
$$\min_\theta -\sum_{x,y\in\mathcal T}\sum_k y_k\log\left(\hat y_k(x;\theta)\right) \;\Rightarrow\; \min_\theta -\sum_k\int_{\mathcal D^{(k)}} y_k\log\left(\hat y_k(x;\theta)\right)p(x|k)\,dx. \tag{3}$$
Unfortunately, integration in high dimension is difficult. Naive gridded solutions require an exponential number of points with error decreasing as O(G−M ), for G, M -dimensional points. Monte Carlo (MC) methods theoretically have better performance with error that decreases asO(G−1/2). However, the rate of convergence for MC methods depends on the variance of samples (Veach, 1998), which may make for poor approximations in practice. Importance sampling (Bishop, 2006) can improve the performance of Monte Carlo methods to adapt to the regions with the greatest contribution.
We choose to model the data using a separable function. Separable functions have the key benefit of decomposing M -dimensional integrals into a combination of M one-dimensional integrals. Each
of these one-dimensional integrals can then be solved using any number of highly-accurate solvers (e.g., Runge-Kutta (Shampine, 2005), Dormand-Prince (Dormand & Prince, 1980), Tsit (Tsitouras, 2011), etc.) that have error rates better than O(G−4) but are unavailable in high dimensions. The use of separable functions is a component of the VEGAS algorithm (Peter Lepage, 1978) and is utilized in conjunction with adaptive Monte Carlo sampling to approximate high-dimensional integrals.
The use of a separable function over the input space may make estimation of integrals over the model more accessible; however, they impose a strong, inappropriate inductive-bias. The obvious approach of utilizing a standard neural network as a feature extractor and integrating over learned features means that we would no longer have a valid integrator over the input space. We propose to solve this problem by utilizing bijective transforms prior to the separable network. The bijective transform allows us to decouple the data into a latent space where the data can be modeled using a separable network and guarantees equality between integrals in the latent space and integrals in the input space.
3 BACKGROUND
We perform integration over the input space by splitting neural network models down into two key components: (1) a bijective feature extractor, (2) a separable task network, see Fig. 1. For simplicity, we only consider classification tasks in this work. This makes our total network analogous with the common architecture where a classifier, often a linear layer or an MLP, is constructed on a feature extractor, such as a CNN. Unlike the typical process, we must place constraints on both networks so that we can integrate over the input domain. This network breakdown is similar to hybrid networks (Chen et al., 2019; Nalisnick et al., 2019b) except for the separability requirement on the classifier.
3.1 BIJECTIVE NETWORKS
Bijective networks are the key component in flow-based likelihood models. A bijective network, h : D → Z , has a known forward and inverse operation so that data can exactly be reconstructed after the transformation. This allows for exact likelihood estimation via the change of variables formula:
$$z = h(x;\theta), \qquad x = h^{-1}(z;\theta), \qquad p_X(x) = p_Z(h(x;\theta))\left|\frac{\partial h}{\partial x}\right| \tag{4}$$
where pZ is a predefined distribution over the latent space, often a standard Gaussian.
These models are trained via Eq. 4 to maximize the likelihood of the examples in T . Once trained, flow-based likelihood models are commonly used as a generative process where samples are drawn from pZ and are then inverted through h to arrive at an example in D. Instead, we will take advantage of the fact that we can choose pZ and then utilize it in downstream tasks. Normalizing flow bijectors provide rich feature extractors that can represent a distribution of complicated inputs with simplydistributed, independent features, z. Given the independence and expressibility of these learned independent features, we build estimators using separable functions over z, which enables us to integrate over the data’s domain while retaining expressibility.
The requirement of bijectivity places a strong constraint on network design, eliminating many common choices due to the need to maintain dimension or invert element-wise activations. Even naive convolutional operations become unavailable since they are not generally invertible. Modern advances have demonstrated methods to work around these limitations through the use of clever partitioning and coupling tricks (Dinh et al., 2017) or the use of constraints (Chen et al., 2019). However, the field of learnable bijective functions is less advanced than its unconstrained counterparts, which results in reduced performance on auxiliary tasks. We utilize Glow (Kingma & Dhariwal, 2018) to process image data and continuous normalizing flows (Grathwohl et al., 2019) for tabular data.
3.2 SEPARABLE FUNCTIONS
Separable functions have long been used in mathematics and physics to solve simple partial differential equations such as the homogeneous wave and diffusion equations (Strauss, 2007). We consider two types of separable functions, additive and multiplicative. All proofs can be found in Appendix A.
3.2.1 ADDITIVELY SEPARABLE FUNCTIONS
Definition 3.1. A function, f : CM → CK , is additively separable if it is composed as a summation of element-wise functions operating independently on each dimension of the input:
$$f(v) = \sum_{m=1}^{M} f_m(v_m;\phi_m) \tag{5}$$
Theorem 3.1 (Additive Independent Integration). Given an additively separable function, f(v), an independent likelihood, p(v) = ∏M m=1 pm(vm), and domain, Dv = Dv1 × ...×DvM :
$$\mathbb{E}_{v\sim p(v)}\left[f(v)\right] = \sum_{m=1}^{M}\int_{\mathcal D_{v_m}} f_m(v_m)\,p_m(v_m)\,dv_m \tag{6}$$
3.2.2 MULTIPLICATIVELY SEPARABLE FUNCTIONS
Definition 3.2. A function, g : CM → CK , is multiplicatively separable if it is composed as a product of element-wise functions operating independently on each dimension of the input:
$$g(v) = \prod_{m=1}^{M} g_m(v_m;\psi_m) \tag{7}$$
Theorem 3.2 (Multiplicative Independent Integration). Given a multiplicatively separable function, g(v), an independent likelihood, p(v) = ∏_{m=1}^M p_m(v_m), and domain, D_v = D_{v_1} × ... × D_{v_M}:
$$\mathbb{E}_{v\sim p(v)}\left[g(v)\right] = \prod_{m=1}^{M}\int_{\mathcal D_{v_m}} g_m(v_m)\,p_m(v_m)\,dv_m \tag{8}$$
4 METHOD
Both forms of separable functions allow us to decompose a single M-dimensional integral into M 1-dimensional integrals. A dense estimation of the integral without taking advantage of a separable function, using G points per dimension, would require O(G^M) network evaluations. This is completely impractical for modern datasets where M is at least on the order of hundreds and G should be as large as possible. Exploiting separable functions allows us to reduce the complexity to O(GM). However, it is unlikely that we could construct a separable function directly on the input space and achieve reasonable performance. Doing so would essentially require that each input dimension contribute to the final output independently of the others. Instead, we can combine Eq. 4 and Eq. 6 (alternatively, Eq. 8) to perform practical integration over the input domain while still allowing for inter-dependent contributions from each input dimension. To do so, we first let the latent distribution, p_Z, be independently (but not necessarily identically) distributed: p_Z(z) = ∏_m p_m(z_m). This allows us to write the integral over the input space in terms of the latent space and then simplify via Eq. 6:
$$\int_{\mathcal D} f(h(x))\,p_Z(h(x))\left|\frac{\partial h}{\partial x}\right|dx = \mathbb{E}_{x\sim p_X(x)}\left[f(h(x))\right] = \mathbb{E}_{z\sim p_Z(z)}\left[f(z)\right] = \int_{\mathcal Z} p_Z(z)f(z)\,dz = \sum_{m=1}^{M}\int_{\mathcal Z_m} f_m(z_m)\,p_m(z_m)\,dz_m \tag{9}$$
Each 1-dimensional integral can be easily approximated using a variety of integration approaches. In addition to the requirements placed on the feature extractor and task network, this formulation also
requires that we define the integration domain in the latent space as a Cartesian product of domains over each latent dimension. This may seem like a strong requirement since we apparently lose some level of interpretability; however, there are several advantages to defining the input domain in this learned space. Most notably, we know the data distribution in this space exactly and can tune the domain based on the goal of a particular integral.
Figure 2 contains a cartoon example that demonstrates how these methods combine to allow for integration over the complex data space. Both plots show the value of f; Fig. 2a shows the non-separable function in the input space and Fig. 2b shows the same data in the (separable) latent space, after the bijective transformation. The colored points and dotted lines are in correspondence between the two spaces and illustrate how the space is warped by the bijector to create a separable, independent latent space. The light gray lines represent the contours of the underlying probability distribution. We define both the integration domain and perform the integration in the latent space. For simplicity, we illustrate the integration using five, equally-spaced points (in both dimensions). The naive procedure would require sampling f at each point in the grid (the blue points). The use of the separable functions allows us to integrate using only the red points and get the same estimate.
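The cost reduction in Eq. 9 can be made concrete with a small numeric sketch: each term is a one-dimensional quadrature, so the total cost is M grids of G points rather than a G^M grid. The function and density choices below are purely illustrative.

```python
import numpy as np

def separable_expectation(f_list, p_list, grids):
    """E[f(z)] = sum_m \int f_m(z_m) p_m(z_m) dz_m, one 1-D trapezoidal quadrature per dimension."""
    return sum(np.trapz(f(z) * p(z), z) for f, p, z in zip(f_list, p_list, grids))

# Example: M = 3 latent dimensions, standard-normal marginals, quadratic per-dimension terms.
gauss = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
grids = [np.linspace(-6, 6, 101)] * 3
f_list = [lambda z, a=a: a * z ** 2 for a in (1.0, 2.0, 3.0)]
print(separable_expectation(f_list, [gauss] * 3, grids))   # close to 1 + 2 + 3 = 6
```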
5 PRACTICAL APPLICATIONS
In this section we discuss specific choices made when constructing the latent distribution, separable networks, and various integrals. For classification tasks, it is not possible to parameterize the normalized network output of class probabilities, ŷ, as a separable network since each prediction must sum to one. However, it is possible to parameterize each unnormalized logits as a separable network, e.g., ŷ(x) = σ (f(h(x))), where σ is the softmax operator and the output of f are the unnormalized logits. While this limits what quantities can be integrated over exactly, it is still possible to integrate over approximations/bounds without resorting to brute-force methods.
Out-of-Distribution Detection We construct a global integration regularizer to provide resilience to out-of-distribution (OOD) examples. We enable OOD detection by introducing a “reject” option (Hendrickx et al., 2021) into the classification vector, increasing the number of classes by one, where the additional class indicates that the instance is OOD. An example is considered OOD if ŷK+1 exceeds a predefined threshold. We supervise the model over out-of-distribution examples by integrating the cross-entropy over the contrastive probability distribution, q(z) (see Sec. 5.3.1).
Semi-supervised Learning We build a local integral to enhance consistency within latent neighborhoods and utilize it in place of pre-chosen augmentation strategies to inform a label-starved dataset. Specifically, we introduce a loss where all examples near a real, labeled example are regularized to share the real example’s label as in Eq. 2 (see Sec. 5.3.2). We additionally use pseudo-labels Lee (2013) so that consistency is maintained about unlabelled points as the model grows more confident.
5.1 LATENT DISTRIBUTIONS
A critical component of this method is that the likelihoods over the latent space must be independent: pZ(z) = ∏ m pZ(zm). As this is extremely unlikely to occur in the input space, we learn a bijective (flow) transformation from the input space to a latent space where the likelihood can be decomposed into independent components of our choice. Since we would like to use the latent features to discriminate between classes using a separable function, we choose to utilize a (bimodal) Gaussian mixture model. This allows the model to put
different classes into one of two “buckets” per feature and enables classification through multiple features. This results in an exponential number of components (with respect to dimension) in the full latent space, e.g., 2M components. However, we can easily offset this by choosing some dimensions to use a unimodal latent distribution while the remaining dimensions are still bimodal:
$$p_{Z_m}(z_m) = \begin{cases} 0.5\,\mathcal N(-\mu, \sigma_1^2) + 0.5\,\mathcal N(\mu, \sigma_1^2), & \text{if } m \le L \\ \mathcal N(0, \sigma_2^2), & \text{else} \end{cases} \tag{10}$$
In addition to the typical latent data distribution, pZ(z), we explicitly include a contrastive distribution, q(z). This distribution is critical if we desire to regularize our model outside the data distribution, i.e., setting some background/default behavior. When using a bimodal data distribution, we additionally desire that latent vectors are more likely under this distribution once the vector is sufficiently far away from any single in-distribution mode. Therefore, we select this distribution so that it fully encapsulates all the components of the data distribution, e.g., qm(zm) > pZm(zm) for | |zm|−µ| > γ. When the data distribution is unimodal we choose pm = qm so that the feature is uninformative.
We utilize a standard Gaussian for the contrastive distribution and set µ to 1 and experiment with σ1 ≤ 0.5. This choice allows us to formulate OOD examples that exist between the in-distribution data as near zero, and outside the in-distribution data, near the tails. Figure 3 illustrates the data and contrastive distributions for a latent feature for convenience.
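A sketch of these per-dimension distributions (the bimodal data marginal of Eq. 10 and the standard-Gaussian contrastive marginal) using torch.distributions is given below; the default values of µ and σ follow the choices above, and indexing is 0-based.

```python
import torch
import torch.distributions as D

def data_marginal(m, L, mu=1.0, sigma1=0.5, sigma2=1.0):
    """p_{Z_m}: bimodal Gaussian mixture for the first L dimensions, unimodal otherwise (Eq. 10)."""
    if m < L:
        mix = D.Categorical(torch.tensor([0.5, 0.5]))
        comp = D.Normal(torch.tensor([-mu, mu]), torch.tensor([sigma1, sigma1]))
        return D.MixtureSameFamily(mix, comp)
    return D.Normal(torch.tensor(0.0), torch.tensor(sigma2))

contrastive_marginal = D.Normal(torch.tensor(0.0), torch.tensor(1.0))   # q_m, encloses both data modes
```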
5.2 SEPARABLE NETWORKS
We explore three different formulations of separable networks. The first formulation learns a distinct quadratic function, the second learns a symmetric hinge, and the third learns a multi-layer perceptron. These functions have distinct parameterizations for each latent feature and each in-distribution class. The out-of-distribution logit is held fixed at zero: lK+1(x) = 0.
$$f^{(\mathrm{quad})}_{k,m}(z_m;\alpha_{k,m}, u_{k,m}, \nu_{k,m}) = \alpha_{k,m} - \frac{(z_m - u_{k,m})^2}{2e^{2\nu_{k,m}}} \tag{11}$$
$$f^{(\mathrm{hinge})}_{k,m}(z_m;\alpha_{k,m}, u_{k,m}, \nu_{k,m}) = \alpha_{k,m} - |z_m - u_{k,m}|\,e^{\nu_{k,m}} \tag{12}$$
where k is the logit class index and m is the feature dimension, so that the total unnormalized logit is $l_k(z) = \sum_m f_{k,m}(z_m)$. The separable MLP is constructed by concatenating a learned vector per feature and class to each 1-dimensional feature and constructing a standard MLP that operates on that augmented state. This formulation provides the greatest flexibility as it leverages all the benefits of the universal approximation theorem (Hornik, 1991). Unfortunately, we find the MLP version difficult to train in practice, often resulting in degenerate solutions. Fixing the OOD logit at zero provides a fixed reference point for the other logits. We interpret this as each latent feature voting on how in-distribution the example is on a per-class basis.
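The hinge variant of the separable classifier can be sketched as below: each class and feature has its own (α, u, ν) and the reject logit is held at zero. This is an illustrative implementation consistent with Eq. 12, not the authors' exact code.

```python
import torch

class SeparableHinge(torch.nn.Module):
    """Unnormalized logits l_k(z) = sum_m [ alpha_{k,m} - |z_m - u_{k,m}| * exp(nu_{k,m}) ],
    with a (K+1)-th out-of-distribution logit fixed at zero."""
    def __init__(self, n_classes, n_features):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.zeros(n_classes, n_features))
        self.u = torch.nn.Parameter(torch.zeros(n_classes, n_features))
        self.nu = torch.nn.Parameter(torch.zeros(n_classes, n_features))

    def forward(self, z):                       # z: (batch, n_features)
        per_feature = self.alpha - (z.unsqueeze(1) - self.u).abs() * self.nu.exp()
        logits = per_feature.sum(dim=-1)        # (batch, n_classes)
        reject = torch.zeros_like(logits[:, :1])
        return torch.cat([logits, reject], dim=1)   # softmax over the K+1 entries gives y_hat
```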
5.3 INTEGRALS
The cross-entropy, CE(ŷ(x), y), between the normalized model output, ŷ(x), and the true labels, y, is commonly used to train classification models. In this section, we explore global and local integrals of the cross-entropy loss used to train most classifiers (see Eq. 3) when ŷ is given as the composition of the softmax function, a separable function and a bijective function, e.g., σ(f(h(x))). We can express this as minθ ∑ x,y∈T CE(ŷ(x), y) owing to the finite number of elements in T . Ideally, we would minimize the model cross-entropy over all possible examples in D. We accomplish this by defining the subspace, D(c) ⊂ D, that has a given label and corresponding conditional distribution, p(x|c)
$$\mathbb{E}_{x\sim p(x|c)}\left[\mathrm{CE}(\hat y(x), c)\right] = -\mathbb{E}_{x\sim p(x|c)}\left[\log(\hat y_c(x))\right] = -\int_{\mathcal D^{(c)}}\log(\hat y_c(x))\,p(x|c)\,dx. \tag{13}$$
It is not possible to represent ŷ(x) as a separable network due to the softmax operation, but it is possible to parameterize the unnormalized logits as a separable network. Substituting this parameterization into Eq. 13 and transforming to the latent space yields a bound on the expected cross-entropy:
$$\mathbb{E}_{x\sim p(x|c)}\left[\mathrm{CE}(\hat y(x), y)\right] \le -\sum_{m=1}^{M}\int_{\mathcal Z^{(c)}} f_{c,m}(z_m)\,p_m(z_m|c)\,dz_m + \log\left(\sum_{j=1}^{K+1}\prod_{n=1}^{M}\int_{\mathcal Z^{(c)}}\exp\left(f_{j,n}(z_n)\right)p_n(z_n|c)\,dz_n\right). \tag{14}$$
See Appendix E for a full derivation. We will utilize this formulation in conjunction with a contrastive prior to supervise OOD examples, Sec. 5.3.1, and with a local prior to enforce consistency, Sec. 5.3.2.
5.3.1 OUT-OF-DISTRIBUTION SUPERVISION
We can utilize the cross-entropy integral in different ways by choosing the label we would like to apply over a domain. For example, we can integrate the cross-entropy over the OOD latent space, U , with the true label fixed to the reject class (c=K+1), and the latent distribution over codes is the contrastive distribution (Sec. 5.1), p(z|c=K+1) = q(z); using Eq. 14 with fK+1,n=0:
$$\mathcal L_{\mathrm{GLBL}} = \log\left(\sum_{j=1}^{K}\prod_{n=1}^{M}\int_{\mathcal U}\exp\left(f_{j,n}(z_n)\right)q_n(z_n)\,dz_n\right). \tag{15}$$
In effect, Eq. 15 discourages OOD data from being labeled with any in-distribution label. Since the data and contrastive distributions overlap, the model could degenerate and always make OOD decisions; however, this would be in conflict with the standard cross-entropy loss applied to each example in the training set. So long as L_GLBL is not weighted too strongly, the model achieves equilibrium by making OOD predictions over regions where the contrastive distribution is more likely. In effect, we supervise OOD training without requiring any OOD data.
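With a separable classifier such as the one sketched above, each factor in Eq. 15 is a one-dimensional integral over the contrastive marginal and can be computed by quadrature on a fixed grid. The sketch below assumes access to the per-feature logit functions f_{j,n} and the contrastive log-density as callables (hypothetical helpers), and omits numerical-stability details.

```python
import torch

def global_ood_loss(per_feature_logit, contrastive_log_prob, z_grid, n_classes, n_features):
    """Sketch of Eq. 15: log sum_j prod_n \int exp(f_{j,n}(z)) q_n(z) dz via 1-D trapezoids."""
    log_terms = []
    for j in range(n_classes):
        log_prod = z_grid.new_zeros(())
        for n in range(n_features):
            integrand = torch.exp(per_feature_logit(j, n, z_grid) + contrastive_log_prob(n, z_grid))
            log_prod = log_prod + torch.log(torch.trapz(integrand, z_grid))
        log_terms.append(log_prod)
    return torch.logsumexp(torch.stack(log_terms), dim=0)
```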
5.3.2 LOCAL CONSISTENCY
We may also perform integration over neighborhoods of data points in Tx and utilize that point’s label over the entire region. This integral serves to improve consistency around observations. Adversarial attacks (Goodfellow et al., 2015; Madry et al., 2018) are often constructed as perturbations on real data points and adversarial training finds one such point and includes it in the training set. We can interpret local consistency integrations as a form of average adversarial training. We reformulate the integral to be centered around a particular datum, x0 and its class label, y
$$\mathcal L_{\mathrm{LCL}} = -\sum_k y_k\int_{\mathcal V}\log\left(\sigma\left(f_k(h(x_0) + v)\right)\right)p_V(v)\,dv \tag{16}$$
A complication resulting from local integration is how to select the local distribution, pV (v), and the neighborhood, V . There are many reasonable choices depending on the goal of the local integration. For simplicity, we choose each Vm ∈ [−ε, ε] and pm(vm) to be uniformly distributed.
5.4 LOSS COMPONENTS
The final training loss used to evaluate the model is composed of different combinations of the standard cross-entropy loss over class predictions, $\mathcal L_{\mathrm{CE}} = -\sum_k y_k\log(\hat y_k)$, the negative log-likelihood loss over in-distribution data points, $\mathcal L_{\mathrm{NLL}} = -\log(p_Z(h(x))) - \log\left|\frac{\partial h}{\partial x}\right|$, and the integration losses, $\mathcal L_{\mathrm{GLBL}}$ and $\mathcal L_{\mathrm{LCL}}$. We always include $\mathcal L_{\mathrm{CE}}$ and $\mathcal L_{\mathrm{NLL}}$. Based on results from previous hybrid networks (Chen et al., 2019; Nalisnick et al., 2019b), we weight the negative log-likelihood by 1/M, which is analogous to using bits per dimension, and introduce an additional weight over the cross-entropy, λ, which controls the relative importance of the generative and classification tasks. When included, each integration penalty shares the same weight as $\mathcal L_{\mathrm{CE}}$ with an additional binary weight, π:
$$\mathcal L_{\mathrm{total}} = \frac{1}{M}\mathcal L_{\mathrm{NLL}} + \lambda\left(\mathcal L_{\mathrm{CE}} + \pi_{\mathrm{GLBL}}\mathcal L_{\mathrm{GLBL}} + \pi_{\mathrm{LCL}}\mathcal L_{\mathrm{LCL}}\right). \tag{17}$$
6 RELATED WORK
Noise Contrastive Distributions Noise contrastive methods introduce a distribution that is distinct from the data distribution. The constrastive distribution can be learned and provides a mechanism to supervise the model to discriminate between true and contrastive samples (Gutmann & Hyvärinen, 2010). This forces the model to learn discriminative statistical properties of the data. We utilize this idea to inform the null-space over which we will integrate. We fix the data and contrastive distributions and learn the bijective map between input and latent spaces.
Hybrid Models Normalizing flows are a family of flexible, tractable likelihood estimators based on the application of learnable, bijective functions (Dinh et al., 2017; Grathwohl et al., 2019). These methods have shown strong performance, both as likelihood estimators and as generative processes. Recent work has coupled advances in bijective networks to construct invertible feature extractors that are capped by a more conventional classifier. The combined bijective/classifier structures are called hybrid models and show reasonable performance as likelihood and discriminative models (Chen et al., 2019; Nalisnick et al., 2019b). Unfortunately, the discriminative performance of these models is lower than that of traditional methods. We utilize hybrid models with constraints that enable tractable integration. Other hybrid model architectures are less restricted but prohibit practical integration.
Out of Distribution Detection OOD detection is a challenge for many modern ML algorithms. Recent work has demonstrated that OOD data can achieve higher likelihoods than the training data Hendrycks et al. (2019); Nalisnick et al. (2019a). Several recent methods have been developed to detect OOD examples including Maximum Softmax Probability (MSP) Hendrycks & Gimpel (2017), Outlier Exposure (OE) Hendrycks et al. (2019), Multiscale Score Matching (MSMA) Mahmood et al. (2021), ODIN Liang et al. (2018), and Certified Certain Uncertainty (CCU) Meinke & Hein (2020). Lee et al. (2018) utilize a GAN trained with a classifier to produce examples near but outside the training distribution. These methods are constructed specifically for OOD detection whereas our method is applicable to a variety of problems.
Semi-supervised Learning Semi-supervised learning is a burgeoning research area that learns from a large data corpus when only a small subset are labeled. The problem setting is very pertinent as it is often easy to acquire data examples but can be extremely time consuming to create the corresponding labels. Many modern methods can achieve very strong performance with very few labels Zhang et al. (2018); Chen et al. (2020). However, most of these methods rely on domainspecific augmentation strategies that are difficult to replicate in new data regimes, i.e., for non-image data. Fortunately, methods such as SSVAE Kingma et al. (2014) and VIME Yoon et al. (2020) are domain agnostic. These methods are built exclusively for semi-supervised learning but we only require an additional regularizing penalty to achieve comparable performance on tabular datasets.
7 EXPERIMENTS
All models were constructed using PyTorch (Paszke et al., 2019), trained using PyTorchLightning (Falcon, 2019), utilized bijectors and distributions from Pyro (Bingham et al., 2018), and were trained using Adam (Kingma & Ba, 2015). We assess the integrable model’s performance in a semi-supervised regime and against OOD examples. See Appendix B and C for additional experiments and training details.
7.1 SPIRALS
We construct a synthetic, 2D dataset composed of three intertwined spirals, see Fig. 4a. Suppose that, due to an unknown sampling bias, we only observe two of the three arms (missing the green arm) and construct our model as a two-class problem with a third, reject class.
We sample points in a dense 2-dimensional grid at twice the range of the data in both dimensions and evaluate the probability that each point belongs to the two known classes and the OOD class. Figures 4d and 4e illustrate the probability of belonging to the two known classes and Fig. 4f contains the probability that each point is OOD. Red and white values indicate high and low probabilities, respectively. The model successfully identifies the two in-distribution regions. Unsurprisingly, the model also identifies data outside of the training data range as being OOD and, impressively, identifies the unobserved spiral and the region between spirals as being OOD.
Finally, we sample a square box (orange) around a data point (blue) in the latent space and invert the box to assess the appropriateness of the neighborhood in the input space and plot the result, Figs. 4b and 4c. As expected, the bijector converts the latent Euclidean neighborhood into a semantically meaningful neighborhood. Had we included the local cross-entropy integral in this training process, all points within the orange boundary would have received the same label as the data point.
7.2 OUT OF DISTRIBUTION DETECTION
We test how our models respond to OOD examples when trained with global integrals over the noise-contrastive prior and consistency integrals and compare to methods designed for OOD detection. See Appendix B.3 for standard performance and Sec. 6 for a discussion of the baselines. Table 1 contains the AUPR for the various methods against similar but different datasets (see Appendix B.4 for the AUROC). We juxtapose SVHN vs CIFAR10 and MNIST vs FMNIST plus Extended MNIST (EMNIST) (Cohen et al., 2017). We include a separable, hybrid model without integral regularizations (hybrid) and the same model with regularizations (Int. Reg.) to illustrate the utility of learning over hypervolumes. When MNIST or FMNIST are the in-distribution set, the regularized, integrable network performs on par with the best baseline methods. SVHN and CIFAR10 show reasonable OOD performance but are not as strong as the baselines. This is not surprising since the integrals rely on a reasonable estimate of p(x), which both datasets have failed to achieve, Table 5.
7.3 SEMI-SUPERVISED LEARNING
We apply the local integral to the separable hybrid model and train on several tabular datasets where only 10% of the labels are available and domain-specific augmentation strategies are unavailable. These models do not utilize the OOD class or global penalty. We apply pseudo-labelling with thresholds of 0.9-0.95. Table 2 compares the performance of the integrated hybrid models to several standard (non-image) semi-supervised baselines on flat MNIST (MNIST, flattened to a vector), MiniBooNE Bazarko (2001), and HepMass Baldi et al. (2016). We utilize a CNF Grathwohl et al. (2019) as the bijector and the hinge as the classifier. We see that the integrated model achieves similar-to-better performance than the other methods on all datasets, showcasing the generality of integrated regularizers in different problem spaces.
8 CONCLUSIONS
In this work, we develop architectures that enable tractable integration over the data space. We demonstrate that the ability to supervise regions and not isolated points encourages the model to learn better representations and be less likely to degenerate. We consider several formulations that allow us to regularize the model’s behavior based on consistency and contrast without relying on augmentations of and sparse comparisons between a finite collection of data points. We experiment with the various integrals to obtain promising out-of-distribution detection. Through now tractable integrals, this work enables future methods and applications for learning over continuous regions.
ACKNOWLEDGMENTS
This research was partly funded by grants NSF IIS2133595 and NSF 2113345 and by NIH 1R01AA02687901A1.
A PROOFS
A.1 ADDITIVELY SEPARABLE FUNCTIONS
Theorem A.1 (Additive Independent Integration). Given an additively separable function, f(v), an independent likelihood, p(v) = ∏_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} × ... × D_{v_M}:

E_{v∼p(v)} [f(v)] = ∑_{m=1}^{M} ∫_{D_{v_m}} f_m(v_m) p_m(v_m) dv_m    (A1)

Proof.

∫ f_n(v_n) p_m(v_m) dv_m = { f_n(v_n),  if n ≠ m ;  ∫ f_n(v_n) p_n(v_n) dv_n,  else }    (A2)

E_{v∼p(v)} [f(v)] = ∫_D ( ∑_{n=1}^{M} f_n(v_n) ) ( ∏_{m=1}^{M} p_m(v_m) ) dv    (A3)

= ∑_n ∫_D f_n(v_n) ∏_{m=1}^{M} p_m(v_m) dv    (A4)

Applying Eq. A2 to each term of Eq. A4 reduces the M-dimensional integral over independent likelihoods to a sum of one-dimensional integrals, which is Eq. A1 and completes the proof.
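A small numerical check of Theorem A.1, sketched with NumPy/SciPy for an arbitrary choice of component functions and Gaussian marginals (all quantities below are illustrative): the Monte Carlo estimate of the left-hand side matches the sum of one-dimensional quadratures on the right-hand side.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)
# Hypothetical 3-D example: f(v) = sum_m f_m(v_m) with independent Gaussian marginals.
fs = [np.sin, np.tanh, lambda v: v ** 2]           # per-dimension components f_m
params = [(0.0, 1.0), (1.0, 0.5), (-2.0, 2.0)]     # (mean, std) of each p_m

# Left-hand side of Eq. A1: Monte Carlo estimate of E_{v~p(v)}[f(v)].
samples = np.stack([rng.normal(mu, sd, 200_000) for mu, sd in params], axis=1)
mc = np.mean(sum(f(samples[:, m]) for m, f in enumerate(fs)))

# Right-hand side of Eq. A1: a sum of M one-dimensional quadratures.
quad = sum(
    integrate.quad(lambda v, f=f, mu=mu, sd=sd: f(v) * stats.norm.pdf(v, mu, sd),
                   -np.inf, np.inf)[0]
    for f, (mu, sd) in zip(fs, params)
)
print(mc, quad)  # the two estimates agree up to Monte Carlo error
```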
A.2 MULTIPLICATIVELY SEPARABLE FUNCTIONS
Theorem A.2 (Multiplicative Independent Integration). Given a multiplicatively separable function, g(v), an independent likelihood, p(v) = ∏_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} × ... × D_{v_M}:

E_{v∼p(v)} [g(v)] = ∏_{m=1}^{M} ∫_{D_{v_m}} g_m(v_m) p_m(v_m) dv_m    (A5)

Proof. Since each component g_n is matched to the component p_n over the same dimension, the two products can be recombined to isolate like dimensions. Each dimension can then be integrated independently.

E_{v∼p(v)} [g(v)] = ∫ ( ∏_{n=1}^{M} g_n(v_n) ) ( ∏_{m=1}^{M} p_m(v_m) ) dv
= ∫ ··· ∫ ( ∏_{m=1}^{M} g_m(v_m) p_m(v_m) ) dv_1 ··· dv_M
= ( ∫ g_1(v_1) p_1(v_1) dv_1 ) ··· ( ∫ g_M(v_M) p_M(v_M) dv_M )
B ADDITIONAL EXPERIMENTS
All image datasets were trained using Glow as the bijector with the number of levels adjusted for the size of the image. MNIST and Fashion MNIST were trained with Adam for 50 epochs with an exponentially decaying learning rate of γ = 0.75. SVHN and CIFAR10 utilized a similar training procedure but with 75 epochs and a slower exponential decay of γ = 0.99. We find that utilizing an average instead of a sum over separable features and soft-thresholding at −1 improves training stability.
B.1 ADVERSARIAL ROBUSTNESS
We test the efficacy of the local cross-entropy integral by utilizing a synthetic dataset from (Tsipras et al., 2019). The data is constructed via
y ∼ {−1, +1},   x_1 = { +y, w.p. p ;  −y, w.p. 1 − p },   x_2, ..., x_{M+1} ∼ N(ηy, 1)    (B1)
and the adversarial examples are constructed analytically by, essentially, flipping the sign of η. We utilize this synthetic dataset to test adversarial robustness because it provides an upper bound on the adversarial accuracy relative to the standard accuracy, which allows us to quantify our performance with respect to an absolute. Additionally, because adversarial examples can be constructed exactly without relying on an optimization procedure, there can be no concerns that this defense relies on obfuscated gradients (Athalye et al., 2018). We refer the interested reader to the original paper for a thorough discussion.
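A sketch of a sampler for Eq. B1; the function name and defaults are illustrative, and the adversarial variant follows the description above by flipping the sign of η in the weakly-correlated features.

```python
import numpy as np

def sample_tsipras(n, M=10, p=0.95, eta=None, adversarial=False, seed=0):
    """Sketch of the synthetic data of Eq. B1 (names and defaults illustrative).

    x_1 agrees with the label y with probability p, while the remaining M
    features are weakly correlated Gaussians N(eta * y, 1). Setting
    adversarial=True flips the sign of eta, which is how the text above
    describes constructing the analytic adversarial examples.
    """
    rng = np.random.default_rng(seed)
    eta = 2.0 / np.sqrt(M) if eta is None else eta
    y = rng.choice([-1.0, 1.0], size=n)
    x1 = np.where(rng.random(n) < p, y, -y)
    sign = -1.0 if adversarial else 1.0
    x_rest = rng.normal(loc=sign * eta * y[:, None], scale=1.0, size=(n, M))
    return np.concatenate([x1[:, None], x_rest], axis=1), y
```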
We choose M = 10, p = 0.95, and η = 2/√M. For these choices, a strong classifier will obtain near perfect standard accuracy with zero adversarial accuracy while a robust classifier will obtain 95% standard and adversarial accuracy. We use a similar architecture to that found in Sec. C.1 except we utilize ten blocks instead of five. We train one model with only the standard cross-entropy and negative log-likelihood losses and a second model with the additional global and local cross-entropy integration losses. We find that the model with only the standard losses achieves reasonably good standard performance and fairly poor adversarial accuracy. The model with the additional integration losses achieves considerably better adversarial performance, nearing the upper bound on adversarial accuracy and the necessarily lower standard accuracy. Table 3 summarizes our results and the ideal performances.
B.2 TOY SEMI-SUPERVISED REGRESSION
To illustrate the utility of global regularizations, we construct a one-dimensional, semi-supervised problem with x ∼ N(0, 4) and y = tanh(x). We keep all values of x but remove y values corresponding to negative x during training. Unlike the standard semi-supervised problem, the missing labels are not random but are the result of limitations or bias in the data collection process. We train two models that are identical except that the integrated model includes a penalty over E[Ω(ŷ(x))] based on foreknowledge that the average value is zero. Table 4 shows how the models perform over the supervised and unsupervised regions. The inclusion of the integration regularizer allows the model to reason about regions that it has not directly observed and decreases the error in that region by two orders of magnitude.
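The exact form of Ω used for this toy problem is not spelled out above; one plausible instantiation, sketched in PyTorch, penalizes the squared deviation of the model's average output from the known value of zero, estimated by sampling x ∼ N(0, 4) over the full input range (including the unlabeled half):

```python
import torch

def mean_output_penalty(model, n_samples=1024, std=2.0):
    """Hypothetical global regularizer for the toy problem above.

    Draws x ~ N(0, 4) (std = 2) over the whole input range, including the
    unlabeled negative half, and penalizes the squared deviation of the
    model's average prediction from the known average value of zero.
    """
    x = torch.randn(n_samples, 1) * std
    return model(x).mean().pow(2)
```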
B.3 STANDARD PERFORMANCE
We test the impact of integrable models on OOD performance against several standard image classification datasets: MNIST (Deng, 2012), Fashion MNIST (FMNIST) (Xiao et al., 2017), SVHN (Netzer et al., 2011), and CIFAR10 (Krizhevsky et al., 2009). See Appendix B for architectures, distributions, and training parameters. Table 5 contains the validation standard accuracy and bits per dimension for all datasets with all integration regularizers. The upper portion of the table is averaged over 10 runs and contains a reject option in the separable model. The lower portion is a single run without a reject option or any integrated regularizers. We find that the model performs reasonably well for MNIST and FMNIST but the integrated losses cause a large degradation in performance for SVHN and CIFAR10. The removal of these losses produces similar accuracies but much-improved BPD, consistent with the hybrid network results reported by ResidualFlows (Chen et al., 2019) when Glow is used.
B.4 OUT OF DISTRIBUTION DETECTION AUROC COMPARISONS
The separable model and independent latent dimensions allow us to reason about how the model is making decisions in the latent space. Unfortunately, for most bijectors, it is not possible to carry this interpretability back to the input space. Figure 5 demonstrates the final state of the model trained on Fashion MNIST for several features with respect to the latent distribution, per in-distribution class and juxtaposed with the out of distribution data. Specifically, the top row contains the logit components, f_{k,m}, learned by the separable network, color-coded by class; the middle row contains the distribution of each class (colors matched to logits); and the bottom row contains the distribution of the in-distribution data (blue) and OOD data (red). The top row illustrates how the classification network makes its decisions per feature over the distribution presented in the middle row. We see that the logit components map well to the distribution of the data in each feature and provide some intuition for how certain the classifier is over a feature. This demonstrates how this architecture allows for reasonable interpretability from the latent space. Any value above zero (the dotted black line) is considered in-distribution and the most likely class is the line with the greatest value at that point. The features were chosen to demonstrate the diversity of the learned solution over features. Generally, the data maps reasonably well to the bimodal distribution, though we do occasionally see mode collapse as in Fig. 5e. Fortunately, in these cases the logits tend to be fairly uninformative and only introduce a small bias. Figures 5a through 5c show a common trend where the OOD data has heavy overlap with one of the two clusters but not the other. While we do see some diversity amongst the in-distribution classes that overlap with the OOD data, the “bags” (gray) and “sandals” (cyan) classes overlap most often. Finally, Fig. 5d demonstrates a latent feature where the OOD data falls in the region between the two data components.
B.6 MNIST LEAVE-ONE-OUT
Following (Ahmed & Courville, 2020), we test the robustness of our method against semantic OOD examples by training models on all but one class from MNIST (Deng, 2012) and assessing OOD performance on the held out class. We repeat the process for each class and report the AUROC and AUPR. In all cases, we train using ten different random seeds and report the average performance and standard deviation across seeds that do not degenerate against the in-distribution validation set. Table 8 and Table 7 contain the results of our method using both integrated losses compared to several relevant baselines. Results from Efficient GAN-Based Anomaly Detection (EGBAD) (Zenati et al., 2018) and GANomaly (Akcay et al., 2018) are taken by estimating the results from figures in both works. The Semantic Anomalies (Ahmed & Courville, 2020) results are obtained by executing the official version of the code using the rotation auxiliary task with two different batch sizes (128 and 256). The Semantic Anomalies results in both tables are the best across both batch sizes and detection methods (ODIN (Liang et al., 2018) and MSP (Hendrycks & Gimpel, 2017)) based on the AUPR.
We see that Semantic Anomalies generally achieves the best AUROC across all digits. The integration often achieves the best AUPR but struggles with certain held-out digits. In particular, the integration performs significantly worse on the “4” and “9” digits, a reasonable confusion given the similarities and variability of the two classes. These results indicate that the integration penalties are helpful for inducing the model to detect semantic anomalies.
C TRAINING AND ARCHITECTURES
C.1 SPIRALS
The bijective layer is composed of five blocks where each block contains an affine-coupling layer (Dinh et al., 2017) and an invertible fully connected layer. The data distribution is bimodal over both latent dimensions with means at ±1 and standard deviations of 0.4. The separable network is composed of quadratic functions. We train the model as discussed, using the standard cross-entropy
loss for each observed point, the negative log-likelihood over the input space, and the global cross-entropy integration with respect to the contrastive prior. The learning rate is set to 0.001 with standard weight decay of 1e-4 over 20 epochs using Adam (Kingma & Ba, 2015).
C.2 FASHION MNIST
The bijective layers are composed of the Glow (Kingma & Dhariwal, 2018) architecture with two levels composed of 32 steps and 512 channels. We utilize an open-source Glow implementation (https://github.com/chrischute/glow) wrapped in Pyro's (Bingham et al., 2018) bijective operators. The data distribution is bimodal over all latent dimensions with means at ±1 and standard deviations of 0.5. The noise contrastive distribution is a standard Gaussian. The separable network is composed of hinge functions (see Sec. 4.3). We utilize a batch size of 256, a learning rate of 1.5e-4 over the bijective layers, and a learning rate of 1.0e-3 over the separable layers using Adam (Kingma & Ba, 2015). Both learning rates are exponentially decayed at a rate of 0.75 per epoch. Standard weight decay is applied with a weight of 1e-4. The network is trained for 50 epochs. The adversarial attacks are performed against the model at the epoch with the lowest validation cross-entropy loss.
D APPROXIMATE EXPECTED CROSS-ENTROPY OVER A DOMAIN
We desire the expected cross-entropy, CE(ŷ(x), y_c), over a domain D_c where y_c is the desired probability vector over classes, ŷ(x) is the model's prediction of y ∈ R^K from x ∈ R^M and is composed of a bijective function, h, a separable function, f_k(x) = ∑_m f_{k,m}(x_m), and the soft-max function, σ, such that ŷ(x) = σ(f(h(x))). If we are attempting to regularize the model to produce a single class label over D_c, then y_c will correspond to a one-hot vector. In general, however, y_c may be a dense vector with elements y_k. The expected cross-entropy is expressed as
E_{D_c} [CE(ŷ(x), y_c)] = − ∑_{k=1}^{K} y_k E_{D_c} [log(ŷ_k(x))] = − ∑_{k=1}^{K} y_k ∫_{D_c} log(ŷ_k(x)) p_{D_c}(x|y) dx    (D1)
where pDc(x|y) is the probability density function over Dc conditioned on y. We can take advantage of the bijective transform to convert this expression into an expectation over the latent space, z, and latent domain, Zc, which allows us to write
∑_{k=1}^{K} y_k E_{D_c} [log(ŷ_k(x))] = ∑_{k=1}^{K} y_k E_{Z_c} [log(σ_k(f(z)))] = ∑_{k=1}^{K} y_k ∫_{Z_c} log(σ_k(f(z))) p_{Z_c}(z|y) dz    (D2)
where σ_k is the kth element after the soft-max normalization and p_{Z_c}(z|y) is the independent probability density function in the latent space, e.g., p_{Z_c}(z|y) = ∏_m p_m(z_m). We can then expand
the soft-max operator within the expectation to get
∑_{k=1}^{K} y_k E_{Z_c} [log(σ_k(f(z)))] = −E_{Z_c} [ log ∑_{j=1}^{K} exp(f_j(z)) ] + ∑_{k=1}^{K} y_k E_{Z_c} [f_k(z)]    (D3)
The combination of the separable function and independent densities allows for an exact simplification of the second term into
∑_{k=1}^{K} y_k E_{Z_c} [f_k(z)] = ∑_{k=1}^{K} y_k ∑_{m=1}^{M} ∫_{Z_m} f_{k,m}(z_m) p_m(z_m) dz_m    (D4)
≈ (1/G) ∑_{k=1}^{K} y_k ∑_{m=1}^{M} ∑_{g=1}^{G} f_{k,m}(z̃_{m,g})
where we have overloaded Z_m to correspond to the domain of the mth latent dimension and have approximated the integral via Monte Carlo integration with z̃_{m,g} ∼ p_m(z_m) and G draws. The first term on the right-hand side of Eq. D3 does not enjoy the same exact simplification. However, it is possible to bound the term via Jensen's Inequality by moving the logarithm into the sum over j or outside the expectation operator. We consider the latter case where
E_{Z_c} [ log ∑_{j=1}^{K} exp(f_j(z)) ] ≤ log E_{Z_c} [ ∑_{j=1}^{K} exp(f_j(z)) ].    (D5)
Then, we expand the separable function, exchange the order of the sum over dimensions and the exponent, take advantage of the multiplicative integral simplification (Eq. A5), approximate via Monte Carlo integration, and utilize the log-sum-exp operator for stability:
log E_{Z_c} [ ∑_{j=1}^{K} exp(f_j(z)) ] = log E_{Z_c} [ ∑_{j=1}^{K} exp( ∑_{m=1}^{M} f_{j,m}(z_m) ) ]    (D6)
= log ∑_{j=1}^{K} ∫_{Z_c} ( ∏_{m=1}^{M} exp(f_{j,m}(z_m)) ) ( ∏_{n=1}^{M} p_n(z_n) ) dz
= log ∑_{j=1}^{K} ∏_{m=1}^{M} ∫_{Z_m} exp(f_{j,m}(z_m)) p_m(z_m) dz_m
≈ log [ ∑_{j=1}^{K} ∏_{m=1}^{M} ∑_{g=1}^{G} exp(f_{j,m}(z̃_{m,g})) ] − M log(G)
= log ∑_{j=1}^{K} exp( ∑_{m=1}^{M} log ∑_{g=1}^{G} exp(f_{j,m}(z̃_{m,g})) ) − M log(G).
Finally, we can substitute Eq. D4 and Eq. D6 into Eq. D1 to get
E_{D_c} [CE(ŷ(x), y_c)] ≤ − ∑_{k=1}^{K} y_k ∑_{m=1}^{M} ∫_{Z_m} f_{k,m}(z_m) p_m(z_m) dz_m + log ∑_{j=1}^{K} ∏_{m=1}^{M} ∫_{Z_m} exp(f_{j,m}(z_m)) p_m(z_m) dz_m    (D7)
≈ − (1/G) ∑_{k=1}^{K} y_k ∑_{m=1}^{M} ∑_{g=1}^{G} f_{k,m}(z̃_{m,g}) + log ∑_{j=1}^{K} exp( ∑_{m=1}^{M} log ∑_{g=1}^{G} exp(f_{j,m}(z̃_{m,g})) ) − M log(G)
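Before specializing to the one-hot case below, a sketch of how the Monte Carlo form of Eq. D7 might be evaluated in PyTorch; the callable signature and shapes are illustrative.

```python
import math
import torch

def expected_ce_bound(f_components, z_samples, y):
    """Sketch of the Monte Carlo bound in Eq. D7 (signature illustrative).

    f_components : callable mapping (G, M) latent draws to per-class,
                   per-dimension logit components of shape (K, M, G),
                   i.e. f_{k,m}(z_tilde_{m,g})
    z_samples    : (G, M) draws with z_tilde_{m,g} ~ p_m(z_m)
    y            : (K,) target probability vector y_c
    """
    comps = f_components(z_samples)                      # (K, M, G)
    K, M, G = comps.shape
    # First term: -(1/G) sum_k y_k sum_m sum_g f_{k,m}(z_tilde_{m,g})
    first = -(y * comps.sum(dim=(1, 2)) / G).sum()
    # Second term: log sum_j exp( sum_m logsumexp_g f_{j,m} ) - M log G
    inner = torch.logsumexp(comps, dim=2).sum(dim=1)     # (K,)
    second = torch.logsumexp(inner, dim=0) - M * math.log(G)
    return first + second
```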
In the event that yc is a one-hot vector (at the cth element) this simplifies to
E_{D_c} [CE(ŷ(x), y_c)] ≤ − ∑_{m=1}^{M} ∫_{Z_m} f_{c,m}(z_m) p_m(z_m|c) dz_m + log ∑_{j=1}^{K} ∏_{m=1}^{M} ∫_{Z_m} exp(f_{j,m}(z_m)) p_m(z_m|c) dz_m    (D8)
where we reintroduced the condition over the class within the probability densities p_m(z_m|c) for emphasis. | 1. What is the focus of the paper regarding neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its practicality and novelty?
3. Are there any concerns regarding the combination of known methodologies in the proposed method?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are some of the compelling applications of the method demonstrated in the experimental section? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes using separable bijective networks - that is, a two-stage network where the first stage is bijective (data-flow either way) and the second stage is separable (multiplicative or additive) - to make it practical to include integrals (on the input) as performance goals.
Review
The paper is clearly written and relatively easy to follow. The method combines known methodologies in a relatively straightforward but novel manner to achieve the desired goal, which is well-motivated. The exposition is clear and the experimental section covers a good number of compelling applications for the method.
One potential issue with this paper is that the method described simply combines known approaches. Overall I don't consider this a particularly serious flaw but it is something to consider. |
ICLR | Title
Practical Integration via Separable Bijective Networks
Abstract
Neural networks have enabled learning over examples that contain thousands of dimensions. However, most of these models are limited to training and evaluating on a finite collection of points and do not consider the hypervolume in which the data resides. Any analysis of the model’s local or global behavior is therefore limited to very expensive or imprecise estimators. We propose to formulate neural networks as a composition of a bijective (flow) network followed by a learnable, separable network. This construction allows for learning (or assessing) over full hypervolumes with precise estimators at tractable computational cost via integration over the input space. We develop the necessary machinery, propose several practical integrals to use during training, and demonstrate their utility.
1 INTRODUCTION
Most supervised learning problems operate by training a model on a finite collection, T , of N (typically paired) examples, (x, y). The model is updated by comparing its predicted output to the expected output and performing some flavor of stochastic gradient descent based on the comparison and various regularizers. The process is repeated by reexamining the elements of T in a random order, possibly with augmentation, until the model parameters converge or an iteration budget is exceeded. This relatively simple procedure has proven to be remarkably effective in a variety of domains and these models have begun to permeate every aspect of modern science and everyday life (He et al., 2016; Silver et al., 2017; Brown et al., 2020).
The deep learning revolution has also resulted in highly effective generative models such as VAEs (Kingma & Welling, 2014), GANs (Goodfellow et al., 2014), and tractable likelihood models (Dinh et al., 2017; Oliva et al., 2018; Grathwohl et al., 2019). These models are largely used to create novel samples of impressive quality. In addition to sampling, likelihood models provide an estimate of the probability density function of the data which can be used for additional, downstream processes.
We augment the training process by constructing neural networks that allow for tractable integration over the input domain. This differs from implicit layers which utilize integration over a parameterized variable (Chen et al., 2018; Grathwohl et al., 2019). Access to fast and differentiable integrals allows us to regularize a model’s average behavior using metrics that may not be available otherwise. Integration over the input space also allows us to supervise how the model behaves in continuous regions that are not directly observed in T and may even be out-of-distribution (OOD). Alternative methods attempt to supervise examples outside of T by performing random perturbations (Gutmann & Hyvärinen, 2010), along the line between known examples (Zhang et al., 2018), or via a generative process (Zenati et al., 2018; Akcay et al., 2018). However, these methods are only capable of observing a small quantity of the total space. By integrating over entire regions, it is possible to observe a large portion of the space based on statistical relevance.
Main Contributions We propose a general architecture that enables tractable integration over the input space, enabling supervision and custom regularization over continuous regions. We demonstrate how to construct this network and how it allows for a reduction in the computation cost required for dense numeric integration from exponential in the number of dimensions to linear. We derive several useful integrated formulations over continuous regions. Finally, we explore the impact of this architecture and regularizers on the standard accuracy and robustness to OOD examples on several standard classification datasets. The code utilized in this paper can be found at https://github.com/lupalab/sep_bij_nets.
Notation Throughout this work, we consider the M-dimensional input features, x = [x_1, ..., x_M] ∈ R^M; the latent features, z ∈ Z ⊆ R^M; and the K-wise classification probability, y. The input features x in the training set, Tx, are drawn from the in-distribution data, D ⊆ R^M. Subscripts represent a particular dimension, e.g., D_m corresponds to the mth dimension of the space. Parenthetical superscripts represent the subspace corresponding to a particular class, e.g., D^(c) is the subset of D where the data belongs to class c. The bijective network is given as h such that h : D → Z. Probability distributions over D and Z are given by p with the corresponding subscript. Classification networks are given as f and g. Variables with a “hat,” ŷ, are predictions of the true quantity, y.
2 MOTIVATION
Neural networks are highly effective function approximators between two (typically) continuous spaces: f : X → Y . However, networks are typically trained and evaluated using a finite collection of points without any explicit assessment of the complete hypervolume spanned by the data. This omission is understandable from an implementation perspective as the number of samples required to obtain a reasonable estimate over a volume scales exponentially with data dimensionality. However, human beings often have an understanding of how a process should behave on average. Ideally, we would like to embed this intuition into the model but currently cannot assess the average performance of a trained model outside of the held-out test set. Specifically, we would like to regularize the model by estimating the expected behavior of some metric, Ω, produced by the model over the training data
E_{x∼p(x)} [Ω(ŷ(x))] = ∫_X Ω(ŷ(x)) p(x) dx.    (1)
There are many useful choices of Ω over a variety of applications. If it is known what the model output should be on average (ȳ), we can construct Ω to encourage that behavior, e.g., Ω(ŷ) = (ȳ − ŷ)². Minimizing consistency metrics (Xie et al., 2020) is a common method to improve learning in label-starved problems. These encourage the model to produce similar outputs over neighborhoods around (labeled) examples from Tx where neighborhoods are created by random or domain-specific augmentations. This process can be viewed as an approximation to an integral over the neighborhood,
E_{ε∼p(ε)} [L(y, ŷ(x + ε))] = ∫ L(y, ŷ(x + ε)) p(ε) dε    (2)
where L is a distance-like metric and ε is the neighborhood perturbation. Equation 2 can be generalized to other neighborhoods. We can recast the standard classification problem as a discrete approximation to an integral. Typically, we minimize the cross-entropy between ŷ(x; θ) and y over the model parameters, θ, for all (x, y) ∈ T, which becomes an integral over class-conditioned distributions, p(x|c),
min_θ − ∑_{x,y∈T} ∑_k y_k log(ŷ_k(x; θ))  ⇒  min_θ − ∑_k ∫_{D^(k)} y_k log(ŷ_k(x; θ)) p(x|k) dx.    (3)
Unfortunately, integration in high dimension is difficult. Naive gridded solutions require an exponential number of points, with error decreasing as O(G^{−M}) for G M-dimensional points. Monte Carlo (MC) methods theoretically have better performance with error that decreases as O(G^{−1/2}). However, the rate of convergence for MC methods depends on the variance of samples (Veach, 1998), which may make for poor approximations in practice. Importance sampling (Bishop, 2006) can improve the performance of Monte Carlo methods to adapt to the regions with the greatest contribution.
We choose to model the data using a separable function. Separable functions have the key benefit of decomposing M -dimensional integrals into a combination of M one-dimensional integrals. Each
of these one-dimensional integrals can then be solved using any number of highly-accurate solvers (e.g., Runge-Kutta (Shampine, 2005), Dormand-Prince (Dormand & Prince, 1980), Tsit (Tsitouras, 2011), etc.) that have error rates better than O(G−4) but are unavailable in high dimensions. The use of separable functions is a component of the VEGAS algorithm (Peter Lepage, 1978) and is utilized in conjunction with adaptive Monte Carlo sampling to approximate high-dimensional integrals.
The use of a separable function over the input space may make estimation of integrals over the model more accessible; however, it imposes a strong, inappropriate inductive bias. The obvious approach of utilizing a standard neural network as a feature extractor and integrating over learned features means that we would no longer have a valid integrator over the input space. We propose to solve this problem by utilizing bijective transforms prior to the separable network. The bijective transform allows us to decouple the data into a latent space where the data can be modeled using a separable network and guarantees equality between integrals in the latent space and integrals in the input space.
3 BACKGROUND
We perform integration over the input space by splitting neural network models down into two key components: (1) a bijective feature extractor, (2) a separable task network, see Fig. 1. For simplicity, we only consider classification tasks in this work. This makes our total network analogous with the common architecture where a classifier, often a linear layer or an MLP, is constructed on a feature extractor, such as a CNN. Unlike the typical process, we must place constraints on both networks so that we can integrate over the input domain. This network breakdown is similar to hybrid networks (Chen et al., 2019; Nalisnick et al., 2019b) except for the separability requirement on the classifier.
3.1 BIJECTIVE NETWORKS
Bijective networks are the key component in flow-based likelihood models. A bijective network, h : D → Z , has a known forward and inverse operation so that data can exactly be reconstructed after the transformation. This allows for exact likelihood estimation via the change of variables formula:
z = h(x; θ),   x = h^{−1}(z; θ),   p_X(x) = p_Z(h(x; θ)) |∂h/∂x|    (4)
where pZ is a predefined distribution over the latent space, often a standard Gaussian.
These models are trained via Eq. 4 to maximize the likelihood of the examples in T . Once trained, flow-based likelihood models are commonly used as a generative process where samples are drawn from pZ and are then inverted through h to arrive at an example in D. Instead, we will take advantage of the fact that we can choose pZ and then utilize it in downstream tasks. Normalizing flow bijectors provide rich feature extractors that can represent a distribution of complicated inputs with simplydistributed, independent features, z. Given the independence and expressibility of these learned independent features, we build estimators using separable functions over z, which enables us to integrate over the data’s domain while retaining expressibility.
The requirement for bijectivity places a strong constraint on network design, eliminating many common choices due to the need to maintain dimension or invert element-wise activations. Even naive convolutional operations become unavailable since they are not generally invertible. Modern advances have demonstrated methods to work around these limitations through the use of clever partitioning and coupling tricks Dinh et al. (2017) or the use of constraints Chen et al. (2019). However, the field of learnable bijective functions is less advanced than its unconstrained counterpart, which results in reduced performance on auxiliary tasks. We utilize Glow Kingma & Dhariwal (2018) to process image data and continuous normalizing flows Grathwohl et al. (2019) for tabular data.
3.2 SEPARABLE FUNCTIONS
Separable functions have long been used in mathematics and physics to solve simple partial differential equations such as the homogeneous wave and diffusion equations (Strauss, 2007). We consider two types of separable functions, additive and multiplicative. All proofs can be found in Appendix A.
3.2.1 ADDITIVELY SEPARABLE FUNCTIONS
Definition 3.1. A function, f : C^M → C^K, is additively separable if it is composed as a summation of element-wise functions operating independently on each dimension of the input:
f(v) = ∑_{m=1}^{M} f_m(v_m; φ_m)    (5)
Theorem 3.1 (Additive Independent Integration). Given an additively separable function, f(v), an independent likelihood, p(v) = ∏_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} × ... × D_{v_M}:
E_{v∼p(v)} [f(v)] = ∑_{m=1}^{M} ∫_{D_{v_m}} f_m(v_m) p_m(v_m) dv_m    (6)
3.2.2 MULTIPLICATIVELY SEPARABLE FUNCTIONS
Definition 3.2. A function, g : C^M → C^K, is multiplicatively separable if it is composed as a product of element-wise functions operating independently on each dimension of the input:
g(v) = ∏_{m=1}^{M} g_m(v_m; ψ_m)    (7)
Theorem 3.2 (Multiplicative Independent Integration). Given a multiplicatively separable function, g(v), an independent likelihood, p(v) = ∏_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} × ... × D_{v_M}:
E_{v∼p(v)} [g(v)] = ∏_{m=1}^{M} ∫_{D_{v_m}} g_m(v_m) p_m(v_m) dv_m    (8)
4 METHOD
Both forms of separable functions allow us to decompose a single M-dimensional integral into M 1-dimensional integrals. A dense estimation of the integral without taking advantage of a separable function using G points per dimension would require O(G^M) network evaluations. This is completely impractical for modern datasets where M is at least on the order of hundreds and G should be as large as possible. Exploiting separable functions allows us to reduce the complexity to O(GM). However, it is unlikely that we could construct a separable function directly on the input space and achieve reasonable performance. Doing so would essentially require that each input dimension contribute to the final output independently of the others. Instead, we can combine Eq. 4 and Eq. 6 (alternatively, Eq. 8) to perform practical integration over the input domain while still allowing for inter-dependent contributions from each input dimension. To do so, we first let the latent distribution, p_Z, be independently (but not necessarily identically) distributed: p_Z(z) = ∏_m p_m(z_m). This allows us to write the integral over the input space in terms of the latent space and then simplify via Eq. 6.
∫_D f(h(x)) p_Z(h(x)) |∂h/∂x| dx = E_{x∼p_X(x)} [f(h(x))] = E_{z∼p_Z(z)} [f(z)] = ∫_Z p_Z(z) f(z) dz = ∑_{m=1}^{M} ∫_{Z_m} f_m(z_m) p_m(z_m) dz_m    (9)
Each 1-dimensional integral can be easily approximated using a variety of integration approaches. In addition to the requirements placed on the feature extractor and task network, this formulation also
requires that we define the integration domain in the latent space as a Cartesian product of domains over each latent dimension. This may seem like a strong requirement since we apparently lose some level of interpretability; however, there are several advantages to defining the input domain in this learned space. Most notably, we know the data distribution in this space exactly and can tune the domain based on the goal of a particular integral.
Figure 2 contains a cartoon example that demonstrates how these methods combine to allow for integration over the complex data space. Both plots show the value of f ; Fig. 2a shows the nonseparable function in the input space and Fig. 2b shows the same data in the (separable) latent space, after the bijective transformation. The colored points and dotted lines are in correspondence between the two spaces and illustrate how the space is warped by the bijector to create a separable, independent latent space. The light gray lines represent the contours of the underlying probability distribution. We define both the integration domain and perform the integration in the latent space. For simplicity, we illustrate the integration using five, equally-spaced points (in both dimensions). The naive procedure would require sampling f at each point in the grid (the blue points). The use of the separable functions allow us to integrate using only the red points and get the same estimate.
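A short numerical illustration of the cost reduction described above and depicted in Fig. 2, using a hypothetical separable integrand and standard-normal marginals:

```python
import numpy as np

# Evaluation counts for a dense grid estimate of E_z[f(z)] with G points per
# dimension, with and without exploiting separability (numbers illustrative).
G, M = 64, 8
print("joint grid evaluations:", G ** M)   # exponential in M (blue points in Fig. 2)
print("separable evaluations :", G * M)    # linear in M (red points in Fig. 2)

# With a separable f(z) = sum_m f_m(z_m) and independent p_m, the estimate is a
# sum of M one-dimensional quadratures, e.g. with stand-in components:
grid = np.linspace(-4.0, 4.0, G)
dz = grid[1] - grid[0]
f_m = lambda z, m: np.tanh(z + m)                           # hypothetical f_m
p_m = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)  # standard-normal p_m
estimate = sum((f_m(grid, m) * p_m(grid)).sum() * dz for m in range(M))
print("separable estimate    :", estimate)
```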
5 PRACTICAL APPLICATIONS
In this section we discuss specific choices made when constructing the latent distribution, separable networks, and various integrals. For classification tasks, it is not possible to parameterize the normalized network output of class probabilities, ŷ, as a separable network since each prediction must sum to one. However, it is possible to parameterize each unnormalized logits as a separable network, e.g., ŷ(x) = σ (f(h(x))), where σ is the softmax operator and the output of f are the unnormalized logits. While this limits what quantities can be integrated over exactly, it is still possible to integrate over approximations/bounds without resorting to brute-force methods.
Out-of-Distribution Detection We construct a global integration regularizer to provide resilience to out-of-distribution (OOD) examples. We enable OOD detection by introducing a “reject” option (Hendrickx et al., 2021) into the classification vector, increasing the number of classes by one, where the additional class indicates that the instance is OOD. An example is considered OOD if ŷK+1 exceeds a predefined threshold. We supervise the model over out-of-distribution examples by integrating the cross-entropy over the contrastive probability distribution, q(z) (see Sec. 5.3.1).
Semi-supervised Learning We build a local integral to enhance consistency within latent neighborhoods and utilize it in place of pre-chosen augmentation strategies to inform a label-starved dataset. Specifically, we introduce a loss where all examples near a real, labeled example are regularized to share the real example’s label as in Eq. 2 (see Sec. 5.3.2). We additionally use pseudo-labels Lee (2013) so that consistency is maintained about unlabelled points as the model grows more confident.
5.1 LATENT DISTRIBUTIONS
A critical component of this method is that the likelihoods over the latent space must be independent: pZ(z) = ∏ m pZ(zm). As this is extremely unlikely to occur in the input space, we learn a bijective (flow) transformation from the input space to a latent space where the likelihood can be decomposed into independent components of our choice. Since we would like to use the latent features to discriminate between classes using a separable function, we choose to utilize a (bimodal) Gaussian mixture model. This allows the model to put
different classes into one of two “buckets” per feature and enables classification through multiple features. This results in an exponential number of components (with respect to dimension) in the full latent space, e.g., 2^M components. However, we can easily offset this by choosing some dimensions to use a unimodal latent distribution while the remaining dimensions are still bimodal:
p_{Z_m}(z_m) = { 0.5 N(−µ, σ_1²) + 0.5 N(µ, σ_1²),  if m ≤ L ;  N(0, σ_2²),  else }    (10)
In addition to the typical latent data distribution, p_Z(z), we explicitly include a contrastive distribution, q(z). This distribution is critical if we desire to regularize our model outside the data distribution, i.e., setting some background/default behavior. When using a bimodal data distribution, we additionally desire that latent vectors are more likely under this distribution once the vector is sufficiently far away from any single in-distribution mode. Therefore, we select this distribution so that it fully encapsulates all the components of the data distribution, e.g., q_m(z_m) > p_{Z_m}(z_m) for ||z_m| − µ| > γ. When the data distribution is unimodal we choose p_m = q_m so that the feature is uninformative.
We utilize a standard Gaussian for the contrastive distribution and set µ to 1 and experiment with σ1 ≤ 0.5. This choice allows us to formulate OOD examples that exist between the in-distribution data as near zero, and outside the in-distribution data, near the tails. Figure 3 illustrates the data and contrastive distributions for a latent feature for convenience.
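A sketch of the per-dimension latent data distribution of Eq. 10 and the standard-Gaussian contrastive distribution in PyTorch; the helper name and the default σ_2 are illustrative.

```python
import torch
from torch import distributions as D

def make_latent_dists(M, L, mu=1.0, sigma1=0.5, sigma2=1.0):
    """Sketch of the per-dimension latent distributions (Eq. 10) and the
    standard-Gaussian contrastive distribution; sigma2 and the helper name
    are illustrative. Returns per-dimension log-prob functions for (N, M) z."""
    mix = D.Categorical(torch.tensor([0.5, 0.5]))
    bimodal = D.MixtureSameFamily(
        mix, D.Normal(torch.tensor([-mu, mu]), torch.tensor([sigma1, sigma1]))
    )
    unimodal = D.Normal(0.0, sigma2)
    contrastive = D.Normal(0.0, 1.0)

    def log_p(z):                  # data distribution: dims m <= L are bimodal
        return torch.cat([bimodal.log_prob(z[:, :L]),
                          unimodal.log_prob(z[:, L:])], dim=1)

    def log_q(z):                  # contrastive distribution q
        return contrastive.log_prob(z)

    return log_p, log_q
```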
5.2 SEPARABLE NETWORKS
We explore three different formulations of separable networks. The first formulation learns a distinct quadratic function, the second learns a symmetric hinge, and the third learns a multi-layer perceptron. These functions have distinct parameterizations for each latent feature and each in-distribution class. The out-of-distribution logit is held fixed at zero: lK+1(x) = 0.
f^{(quad)}_{k,m}(z_m; α_{k,m}, u_{k,m}, ν_{k,m}) = α_{k,m} − (z_m − u_{k,m})² / (2 e^{2ν_{k,m}})    (11)
f^{(hinge)}_{k,m}(z_m; α_{k,m}, u_{k,m}, ν_{k,m}) = α_{k,m} − |z_m − u_{k,m}| / e^{ν_{k,m}}    (12)
where k is the logit class index and m is the feature dimension so that the total unnormalized logit, lk(z), is lk(z) = ∑ m fk,m(zm). The separable MLP is constructed by concatenating a learned vector per feature and class to each 1-dimensional feature and constructing a standard MLP that operates on that augmented state. This formulation provides the greatest flexibility as it leverages all the benefits of the universal approximator theorem (Hornik, 1991). Unfortunately, we find the MLP version difficult to train in practice, often resulting in degenerate solutions. Fixing the OOD logit at zero provides a fixed reference point for the other logits. We interpret this as each latent feature voting on how in-distribution the example is on a per-class basis.
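A minimal PyTorch sketch of the quadratic separable logit components of Eq. 11, with the reject logit fixed at zero as described above; parameter initialization is illustrative.

```python
import torch
from torch import nn

class SeparableQuadraticLogits(nn.Module):
    """Sketch of the quadratic separable logit components of Eq. 11.

    Produces l_k(z) = sum_m f_{k,m}(z_m) for K in-distribution classes over M
    latent features, with the (K+1)-th reject logit fixed at zero as described
    above. Parameter names follow alpha, u, nu in the text; initialization is
    illustrative.
    """
    def __init__(self, K, M):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(K, M))
        self.u = nn.Parameter(torch.randn(K, M))
        self.nu = nn.Parameter(torch.zeros(K, M))

    def forward(self, z):                       # z: (N, M)
        z = z.unsqueeze(1)                      # (N, 1, M)
        comps = self.alpha - (z - self.u) ** 2 / (2 * torch.exp(2 * self.nu))
        logits = comps.sum(dim=2)               # (N, K): sum over latent features
        reject = torch.zeros(z.shape[0], 1, device=z.device)
        return torch.cat([logits, reject], dim=1)   # (N, K+1)
```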
5.3 INTEGRALS
The cross-entropy, CE(ŷ(x), y), between the normalized model output, ŷ(x), and the true labels, y, is commonly used to train classification models. In this section, we explore global and local integrals of the cross-entropy loss used to train most classifiers (see Eq. 3) when ŷ is given as the composition of the softmax function, a separable function and a bijective function, e.g., σ(f(h(x))). We can express this as minθ ∑ x,y∈T CE(ŷ(x), y) owing to the finite number of elements in T . Ideally, we would minimize the model cross-entropy over all possible examples in D. We accomplish this by defining the subspace, D(c) ⊂ D, that has a given label and corresponding conditional distribution, p(x|c)
E_{x∼p(x|c)} [CE(ŷ(x), c)] = −E_{x∼p(x|c)} [log(ŷ_c(x))] = − ∫_{D^(c)} log(ŷ_c(x)) p(x|c) dx.    (13)
It is not possible to represent ŷ(x) as a separable network due to the softmax operation, but it is possible to parameterize the unnormalized logits as a separable network. Substituting this parameterization into Eq. 13 and transforming to the latent space yields a bound for the expected cross-entropy
E_{x∼p(x|c)} [CE(ŷ(x), y)] ≤ − ∑_{m=1}^{M} ∫_{Z^(c)} f_{c,m}(z_m) p_m(z_m|c) dz_m + log ∑_{j=1}^{K+1} ∏_{n=1}^{M} ∫_{Z^(c)} exp(f_{j,n}(z_n)) p_n(z_n|c) dz_n.    (14)
See Appendix E for a full derivation. We will utilize this formulation in conjunction with a contrastive prior to supervise OOD examples, Sec. 5.3.1, and with a local prior to enforce consistency, Sec. 5.3.2.
5.3.1 OUT-OF-DISTRIBUTION SUPERVISION
We can utilize the cross-entropy integral in different ways by choosing the label we would like to apply over a domain. For example, we can integrate the cross-entropy over the OOD latent space, U , with the true label fixed to the reject class (c=K+1), and the latent distribution over codes is the contrastive distribution (Sec. 5.1), p(z|c=K+1) = q(z); using Eq. 14 with fK+1,n=0:
L_GLBL = log ∑_{j=1}^{K} ∏_{n=1}^{M} ∫_U exp(f_{j,n}(z_n)) q_n(z_n) dz_n.    (15)
In effect, Eq. 15 discourages OOD data from being labeled with any in-distribution label. Since the data and contrastive distributions overlap, the model could degenerate and always make OOD decisions; however, this would be in conflict with the standard cross-entropy loss applied to each example in the training set. So long as L_GLBL is not weighted too strongly, the model achieves equilibrium by making OOD predictions over regions where the contrastive distribution is more likely. In effect, we supervise OOD training without requiring any OOD data.
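A sketch of how L_GLBL might be computed for the quadratic components of Eq. 11 by gridded one-dimensional quadrature over the contrastive prior; parameter names follow the text and the grid settings are illustrative.

```python
import torch
from torch import distributions as D

def global_ood_loss(alpha, u, nu, n_grid=256, width=5.0):
    """Sketch of L_GLBL (Eq. 15) for the quadratic components of Eq. 11.

    alpha, u, nu : (K, M) parameter tensors of the in-distribution logit
                   components (names follow the text; grid settings illustrative).
    Each 1-D integral of exp(f_{j,n}(z)) against the standard-normal contrastive
    prior q_n is approximated on a fixed grid, with logsumexp for stability.
    """
    z = torch.linspace(-width, width, n_grid)                         # (G,)
    dz = z[1] - z[0]
    log_q = D.Normal(0.0, 1.0).log_prob(z)                            # (G,)
    comps = alpha.unsqueeze(-1) - (z - u.unsqueeze(-1)) ** 2 / (
        2 * torch.exp(2 * nu.unsqueeze(-1)))                          # (K, M, G)
    # log int exp(f_{j,n}) q_n dz ~= logsumexp_g [f_{j,n}(z_g) + log q(z_g) + log dz]
    log_int = torch.logsumexp(comps + log_q + torch.log(dz), dim=-1)  # (K, M)
    return torch.logsumexp(log_int.sum(dim=1), dim=0)                 # log sum_j prod_n
```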
5.3.2 LOCAL CONSISTENCY
We may also perform integration over neighborhoods of data points in Tx and utilize that point’s label over the entire region. This integral serves to improve consistency around observations. Adversarial attacks (Goodfellow et al., 2015; Madry et al., 2018) are often constructed as perturbations on real data points and adversarial training finds one such point and includes it in the training set. We can interpret local consistency integrations as a form of average adversarial training. We reformulate the integral to be centered around a particular datum, x0 and its class label, y
L_LCL = − ∑_k y_k ∫_V log(σ_k(f(h(x_0) + v))) p_V(v) dv    (16)
A complication resulting from local integration is how to select the local distribution, pV (v), and the neighborhood, V . There are many reasonable choices depending on the goal of the local integration. For simplicity, we choose each Vm ∈ [−ε, ε] and pm(vm) to be uniformly distributed.
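A Monte Carlo sketch of L_LCL with the uniform neighborhood above; the function arguments are illustrative.

```python
import torch
import torch.nn.functional as F

def local_consistency_loss(logit_fn, h, x0, y0, eps=0.1, n_samples=32):
    """Monte Carlo sketch of L_LCL (Eq. 16) with a uniform neighborhood.

    logit_fn : maps latent codes (N, M) to class logits (N, K+1)
    h        : bijective encoder, x -> z
    x0, y0   : a labeled batch of inputs and integer labels
    Samples v ~ U[-eps, eps]^M around each latent code h(x0) and averages the
    cross-entropy of the perturbed predictions against the anchor labels.
    """
    z0 = h(x0)                                            # (N, M)
    v = (torch.rand(n_samples, *z0.shape) * 2 - 1) * eps  # (S, N, M)
    logits = logit_fn((z0 + v).reshape(-1, z0.shape[-1])) # (S*N, K+1)
    labels = y0.repeat(n_samples)                         # (S*N,)
    return F.cross_entropy(logits, labels)
```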
5.4 LOSS COMPONENTS
The final training loss used to evaluate the model is composed of different combinations of the standard cross-entropy loss over class predictions, L_CE = − ∑_k y_k log(ŷ_k), the negative log-likelihood loss over in-distribution data points, L_NLL = − log(p_Z(h(x))) − log |∂h/∂x|, and the integration losses, L_GLBL and L_LCL. We always include L_CE and L_NLL. Based on results from previous hybrid networks (Chen et al., 2019; Nalisnick et al., 2019b), we weight the negative log-likelihood by 1/M, which is analogous to using bits per dimension, and introduce an additional weight over the cross-entropy, λ, which controls the relative importance of the generative and classification tasks. When included, each integration penalty shares the same weight as L_CE with an additional binary weight, π:
L_total = (1/M) L_NLL + λ (L_CE + π_GLBL L_GLBL + π_LCL L_LCL).    (17)
6 RELATED WORK
Noise Contrastive Distributions Noise contrastive methods introduce a distribution that is distinct from the data distribution. The contrastive distribution can be learned and provides a mechanism to supervise the model to discriminate between true and contrastive samples (Gutmann & Hyvärinen, 2010). This forces the model to learn discriminative statistical properties of the data. We utilize this idea to inform the null-space over which we will integrate. We fix the data and contrastive distributions and learn the bijective map between input and latent spaces.
Hybrid Models Normalizing flows are a family of flexible, tractable likelihood estimators based on the application of learnable, bijective functions Dinh et al. (2017); Grathwohl et al. (2019). These methods have shown strong performance, both as likelihood estimators and as generative processes. Recent work has coupled advances in bijective networks to construct invertible feature extractors that are capped by a more conventional classifier. The combined bijective/classifier structure is called a hybrid model and shows reasonable performance as a likelihood and discriminative model Chen et al. (2019); Nalisnick et al. (2019b). Unfortunately, the discriminative performance of these models is lower than traditional methods. We utilize hybrid models with constraints that enable tractable integration. Other hybrid model architectures are less restricted but prohibit practical integration.
Out of Distribution Detection OOD detection is a challenge for many modern ML algorithms. Recent work has demonstrated that OOD data can achieve higher likelihoods than the training data Hendrycks et al. (2019); Nalisnick et al. (2019a). Several recent methods have been developed to detect OOD examples including Maximum Softmax Probability (MSP) Hendrycks & Gimpel (2017), Outlier Exposure (OE) Hendrycks et al. (2019), Multiscale Score Matching (MSMA) Mahmood et al. (2021), ODIN Liang et al. (2018), and Certified Certain Uncertainty (CCU) Meinke & Hein (2020). Lee et al. (2018) utilize a GAN trained with a classifier to produce examples near but outside the training distribution. These methods are constructed specifically for OOD detection whereas our method is applicable to a variety of problems.
Semi-supervised Learning Semi-supervised learning is a burgeoning research area that learns from a large data corpus when only a small subset is labeled. The problem setting is very pertinent as it is often easy to acquire data examples but can be extremely time consuming to create the corresponding labels. Many modern methods can achieve very strong performance with very few labels Zhang et al. (2018); Chen et al. (2020). However, most of these methods rely on domain-specific augmentation strategies that are difficult to replicate in new data regimes, i.e., for non-image data. Fortunately, methods such as SSVAE Kingma et al. (2014) and VIME Yoon et al. (2020) are domain agnostic. These methods are built exclusively for semi-supervised learning, whereas we only require an additional regularizing penalty to achieve comparable performance on tabular datasets.
7 EXPERIMENTS
All models were constructed using PyTorch (Paszke et al., 2019), trained using PyTorchLightning (Falcon, 2019), utilized bijectors and distributions from Pyro (Bingham et al., 2018), and were trained using Adam (Kingma & Ba, 2015). We assess the integrable model’s performance in a semi-supervised regime and against OOD examples. See Appendix B and C for additional experiments and training details.
7.1 SPIRALS
We construct a synthetic, 2D dataset composed of three intertwined spirals, see Fig. 4a. Suppose that, due to an unknown sampling bias, we only observe two of the three arms (missing the green arm) and construct our model as a two-class problem with a third, reject class.
We sample points in a dense 2-dimensional grid at twice the range of the data in both dimensions and evaluate the probability that each point belongs to the two known classes and the OOD class. Figures 4d and 4e illustrate the probability of belonging to the two known classes and Fig. 4f contains the probability that each point is OOD. Red and white values indicate high and low probabilities, respectively. The model successfully identifies the two in-distribution regions. Unsurprisingly, the model also identifies data outside of the training data range as being OOD and, impressively, identifies the unobserved spiral and the region between spirals as being OOD.
Finally, we sample a square box (orange) around a data point (blue) in the latent space and invert the box to assess the appropriateness of the neighborhood in the input space and plot the result, Figs. 4b and 4c. As expected, the bijector converts the latent Euclidean neighborhood into a semantically meaningful neighborhood. Had we included the local cross-entropy integral in this training process, all points within the orange boundary would have received the same label as the data point.
7.2 OUT OF DISTRIBUTION DETECTION
We test how our models respond to OOD examples when trained with global integrals over the noise-contrastive prior and consistency integrals and compare to methods designed for OOD detection. See Appendix B.3 for standard performance and Sec. 6 for a discussion of the baselines. Table 1 contains the AUPR for the various methods against similar but different datasets (see Appendix B.4 for the AUROC). We juxtapose SVHN vs CIFAR10 and MNIST vs FMNIST plus Extended MNIST (EMNIST) (Cohen et al., 2017). We include a separable, hybrid model without integral regularizations (hybrid) and the same model with regularizations (Int. Reg.) to illustrate the utility of learning over hypervolumes. When MNIST or FMNIST are the in-distribution set, the regularized, integrable network performs on par with the best baseline methods. SVHN and CIFAR10 show reasonable OOD performance but are not as strong as the baselines. This is not surprising since the integrals rely on a reasonable estimate of p(x), which both datasets have failed to achieve, Table 5.
7.3 SEMI-SUPERVISED LEARNING
We apply the local integral to the separable hybrid model and train on several tabular datasets where only 10% of the labels are available and domain-specific augmentation strategies are unavailable. These models do not utilize the OOD class or global penalty. We apply pseudo-labelling with thresholds of 0.9-0.95. Table 2 compares the performance of the integrated hybrid models to several standard (non-image) semi-supervised baselines on flat MNIST (MNIST, flattened to a vector), MiniBooNE Bazarko (2001), and HepMass Baldi et al. (2016). We utilize a CNF Grathwohl et al. (2019) as the bijector and the hinge as the classifier. We see that the integrated model achieves similar-to-better performance than the other methods on all datasets, showcasing the generality of integrated regularizers in different problem spaces.
8 CONCLUSIONS
In this work, we develop architectures that enable tractable integration over the data space. We demonstrate that the ability to supervise regions and not isolated points encourages the model to learn better representations and be less likely to degenerate. We consider several formulations that allow us to regularize the model’s behavior based on consistency and contrast without relying on augmentations of and sparse comparisons between a finite collection of data points. We experiment with the various integrals to obtain promising out-of-distribution detection. Through now tractable integrals, this work enables future methods and applications for learning over continuous regions.
ACKNOWLEDGMENTS
This research was partly funded by grants NSF IIS2133595 and NSF 2113345 and by NIH 1R01AA02687901A1.
A PROOFS
A.1 ADDITIVELY SEPARABLE FUNCTIONS
Theorem A.1 (Additive Independent Integration). Given an additively separable function, f(v), an independent likelihood, p(v) = ∏_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} × ... × D_{v_M}:

E_{v∼p(v)} [f(v)] = ∑_{m=1}^{M} ∫_{D_{v_m}} f_m(v_m) p_m(v_m) dv_m    (A1)

Proof.

∫ f_n(v_n) p_m(v_m) dv_m = { f_n(v_n),  if n ≠ m ;  ∫ f_n(v_n) p_n(v_n) dv_n,  else }    (A2)

E_{v∼p(v)} [f(v)] = ∫_D ( ∑_{n=1}^{M} f_n(v_n) ) ( ∏_{m=1}^{M} p_m(v_m) ) dv    (A3)

= ∑_n ∫_D f_n(v_n) ∏_{m=1}^{M} p_m(v_m) dv    (A4)

Applying Eq. A2 to each term of Eq. A4 reduces the M-dimensional integral over independent likelihoods to a sum of one-dimensional integrals, which is Eq. A1 and completes the proof.
A.2 MULTIPLICATIVELY SEPARABLE FUNCTIONS
Theorem A.2 (Multiplicative Independent Integration). Given a multiplicatively separable function, g(v), an independent likelihood, p(v) = ∏_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} × ... × D_{v_M}:

E_{v∼p(v)} [g(v)] = ∏_{m=1}^{M} ∫_{D_{v_m}} g_m(v_m) p_m(v_m) dv_m    (A5)

Proof. Since each component g_n is matched to the component p_n over the same dimension, the two products can be recombined to isolate like dimensions. Each dimension can then be integrated independently.

E_{v∼p(v)} [g(v)] = ∫ ( ∏_{n=1}^{M} g_n(v_n) ) ( ∏_{m=1}^{M} p_m(v_m) ) dv
= ∫ ··· ∫ ( ∏_{m=1}^{M} g_m(v_m) p_m(v_m) ) dv_1 ··· dv_M
= ( ∫ g_1(v_1) p_1(v_1) dv_1 ) ··· ( ∫ g_M(v_M) p_M(v_M) dv_M )
B ADDITIONAL EXPERIMENTS
All image datasets were trained using Glow as the bijector with the number of levels adjusted for the size of the image. MNIST and Fashion MNIST were trained with Adam for 50 epochs with an exponentially decaying learning rate of γ = 0.75. SVHN and CIFAR10 utilized a similar training procedure but with 75 epochs and a slower exponential decay of γ = 0.99. We find that utilizing an average instead of a sum over separable features and soft-thresholding at −1 improves training stability.
B.1 ADVERSARIAL ROBUSTNESS
We test the efficacy of the local cross-entropy integral by utilizing a synthetic dataset from (Tsipras et al., 2019). The data is constructed via
y ∼ {−1, +1},   x_1 = { +y, w.p. p ;  −y, w.p. 1 − p },   x_2, ..., x_{M+1} ∼ N(ηy, 1)    (B1)
and the adversarial examples are constructed analytically by, essentially, flipping the sign of η. We utilize this synthetic dataset to test adversarial robustness because it provides an upper bound on the adversarial accuracy relative to the standard accuracy, which allows us to quantify our performance with respect to an absolute. Additionally, because adversarial examples can be constructed exactly without relying on an optimization procedure, there can be no concerns that this defense relies on obfuscated gradients (Athalye et al., 2018). We refer the interested reader to the original paper for a thorough discussion.
We choose M = 10, p = 0.95, and η = 2/√M. For these choices, a strong classifier will obtain near perfect standard accuracy with zero adversarial accuracy while a robust classifier will obtain 95% standard and adversarial accuracy. We use a similar architecture to that found in Sec. C.1 except we utilize ten blocks instead of five. We train one model with only the standard cross-entropy and negative log-likelihood losses and a second model with the additional global and local cross-entropy integration losses. We find that the model with only the standard losses achieves reasonably good standard performance and fairly poor adversarial accuracy. The model with the additional integration losses achieves considerably better adversarial performance, nearing the upper bound on adversarial accuracy and the necessarily lower standard accuracy. Table 3 summarizes our results and the ideal performances.
B.2 TOY SEMI-SUPERVISED REGRESSION
To illustrate the utility of global regularizations, we construct a one-dimensional, semi-supervised problem with x ∼ N(0, 4) and y = tanh(x). We keep all values of x but remove y values corresponding to negative x during training. Unlike the standard semi-supervised problem, the missing labels are not random but are the result of limitations or bias in the data collection process. We train two models that are identical except that the integrated model includes a penalty over E[Ω(ŷ(x))] based on foreknowledge that the average value is zero. Table 4 shows how the models perform over the supervised and unsupervised regions. The inclusion of the integration regularizer allows the model to reason about regions that it has not directly observed and decreases the error in that region by two orders of magnitude.
B.3 STANDARD PERFORMANCE
We test the standard performance of the integrable models on several standard image classification datasets: MNIST (Deng, 2012), Fashion MNIST (FMNIST) (Xiao et al., 2017), SVHN (Netzer et al., 2011), and CIFAR10 (Krizhevsky et al., 2009). See Appendix C for architectures, distributions, and training parameters. Table 5 contains the validation standard accuracy and bits per dimension for all datasets with all integration regularizers. The upper portion of the table is averaged over 10 runs and includes a reject option in the separable model. The lower portion is a single run without a reject option or any integrated regularizers. We find that the model performs reasonably well for MNIST and
FMNIST but the integrated losses cause a large degradation in performance for SVHN and CIFAR10. The removal of these losses produces similar accuracies but much-improved BPD, consistent with the hybrid network results reported by ResidualFlows (Chen et al., 2019) when Glow is used.
B.4 OUT OF DISTRIBUTION DETECTION AUROC COMPARISONS
The separable model and independent latent dimensions allow us to reason about how the model is making decisions in the latent space. Unfortunately, for most bijectors, it is not possible to carry this interpretability back to the input space. Figure 5 demonstrates the final state of the model trained on Fashion MNIST for several features with respect to the latent distribution, per in-distribution class and juxtaposed with the out of distribution data. Specifically, the top row contains the logit components, $f_{k,m}$, learned by the separable network, color-coded by class; the middle row contains the distribution of each class (colors matched to logits); and the bottom row contains the distribution of the in-distribution data (blue) and OOD data (red). The top row illustrates how the classification network makes its decisions per feature over the distribution presented in the middle row. We see that the logit components map well to the distribution of the data in each feature and provide some intuition for how certain the classifier is over a feature. This demonstrates how this architecture allows for reasonable interpretability from the latent space. Any value above zero (the dotted black line) is considered in-distribution, and the most likely class is the line with the greatest value at that point. The features were chosen to demonstrate the diversity of the learned solution over features. Generally, the data maps reasonably well to the bimodal distribution, though we do occasionally see mode collapse as in Fig. 5e. Fortunately, in these cases the logits tend to be fairly uninformative and only introduce a small bias. Figures 5a through 5c show a common trend where the OOD data has heavy overlap with one of the two clusters but not the other. While we do see some diversity amongst the in-distribution classes that overlap with the OOD data, the “bags” (gray) and “sandals” (cyan) classes overlap most often. Finally, Fig. 5d demonstrates a latent feature where the OOD data falls in the region between the two data components.
B.6 MNIST LEAVE-ONE-OUT
Following (Ahmed & Courville, 2020), we test the robustness of our method against semantic OOD examples by training models on all but one class from MNIST (Deng, 2012) and assessing OOD performance on the held out class. We repeat the process for each class and report the AUROC and AUPR. In all cases, we train using ten different random seeds and report the average performance and standard deviation across seeds that do not degenerate against the in-distribution validation set. Table 8 and Table 7 contain the results of our method using both integrated losses compared to several relevant baselines. Results from Efficient GAN-Based Anomaly Detection (EGBAD) (Zenati et al., 2018) and GANomaly (Akcay et al., 2018) are taken by estimating the results from figures in both works. The Semantic Anomalies (Ahmed & Courville, 2020) results are obtained by executing the official version of the code using the rotation auxiliary task with two different batch sizes (128 and 256). The Semantic Anomalies results in both tables are the best across both batch sizes and detection methods (ODIN (Liang et al., 2018) and MSP (Hendrycks & Gimpel, 2017)) based on the AUPR.
We see that the Semantic Anomalies baseline generally achieves the best AUROC across all digits. The integration often achieves the best AUPR but struggles with certain held-out digits. In particular, the integration performs significantly worse on the “4” and “9” digits. This is a reasonable source of confusion given the similarity and variability of the two classes. These results indicate that the integration penalties are helpful for inducing the model to detect semantic anomalies.
C TRAINING AND ARCHITECTURES
The bijective layer is composed of five blocks where each block contains an affine-coupling layer (Dinh et al., 2017) and an invertible fully connected layer. The data distribution is bimodal over both latent dimensions with means at ±1 and standard deviations of 0.4. The separable network is composed of quadratic functions. We train the model as discussed, using the standard cross-entropy
loss for each observed point, the negative log-likelihood over the input space, and the global cross-entropy integration with respect to the contrastive prior. The learning rate is set to 0.001 with standard weight decay of 1e-4 over 20 epochs using Adam (Kingma & Ba, 2015).
C.2 FASHION MNIST
The bijective layers are composed of the Glow (Kingma & Dhariwal, 2018) architecture with two levels composed of 32 steps and 512 channels. We utilize an open-source Glow implementation1 wrapped in Pyro’s (Bingham et al., 2018) bijective operators. The data distribution is bimodal over all latent dimensions with means at ±1 and standard deviations of 0.5. The noise contrastive distribution is a standard Gaussian. The separable network is composed of hinge functions (see Sec. 5.2). We utilize a batch size of 256, a learning rate of 1.5e-4 over the bijective layers, and a learning rate of 1.0e-3 over the separable layers using Adam (Kingma & Ba, 2015). Both learning rates are exponentially decayed at a rate of 0.75 per epoch. Standard weight decay is applied with a weight of 1e-4. The network is trained for 50 epochs. The adversarial attacks are performed against the model at the epoch with the lowest validation cross-entropy loss.
D APPROXIMATE EXPECTED CROSS-ENTROPY OVER A DOMAIN
We desire the expected cross-entropy, CE(ŷ(x), yc), over a domain Dc where yc is the desired probability vector over classes, ŷ(x) is the model’s prediction of y ∈ RK from x ∈ RM and is composed of a bijective function, h, a separable function, fk(x) = ∑ m fk,m(xm), and the soft-max function, σ, such that ŷ(x) = σ(f(h(x))). If we are attempting to regularize the model to produce a single class label over Dc, then y will correspond to a one-hot vector. In general, however, yc may be a dense vector with elements yk. The expected cross-entropy is expressed as
$$\mathbb{E}_{D_c}\left[\mathrm{CE}(\hat{y}(x), y_c)\right] = -\sum_{k=1}^{K} y_k\, \mathbb{E}_{D_c}\left[\log(\hat{y}_k(x))\right] = -\sum_{k=1}^{K} y_k \int_{D_c} \log(\hat{y}_k(x))\, p_{D_c}(x|y)\, dx \qquad (D1)$$
where pDc(x|y) is the probability density function over Dc conditioned on y. We can take advantage of the bijective transform to convert this expression into an expectation over the latent space, z, and latent domain, Zc, which allows us to write
$$\sum_{k=1}^{K} y_k\, \mathbb{E}_{D_c}\left[\log(\hat{y}_k(x))\right] = \sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}\left[\log(\sigma_k(f(z)))\right] = \sum_{k=1}^{K} y_k \int_{Z_c} \log(\sigma_k(f(z)))\, p_{Z_c}(z|y)\, dz \qquad (D2)$$
where σk is the kth element after the soft-max normalization and pZc(z|y) is the independent probability density function in the latent space, e.g., pZc(z|y) = ∏ m pm(zm). We can then expand
1https://github.com/chrischute/glow
the soft-max operator within the expectation to get
$$\sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}\left[\log(\sigma_k(f(z)))\right] = -\mathbb{E}_{Z_c}\left[\log \sum_{j=1}^{K} \exp(f_j(z))\right] + \sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}\left[f_k(z)\right] \qquad (D3)$$
The combination of the separable function and independent densities allows for an exact simplification of the second term into
$$\sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}\left[f_k(z)\right] = \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \int_{Z_m} f_{k,m}(z_m)\, p_m(z_m)\, dz_m \qquad (D4)$$
$$\approx \frac{1}{G} \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \sum_{g=1}^{G} f_{k,m}(\tilde{z}_{m,g})$$
where we have overloaded $Z_m$ to correspond to the domain of the $m$th latent dimension and have approximated the integral via Monte Carlo integration with $\tilde{z}_{m,g} \sim p_m(z_m)$ and $G$ draws. The first term on the right-hand side of Eq. D3 does not enjoy the same exact simplification. However, it is possible to bound the term via Jensen’s Inequality by moving the logarithm into the sum over $j$ or outside the expectation operator. We consider the latter case where
$$\mathbb{E}_{Z_c}\left[\log \sum_{j=1}^{K} \exp(f_j(z))\right] \le \log \mathbb{E}_{Z_c}\left[\sum_{j=1}^{K} \exp(f_j(z))\right]. \qquad (D5)$$
Then, we expand the separable function, exchange the order of the sum over dimensions and the exponent, take advantage of the multiplicative integral simplification (Eq. A5), approximate via Monte Carlo integration, and utilize the $\log\sum\exp$ operator for stability:
$$\log \mathbb{E}_{Z_c}\left[\sum_{j=1}^{K} \exp(f_j(z))\right] = \log \mathbb{E}_{Z_c}\left[\sum_{j=1}^{K} \exp\left(\sum_{m=1}^{M} f_{j,m}(z_m)\right)\right] \qquad (D6)$$
$$= \log \sum_{j=1}^{K} \int_{Z_c} \prod_{m=1}^{M} \exp(f_{j,m}(z_m)) \prod_{n=1}^{M} p_n(z_n)\, dz$$
$$= \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m)\, dz_m$$
$$\approx \log \sum_{j=1}^{K} \prod_{m=1}^{M} \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g})) - M \log(G)$$
$$= \log \sum_{j=1}^{K} \exp\left(\sum_{m=1}^{M} \log \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g}))\right) - M \log(G).$$
Finally, we can substitute Eq. D4 and Eq. D6 into Eq. D1 to get
$$\mathbb{E}_{D_c}\left[\mathrm{CE}(\hat{y}(x), y_c)\right] \le -\sum_{k=1}^{K} y_k \sum_{m=1}^{M} \int_{Z_m} f_{k,m}(z_m)\, p_m(z_m)\, dz_m + \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m)\, dz_m \qquad (D7)$$
$$\approx -\frac{1}{G} \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \sum_{g=1}^{G} f_{k,m}(\tilde{z}_{m,g}) + \log \sum_{j=1}^{K} \exp\left(\sum_{m=1}^{M} \log \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g}))\right) - M \log(G)$$
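Before specializing to one-hot labels, we note that the Monte Carlo form of Eq. D7 is straightforward to implement. The sketch below is our own illustration with assumed interfaces: `f` maps a `(G, M)` tensor of per-dimension draws to the `(K, G, M)` values of the components $f_{k,m}$, and `z_samples` holds one column of draws from each $p_m$.

```python
import torch

def expected_ce_upper_bound(f, z_samples, y):
    """Monte Carlo estimate of the bound in Eq. D7."""
    G, M = z_samples.shape
    comps = f(z_samples)                               # (K, G, M): f_{k,m}(z_tilde_{m,g})
    # First term: -(1/G) * sum_k y_k sum_m sum_g f_{k,m}(z_tilde_{m,g})
    linear = -(y[:, None, None] * comps).sum() / G
    # Second term: log sum_j exp( sum_m log sum_g exp(f_{j,m}) ) - M log G
    inner = torch.logsumexp(comps, dim=1).sum(dim=-1)  # (K,): sum_m log sum_g exp(.)
    log_term = torch.logsumexp(inner, dim=0) - M * torch.log(torch.tensor(float(G)))
    return linear + log_term
```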
In the event that yc is a one-hot vector (at the cth element) this simplifies to
$$\mathbb{E}_{D_c}\left[\mathrm{CE}(\hat{y}(x), y_c)\right] \le -\sum_{m=1}^{M} \int_{Z_m} f_{c,m}(z_m)\, p_m(z_m|c)\, dz_m + \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m|c)\, dz_m \qquad (D8)$$
where we reintroduced the conditioning on the class within the probability densities $p_m(z_m|c)$ for emphasis. | 1. What is the focus and contribution of the paper regarding integrals across input space?
2. How does the proposed method apply separable functions to latent variables, and what are the benefits of doing so?
3. What are the strengths and weaknesses of the empirical results, particularly for OOD detection and adversarial attacks?
4. Are there any concerns or suggestions regarding the experimental design and presentation?
5. Do you have any questions about the notation and formula usage throughout the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a hybrid model architecture that makes it possible to integrate a separable loss function across a region of input space. Such integrals can be used as regularizers for robustness near the observed data points (local consistency), and out-of-distribution (OOD) detection in neighborhoods away from the observations. The paper uses these regularizers to train models that are less vulnerable to adversarial attacks and out of distribution mishaps.
Review
The method uses a bijective flow network to map dependent input features to a set of independent latent variables. It then applies a separable function on the latent variables to get a loss function (or an approximation). The combination results in an integral that decomposes into separate 1-d integrals across each dimension. This makes it possible to calculate the gradient of the loss function efficiently across the latent domain.
The empirical results are mixed. The use of regularizers based on the decomposable integrals is shown to be beneficial for OOD detection, but the results cannot always match the baseline (likely due to the constraints on the network structure). The results of experiments with adversarial data (in the appendix) look less interesting and might suggest that the method is not effective against targeted adversarial attacks. The description of the experiments is minimal in the main body of the paper and in parts hard to follow.
Note: I was a reviewer of an earlier version of this work. Compared to the previous version, the writing is improved in parts and the experiments are expanded and rearranged.
Minor notes:
It’s more accurate to present the sample complexity of the separable integral as
O
(
G
M
)
when comparing to
O
(
G
M
)
.
Theorem names do not match between the main body and the appendix.
The notation in section 5.3 does not match the discussion on separable functions in section 3.2.1. In particular, it is better not to use $x_m$ as the argument (maybe $z_m$ or $v_m$). It’s also better to make the parameters of the network explicit in the formulas (11): e.g. $f_{k,m}(z_m; u_{k,m}, \nu_{k,m}) = \ldots$
Add names to the first two columns of Table 2. |
ICLR | Title
Practical Integration via Separable Bijective Networks
Abstract
Neural networks have enabled learning over examples that contain thousands of dimensions. However, most of these models are limited to training and evaluating on a finite collection of points and do not consider the hypervolume in which the data resides. Any analysis of the model’s local or global behavior is therefore limited to very expensive or imprecise estimators. We propose to formulate neural networks as a composition of a bijective (flow) network followed by a learnable, separable network. This construction allows for learning (or assessing) over full hypervolumes with precise estimators at tractable computational cost via integration over the input space. We develop the necessary machinery, propose several practical integrals to use during training, and demonstrate their utility.
1 INTRODUCTION
Most supervised learning problems operate by training a model on a finite collection, T , of N (typically paired) examples, (x, y). The model is updated by comparing its predicted output to the expected output and performing some flavor of stochastic gradient descent based on the comparison and various regularizers. The process is repeated by reexamining the elements of T in a random order, possibly with augmentation, until the model parameters converge or an iteration budget is exceeded. This relatively simple procedure has proven to be remarkably effective in a variety of domains and these models have begun to permeate every aspect of modern science and everyday life (He et al., 2016; Silver et al., 2017; Brown et al., 2020).
The deep learning revolution has also resulted in highly effective generative models such as VAEs (Kingma & Welling, 2014), GANs (Goodfellow et al., 2014), and tractable likelihood models (Dinh et al., 2017; Oliva et al., 2018; Grathwohl et al., 2019). These models are largely used to create novel samples of impressive quality. In addition to sampling, likelihood models provide an estimate of the probability density function of the data which can be used for additional, downstream processes.
We augment the training process by constructing neural networks that allow for tractable integration over the input domain. This differs from implicit layers which utilize integration over a parameterized variable (Chen et al., 2018; Grathwohl et al., 2019). Access to fast and differentiable integrals allows us to regularize a model’s average behavior using metrics that may not be available otherwise. Integration over the input space also allows us to supervise how the model behaves in continuous regions that are not directly observed in T and may even be out-of-distribution (OOD). Alternative methods attempt to supervise examples outside of T by performing random perturbations (Gutmann & Hyvärinen, 2010), along the line between known examples (Zhang et al., 2018), or via a generative process (Zenati et al., 2018; Akcay et al., 2018). However, these methods are only capable of observing a small quantity of the total space. By integrating over entire regions, it is possible to observe a large portion of the space based on statistical relevance.
Main Contributions We propose a general architecture that enables tractable integration over the input space, enabling supervision and custom regularization over continuous regions. We demonstrate how to construct this network and how it allows for a reduction in the computation cost required for dense numeric integration from exponential in the number of dimensions to linear. We derive several useful integrated formulations over continuous regions. Finally, we explore the impact of this architecture and regularizers on the standard accuracy and robustness to OOD examples on several standard classification datasets. The code utilized in this paper can be found at https://github.com/lupalab/sep_bij_nets.
Notation Throughout this work, we consider the $M$-dimensional input features, $x = [x_1, \ldots, x_M] \in \mathbb{R}^M$; the latent features, $z \in \mathcal{Z} \subseteq \mathbb{R}^M$; and the $K$-wise classification probability, $y$. The input features $x$ in the training set, $T_x$, are drawn from the in-distribution data, $\mathcal{D} \subseteq \mathbb{R}^M$. Subscripts represent a particular dimension, e.g., $\mathcal{D}_m$ corresponds to the $m$th dimension of the space. Parenthetical superscripts represent the subspace corresponding to a particular class, e.g., $\mathcal{D}^{(c)}$ is the subset of $\mathcal{D}$ where the data belongs to class $c$. The bijective network is given as $h$ such that $h : \mathcal{D} \to \mathcal{Z}$. Probability distributions over $\mathcal{D}$ and $\mathcal{Z}$ are given by $p$ with the corresponding subscript. Classification networks are given as $f$ and $g$. Variables with a “hat,” $\hat{y}$, are predictions of the true quantity, $y$.
2 MOTIVATION
Neural networks are highly effective function approximators between two (typically) continuous spaces: f : X → Y . However, networks are typically trained and evaluated using a finite collection of points without any explicit assessment of the complete hypervolume spanned by the data. This omission is understandable from an implementation perspective as the number of samples required to obtain a reasonable estimate over a volume scales exponentially with data dimensionality. However, human beings often have an understanding of how a process should behave on average. Ideally, we would like to embed this intuition into the model but currently cannot assess the average performance of a trained model outside of the held-out test set. Specifically, we would like to regularize the model by estimating the expected behavior of some metric, Ω, produced by the model over the training data
$$\mathbb{E}_{x \sim p(x)}\left[\Omega(\hat{y}(x))\right] = \int_{\mathcal{X}} \Omega(\hat{y}(x))\, p(x)\, dx. \qquad (1)$$
There are many useful choices of Ω over a variety of applications. If it is known what the model output should be on average (ȳ), we can construct Ω to encourage that behavior, e.g., Ω(ŷ) = (ȳ− ŷ)2. Minimizing consistency metrics (Xie et al., 2020) are a common method to improve learning in label-starved problems. These encourage the model to produce similar outputs over neighborhoods around (labeled) examples from Tx where neighborhoods are created by random or domain-specific augmentations. This process can be viewed as an approximation to an integral over the neighborhood,
$$\mathbb{E}_{\epsilon \sim p(\epsilon)}\left[\mathcal{L}(y, \hat{y}(x + \epsilon))\right] = \int \mathcal{L}(y, \hat{y}(x + \epsilon))\, p(\epsilon)\, d\epsilon \qquad (2)$$
where $\mathcal{L}$ is a distance-like metric and $\epsilon$ is the neighborhood perturbation. Equation 2 can be generalized to other neighborhoods. We can recast the standard classification problem as a discrete approximation to an integral. Typically, we minimize the cross-entropy between $\hat{y}(x; \theta)$ and $y$ over the model parameters, $\theta$, for all $(x, y) \in T$, which becomes an integral over class-conditioned distributions, $p(x|c)$,
$$\min_\theta -\sum_{x,y \in T} \sum_k y_k \log(\hat{y}_k(x; \theta)) \;\Rightarrow\; \min_\theta -\sum_k \int_{\mathcal{D}^{(k)}} y_k \log(\hat{y}_k(x; \theta))\, p(x|k)\, dx. \qquad (3)$$
Unfortunately, integration in high dimension is difficult. Naive gridded solutions require an exponential number of points with error decreasing as $O(G^{-M})$, for $G$ $M$-dimensional points. Monte Carlo (MC) methods theoretically have better performance, with error that decreases as $O(G^{-1/2})$. However, the rate of convergence for MC methods depends on the variance of samples (Veach, 1998), which may make for poor approximations in practice. Importance sampling (Bishop, 2006) can improve the performance of Monte Carlo methods to adapt to the regions with the greatest contribution.
We choose to model the data using a separable function. Separable functions have the key benefit of decomposing M -dimensional integrals into a combination of M one-dimensional integrals. Each
of these one-dimensional integrals can then be solved using any number of highly-accurate solvers (e.g., Runge-Kutta (Shampine, 2005), Dormand-Prince (Dormand & Prince, 1980), Tsit (Tsitouras, 2011), etc.) that have error rates better than $O(G^{-4})$ but are unavailable in high dimensions.
The use of a separable function over the input space may make estimation of integrals over the model more accessible; however, it imposes a strong, inappropriate inductive bias. The obvious approach of utilizing a standard neural network as a feature extractor and integrating over learned features means that we would no longer have a valid integrator over the input space. We propose to solve this problem by utilizing bijective transforms prior to the separable network. The bijective transform allows us to decouple the data into a latent space where the data can be modeled using a separable network and guarantees equality between integrals in the latent space and integrals in the input space.
3 BACKGROUND
We perform integration over the input space by splitting neural network models down into two key components: (1) a bijective feature extractor, (2) a separable task network, see Fig. 1. For simplicity, we only consider classification tasks in this work. This makes our total network analogous with the common architecture where a classifier, often a linear layer or an MLP, is constructed on a feature extractor, such as a CNN. Unlike the typical process, we must place constraints on both networks so that we can integrate over the input domain. This network breakdown is similar to hybrid networks (Chen et al., 2019; Nalisnick et al., 2019b) except for the separability requirement on the classifier.
3.1 BIJECTIVE NETWORKS
Bijective networks are the key component in flow-based likelihood models. A bijective network, h : D → Z , has a known forward and inverse operation so that data can exactly be reconstructed after the transformation. This allows for exact likelihood estimation via the change of variables formula:
$$z = h(x; \theta), \quad x = h^{-1}(z; \theta), \quad p_X(x) = p_Z(h(x; \theta)) \left| \frac{\partial h}{\partial x} \right| \qquad (4)$$
where pZ is a predefined distribution over the latent space, often a standard Gaussian.
These models are trained via Eq. 4 to maximize the likelihood of the examples in T . Once trained, flow-based likelihood models are commonly used as a generative process where samples are drawn from pZ and are then inverted through h to arrive at an example in D. Instead, we will take advantage of the fact that we can choose pZ and then utilize it in downstream tasks. Normalizing flow bijectors provide rich feature extractors that can represent a distribution of complicated inputs with simplydistributed, independent features, z. Given the independence and expressibility of these learned independent features, we build estimators using separable functions over z, which enables us to integrate over the data’s domain while retaining expressibility.
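As a concrete, minimal illustration of Eq. 4 (not the architecture used in the paper, which relies on Glow or continuous normalizing flows), a toy elementwise affine bijector and the corresponding exact log-density might look like:

```python
import torch
import torch.nn as nn

class AffineBijector(nn.Module):
    """Toy elementwise bijector z = exp(s) * x + t with a closed-form Jacobian."""
    def __init__(self, dim):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(dim))
        self.t = nn.Parameter(torch.zeros(dim))

    def forward(self, x):                      # x -> z
        return torch.exp(self.s) * x + self.t

    def inverse(self, z):                      # z -> x
        return (z - self.t) * torch.exp(-self.s)

    def log_abs_det_jacobian(self, x):         # log |dh/dx|, constant for an affine map
        return self.s.sum() + torch.zeros(x.shape[0])

def log_px(h, x, base_dist):
    """Change of variables (Eq. 4): log p_X(x) = log p_Z(h(x)) + log |dh/dx|."""
    z = h(x)
    return base_dist.log_prob(z).sum(-1) + h.log_abs_det_jacobian(x)

h = AffineBijector(dim=2)
base = torch.distributions.Normal(0.0, 1.0)
print(log_px(h, torch.randn(16, 2), base).shape)   # torch.Size([16])
```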
The requirement for bijectivity places a strong constraint on network design, eliminating many common choices due to the need to maintain dimension or invert element-wise activations. Even naive convolutional operations become unavailable since they are not generally invertible. Modern advances have demonstrated methods to work around these limitations through the use of clever partitioning and coupling tricks (Dinh et al., 2017) or the use of constraints (Chen et al., 2019). However, the field of learnable bijective functions is less advanced than its unconstrained counterparts, which results in reduced performance on auxiliary tasks. We utilize Glow (Kingma & Dhariwal, 2018) to process image data and continuous normalizing flows (Grathwohl et al., 2019) for tabular data.
3.2 SEPARABLE FUNCTIONS
Separable functions have long been used in mathematics and physics to solve simple partial differential equations such as the homogeneous wave and diffusion equations (Strauss, 2007). We consider two types of separable functions, additive and multiplicative. All proofs can be found in Appendix A.
3.2.1 ADDITIVELY SEPARABLE FUNCTIONS
Definition 3.1. A function, $f : \mathbb{C}^M \to \mathbb{C}^K$, is additively separable if it is composed as a summation of element-wise functions operating independently on each dimension of the input:
$$f(v) = \sum_{m=1}^{M} f_m(v_m; \phi_m) \qquad (5)$$
Theorem 3.1 (Additive Independent Integration). Given an additively separable function, $f(v)$, an independent likelihood, $p(v) = \prod_{m=1}^{M} p_m(v_m)$, and domain, $D_v = D_{v_1} \times \ldots \times D_{v_M}$:
$$\mathbb{E}_{v \sim p(v)}\left[f(v)\right] = \sum_{m=1}^{M} \int_{D_{v_m}} f_m(v_m)\, p_m(v_m)\, dv_m \qquad (6)$$
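Theorem 3.1 can be verified numerically in a few lines; the factor functions below are arbitrary choices made only for this illustration.

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(1)
f = [np.sin, lambda v: v**3 - v, np.cosh]               # per-dimension components f_m
p = lambda v: np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)     # independent standard normals

# Separable form of Theorem 3.1: a sum of M one-dimensional integrals.
separable = sum(integrate.quad(lambda t, fm=fm: fm(t) * p(t), -8, 8)[0] for fm in f)

# Reference: plain Monte Carlo over the joint distribution.
v = rng.standard_normal((1_000_000, 3))
mc = np.mean(sum(fm(v[:, m]) for m, fm in enumerate(f)))

print(separable, mc)   # the two estimates agree up to Monte Carlo error
```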
3.2.2 MULTIPLICATIVELY SEPARABLE FUNCTIONS
Definition 3.2. A function, $g : \mathbb{C}^M \to \mathbb{C}^K$, is multiplicatively separable if it is composed as a product of element-wise functions operating independently on each dimension of the input:
$$g(v) = \prod_{m=1}^{M} g_m(v_m; \psi_m) \qquad (7)$$
Theorem 3.2 (Multiplicative Independent Integration). Given a multiplicatively separable function, $g(v)$, an independent likelihood, $p(v) = \prod_{m=1}^{M} p_m(v_m)$, and domain, $D_v = D_{v_1} \times \ldots \times D_{v_M}$:
$$\mathbb{E}_{v \sim p(v)}\left[g(v)\right] = \prod_{m=1}^{M} \int_{D_{v_m}} g_m(v_m)\, p_m(v_m)\, dv_m \qquad (8)$$
4 METHOD
Both forms of separable functions allow us to decompose a single $M$-dimensional integral into $M$ 1-dimensional integrals. A dense estimation of the integral without taking advantage of a separable function, using $G$ points per dimension, would require $O(G^M)$ network evaluations. This is completely impractical for modern datasets where $M$ is at least on the order of hundreds and $G$ should be as large as possible. Exploiting separable functions allows us to reduce the complexity to $O(GM)$. However, it is unlikely that we could construct a separable function directly on the input space and achieve reasonable performance. Doing so would essentially require that each input dimension contribute to the final output independently of the others. Instead, we can combine Eq. 4 and Eq. 6 (alternatively, Eq. 8) to perform practical integration over the input domain while still allowing for inter-dependent contributions from each input dimension. To do so, we first let the latent distribution, $p_Z$, be independently (but not necessarily identically) distributed: $p_Z(z) = \prod_m p_m(z_m)$. This allows us to write the integral over the input space in terms of the latent space and then simplify via Eq. 6.
$$\int_{\mathcal{D}} f(h(x))\, p_Z(h(x)) \left|\frac{\partial h}{\partial x}\right| dx = \mathbb{E}_{x \sim p_X(x)}\left[f(h(x))\right] = \mathbb{E}_{z \sim p_Z(z)}\left[f(z)\right] = \int_{\mathcal{Z}} p_Z(z) f(z)\, dz = \sum_{m=1}^{M} \int_{\mathcal{Z}_m} f_m(z_m)\, p_m(z_m)\, dz_m \qquad (9)$$
Each 1-dimensional integral can be easily approximated using a variety of integration approaches. In addition to the requirements placed on the feature extractor and task network, this formulation also
requires that we define the integration domain in the latent space as a Cartesian product of domains over each latent dimension. This may seem like a strong requirement since we apparently lose some level of interpretability; however, there are several advantages to defining the input domain in this learned space. Most notably, we know the data distribution in this space exactly and can tune the domain based on the goal of a particular integral.
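To make the computational saving concrete, the sketch below (our own illustration with small, assumed values of G and M) contrasts a dense grid estimate of the latent-space integral in Eq. 9, costing G^M evaluations, with the separable estimate, costing only G·M evaluations.

```python
import numpy as np

G, M = 64, 3
grid = np.linspace(-4, 4, G)
p1d = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)          # independent latent densities p_m
f_m = lambda m, z: np.cos((m + 1) * z)                    # per-dimension components f_m(z_m)

# Separable estimate: M one-dimensional quadratures, G * M = 192 evaluations in total.
sep = sum(np.trapz(f_m(m, grid) * p1d, grid) for m in range(M))

# Dense estimate: evaluate on the full G^M = 262,144-point grid (infeasible for realistic M).
mesh = np.stack(np.meshgrid(*[grid] * M, indexing="ij"), axis=-1)
dens = np.prod(np.exp(-mesh**2 / 2) / np.sqrt(2 * np.pi), axis=-1)
vals = sum(f_m(m, mesh[..., m]) for m in range(M))
dense = np.sum(vals * dens) * (grid[1] - grid[0]) ** M

print(sep, dense)   # nearly identical values at vastly different cost
```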
Figure 2 contains a cartoon example that demonstrates how these methods combine to allow for integration over the complex data space. Both plots show the value of f ; Fig. 2a shows the nonseparable function in the input space and Fig. 2b shows the same data in the (separable) latent space, after the bijective transformation. The colored points and dotted lines are in correspondence between the two spaces and illustrate how the space is warped by the bijector to create a separable, independent latent space. The light gray lines represent the contours of the underlying probability distribution. We define both the integration domain and perform the integration in the latent space. For simplicity, we illustrate the integration using five, equally-spaced points (in both dimensions). The naive procedure would require sampling f at each point in the grid (the blue points). The use of the separable functions allow us to integrate using only the red points and get the same estimate.
5 PRACTICAL APPLICATIONS
In this section we discuss specific choices made when constructing the latent distribution, separable networks, and various integrals. For classification tasks, it is not possible to parameterize the normalized network output of class probabilities, ŷ, as a separable network since each prediction must sum to one. However, it is possible to parameterize each unnormalized logit as a separable network, e.g., ŷ(x) = σ(f(h(x))), where σ is the softmax operator and the outputs of f are the unnormalized logits. While this limits what quantities can be integrated over exactly, it is still possible to integrate over approximations/bounds without resorting to brute-force methods.
Out-of-Distribution Detection We construct a global integration regularizer to provide resilience to out-of-distribution (OOD) examples. We enable OOD detection by introducing a “reject” option (Hendrickx et al., 2021) into the classification vector, increasing the number of classes by one, where the additional class indicates that the instance is OOD. An example is considered OOD if ŷK+1 exceeds a predefined threshold. We supervise the model over out-of-distribution examples by integrating the cross-entropy over the contrastive probability distribution, q(z) (see Sec. 5.3.1).
Semi-supervised Learning We build a local integral to enhance consistency within latent neighborhoods and utilize it in place of pre-chosen augmentation strategies to inform a label-starved dataset. Specifically, we introduce a loss where all examples near a real, labeled example are regularized to share the real example’s label as in Eq. 2 (see Sec. 5.3.2). We additionally use pseudo-labels Lee (2013) so that consistency is maintained about unlabelled points as the model grows more confident.
5.1 LATENT DISTRIBUTIONS
A critical component of this method is that the likelihoods over the latent space must be independent: pZ(z) = ∏ m pZ(zm). As this is extremely unlikely to occur in the input space, we learn a bijective (flow) transformation from the input space to a latent space where the likelihood can be decomposed into independent components of our choice. Since we would like to use the latent features to discriminate between classes using a separable function, we choose to utilize a (bimodal) Gaussian mixture model. This allows the model to put
different classes into one of two “buckets” per feature and enables classification through multiple features. This results in an exponential number of components (with respect to dimension) in the full latent space, e.g., $2^M$ components. However, we can easily offset this by choosing some dimensions to use a unimodal latent distribution while the remaining dimensions are still bimodal:
$$p_{Z_m}(z_m) = \begin{cases} 0.5\, \mathcal{N}(-\mu, \sigma_1^2) + 0.5\, \mathcal{N}(\mu, \sigma_1^2), & \text{if } m \le L \\ \mathcal{N}(0, \sigma_2^2), & \text{else} \end{cases} \qquad (10)$$
In addition to the typical latent data distribution, $p_Z(z)$, we explicitly include a contrastive distribution, $q(z)$. This distribution is critical if we desire to regularize our model outside the data distribution, i.e., setting some background/default behavior. When using a bimodal data distribution, we additionally desire that latent vectors are more likely under this distribution once the vector is sufficiently far away from any single in-distribution mode. Therefore, we select this distribution so that it fully encapsulates all the components of the data distribution, e.g., $q_m(z_m) > p_{Z_m}(z_m)$ for $\left||z_m| - \mu\right| > \gamma$. When the data distribution is unimodal we choose $p_m = q_m$ so that the feature is uninformative.
We utilize a standard Gaussian for the contrastive distribution and set µ to 1 and experiment with σ1 ≤ 0.5. This choice allows us to formulate OOD examples that exist between the in-distribution data as near zero, and outside the in-distribution data, near the tails. Figure 3 illustrates the data and contrastive distributions for a latent feature for convenience.
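Using torch.distributions, the per-dimension data and contrastive densities described above might be assembled as follows (a sketch with the stated choices of µ = 1, σ1 ≤ 0.5, and a standard-Gaussian contrastive distribution; the helper name is our own):

```python
import torch
import torch.distributions as D

def make_latent_dists(M, L, mu=1.0, sigma1=0.5, sigma2=1.0):
    """Per-dimension data distributions p_m (Eq. 10) and contrastive distributions q_m."""
    p, q = [], []
    for m in range(M):
        if m < L:   # bimodal, informative feature: one "bucket" per mode
            mix = D.Categorical(torch.tensor([0.5, 0.5]))
            comp = D.Normal(torch.tensor([-mu, mu]), torch.tensor([sigma1, sigma1]))
            p.append(D.MixtureSameFamily(mix, comp))
        else:       # unimodal, uninformative feature
            p.append(D.Normal(0.0, sigma2))
        q.append(D.Normal(0.0, 1.0))   # contrastive q_m encapsulates both data modes
    return p, q

p, q = make_latent_dists(M=8, L=4)
z = torch.linspace(-3, 3, 5)
print(p[0].log_prob(z), q[0].log_prob(z))
```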
5.2 SEPARABLE NETWORKS
We explore three different formulations of separable networks. The first formulation learns a distinct quadratic function, the second learns a symmetric hinge, and the third learns a multi-layer perceptron. These functions have distinct parameterizations for each latent feature and each in-distribution class. The out-of-distribution logit is held fixed at zero: lK+1(x) = 0.
$$f^{(\mathrm{quad})}_{k,m}(z_m; \alpha_{k,m}, u_{k,m}, \nu_{k,m}) = \alpha_{k,m} - \frac{(z_m - u_{k,m})^2}{2 e^{2\nu_{k,m}}} \qquad (11)$$
$$f^{(\mathrm{hinge})}_{k,m}(z_m; \alpha_{k,m}, u_{k,m}, \nu_{k,m}) = \alpha_{k,m} - \frac{|z_m - u_{k,m}|}{e^{\nu_{k,m}}} \qquad (12)$$
where k is the logit class index and m is the feature dimension so that the total unnormalized logit, lk(z), is lk(z) = ∑ m fk,m(zm). The separable MLP is constructed by concatenating a learned vector per feature and class to each 1-dimensional feature and constructing a standard MLP that operates on that augmented state. This formulation provides the greatest flexibility as it leverages all the benefits of the universal approximator theorem (Hornik, 1991). Unfortunately, we find the MLP version difficult to train in practice, often resulting in degenerate solutions. Fixing the OOD logit at zero provides a fixed reference point for the other logits. We interpret this as each latent feature voting on how in-distribution the example is on a per-class basis.
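The components in Eqs. 11–12 admit a compact vectorized implementation; the sketch below (our own illustration, with the reject logit held at zero as described) returns the unnormalized logits $l_k(z) = \sum_m f_{k,m}(z_m)$ plus the reject class.

```python
import torch
import torch.nn as nn

class SeparableLogits(nn.Module):
    """Per-class, per-dimension components f_{k,m}(z_m); the reject logit is fixed at zero."""
    def __init__(self, n_classes, n_dims, kind="quad"):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(n_classes, n_dims))
        self.u = nn.Parameter(torch.randn(n_classes, n_dims))
        self.nu = nn.Parameter(torch.zeros(n_classes, n_dims))
        self.kind = kind

    def components(self, z):                     # z: (B, M) -> (B, K, M)
        z = z[:, None, :]
        if self.kind == "quad":                  # Eq. 11
            return self.alpha - (z - self.u) ** 2 / (2 * torch.exp(2 * self.nu))
        return self.alpha - (z - self.u).abs() / torch.exp(self.nu)   # Eq. 12 (hinge)

    def forward(self, z):                        # unnormalized logits incl. the reject class
        logits = self.components(z).sum(dim=-1)  # (B, K): l_k(z) = sum_m f_{k,m}(z_m)
        return torch.cat([logits, torch.zeros(z.shape[0], 1, device=z.device)], dim=-1)

sep = SeparableLogits(n_classes=10, n_dims=784)
print(sep(torch.randn(4, 784)).shape)   # torch.Size([4, 11])
```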
5.3 INTEGRALS
The cross-entropy, CE(ŷ(x), y), between the normalized model output, ŷ(x), and the true labels, y, is commonly used to train classification models. In this section, we explore global and local integrals of the cross-entropy loss used to train most classifiers (see Eq. 3) when ŷ is given as the composition of the softmax function, a separable function and a bijective function, e.g., σ(f(h(x))). We can express this as minθ ∑ x,y∈T CE(ŷ(x), y) owing to the finite number of elements in T . Ideally, we would minimize the model cross-entropy over all possible examples in D. We accomplish this by defining the subspace, D(c) ⊂ D, that has a given label and corresponding conditional distribution, p(x|c)
$$\mathbb{E}_{x \sim p(x|c)}\left[\mathrm{CE}(\hat{y}(x), c)\right] = -\mathbb{E}_{x \sim p(x|c)}\left[\log(\hat{y}_c(x))\right] = -\int_{\mathcal{D}^{(c)}} \log(\hat{y}_c(x))\, p(x|c)\, dx. \qquad (13)$$
It is not possible to represent $\hat{y}(x)$ as a separable network due to the softmax operation, but it is possible to parameterize the unnormalized logits as a separable network. Substituting this parameterization into Eq. 13 and transforming to the latent space yields a bound for the expected cross-entropy
$$\mathbb{E}_{x \sim p(x|c)}\left[\mathrm{CE}(\hat{y}(x), y)\right] \le -\sum_{m=1}^{M} \int_{\mathcal{Z}^{(c)}} f_{c,m}(z_m)\, p_m(z_m|c)\, dz_m + \log\left(\sum_{j=1}^{K+1} \prod_{n=1}^{M} \int_{\mathcal{Z}^{(c)}} \exp(f_{j,n}(z_n))\, p_n(z_n|c)\, dz_n\right). \qquad (14)$$
See Appendix D for a full derivation. We will utilize this formulation in conjunction with a contrastive prior to supervise OOD examples, Sec. 5.3.1, and with a local prior to enforce consistency, Sec. 5.3.2.
5.3.1 OUT-OF-DISTRIBUTION SUPERVISION
We can utilize the cross-entropy integral in different ways by choosing the label we would like to apply over a domain. For example, we can integrate the cross-entropy over the OOD latent space, U , with the true label fixed to the reject class (c=K+1), and the latent distribution over codes is the contrastive distribution (Sec. 5.1), p(z|c=K+1) = q(z); using Eq. 14 with fK+1,n=0:
$$\mathcal{L}_{\mathrm{GLBL}} = \log\left(\sum_{j=1}^{K} \prod_{n=1}^{M} \int_{U} \exp(f_{j,n}(z_n))\, q_n(z_n)\, dz_n\right). \qquad (15)$$
In effect, Eq. 15 discourages OOD data from being assigned any in-distribution label. Since the data and contrastive distributions overlap, the model could degenerate and always make OOD decisions; however, this would be in conflict with the standard cross-entropy loss applied to each example in the training set. So long as $\mathcal{L}_{\mathrm{GLBL}}$ is not weighted too strongly, the model achieves equilibrium by making OOD predictions over regions where the contrastive distribution is more likely. In effect, we supervise OOD training without requiring any OOD data.
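A Monte Carlo estimate of Eq. 15 then only requires G draws per latent dimension from the contrastive q_m; the sketch below reuses the illustrative `SeparableLogits.components` interface from the previous section, and G = 64 is an assumed hyperparameter.

```python
import torch

def global_ood_loss(sep, q_dists, G=64):
    """Monte Carlo estimate of L_GLBL (Eq. 15) over the contrastive distribution."""
    # One column of G draws per latent dimension from the contrastive q_m.
    z = torch.stack([q.sample((G,)) for q in q_dists], dim=-1)       # (G, M)
    comps = sep.components(z)                                         # (G, K, M), no reject class
    # log of each per-dimension integral estimate, summed over dimensions per class.
    per_class = (torch.logsumexp(comps, dim=0)
                 - torch.log(torch.tensor(float(G)))).sum(dim=-1)     # (K,)
    return torch.logsumexp(per_class, dim=0)                          # log sum_j prod_m (.)
```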
5.3.2 LOCAL CONSISTENCY
We may also perform integration over neighborhoods of data points in Tx and utilize that point’s label over the entire region. This integral serves to improve consistency around observations. Adversarial attacks (Goodfellow et al., 2015; Madry et al., 2018) are often constructed as perturbations on real data points and adversarial training finds one such point and includes it in the training set. We can interpret local consistency integrations as a form of average adversarial training. We reformulate the integral to be centered around a particular datum, x0 and its class label, y
$$\mathcal{L}_{\mathrm{LCL}} = -\sum_k y_k \int_V \log(\sigma(f_k(h(x_0) + v)))\, p_V(v)\, dv \qquad (16)$$
A complication resulting from local integration is how to select the local distribution, pV (v), and the neighborhood, V . There are many reasonable choices depending on the goal of the local integration. For simplicity, we choose each Vm ∈ [−ε, ε] and pm(vm) to be uniformly distributed.
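With that choice, a Monte Carlo sketch of Eq. 16 is short; the sample count and ε below are assumptions, and `sep` again refers to the illustrative separable-logit module sketched above.

```python
import torch
import torch.nn.functional as F

def local_consistency_loss(sep, z0, y_idx, eps=0.25, n_mc=32):
    """Monte Carlo estimate of Eq. 16 over a uniform box around each latent anchor z0 = h(x0)."""
    B, M = z0.shape
    v = (torch.rand(n_mc, B, M) * 2 - 1) * eps                 # v ~ Uniform([-eps, eps]^M)
    logits = sep((z0[None] + v).reshape(-1, M))
    logp = F.log_softmax(logits, dim=-1).reshape(n_mc, B, -1)
    # Average negative log-probability of the anchor's label over its neighborhood.
    return -logp[:, torch.arange(B), y_idx].mean()
```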
5.4 LOSS COMPONENTS
The final training loss used to evaluate the model is composed of different combinations of the standard cross-entropy loss over class predictions, $\mathcal{L}_{\mathrm{CE}} = -\sum_k y_k \log(\hat{y}_k)$, the negative log-likelihood loss over in-distribution data points, $\mathcal{L}_{\mathrm{NLL}} = -\log(p_Z(h(x))) - \log\left|\frac{\partial h}{\partial x}\right|$, and the integration losses, $\mathcal{L}_{\mathrm{GLBL}}$ and $\mathcal{L}_{\mathrm{LCL}}$. We always include $\mathcal{L}_{\mathrm{CE}}$ and $\mathcal{L}_{\mathrm{NLL}}$. Based on results from previous hybrid networks (Chen et al., 2019; Nalisnick et al., 2019b), we weight the negative log-likelihood by $1/M$, which is analogous to using bits per dimension, and introduce an additional weight over the cross-entropy, $\lambda$, which controls the relative importance of the generative and classification tasks. When included, each integration penalty shares the same weight as $\mathcal{L}_{\mathrm{CE}}$ with an additional binary weight, $\pi$:
$$\mathcal{L}_{\mathrm{total}} = \frac{1}{M} \mathcal{L}_{\mathrm{NLL}} + \lambda\left(\mathcal{L}_{\mathrm{CE}} + \pi_{\mathrm{GLBL}} \mathcal{L}_{\mathrm{GLBL}} + \pi_{\mathrm{LCL}} \mathcal{L}_{\mathrm{LCL}}\right). \qquad (17)$$
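Assembled from the illustrative components sketched earlier (the affine bijector, separable logits, latent distributions, and the two integral losses), one evaluation of Eq. 17 might look like the following; the weights and the per-step Monte Carlo estimates are assumptions made for this sketch, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def training_step(h, sep, p_dists, q_dists, x, y_idx, lam=1.0, pi_g=1.0, pi_l=1.0):
    """One evaluation of L_total (Eq. 17) using the illustrative sketches defined above."""
    M = x.shape[-1]
    z = h(x)
    log_pz = torch.stack([d.log_prob(z[:, m]) for m, d in enumerate(p_dists)], -1).sum(-1)
    nll = -(log_pz + h.log_abs_det_jacobian(x)).mean()          # L_NLL
    ce = F.cross_entropy(sep(z), y_idx)                          # L_CE
    l_glbl = global_ood_loss(sep, q_dists)                       # L_GLBL (Eq. 15)
    l_lcl = local_consistency_loss(sep, z, y_idx)                # L_LCL  (Eq. 16)
    return nll / M + lam * (ce + pi_g * l_glbl + pi_l * l_lcl)
```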
6 RELATED WORK
Noise Contrastive Distributions Noise contrastive methods introduce a distribution that is distinct from the data distribution. The constrastive distribution can be learned and provides a mechanism to supervise the model to discriminate between true and contrastive samples (Gutmann & Hyvärinen, 2010). This forces the model to learn discriminative statistical properties of the data. We utilize this idea to inform the null-space over which we will integrate. We fix the data and contrastive distributions and learn the bijective map between input and latent spaces.
Hybrid Models Normalizing flows are a family of flexible, tractable likelihood estimators based on the application of learnable, bijective functions (Dinh et al., 2017; Grathwohl et al., 2019). These methods have shown strong performance, both as likelihood estimators and as generative processes. Recent work has coupled advances in bijective networks to construct invertible feature extractors that are capped by a more conventional classifier. The combined bijective/classifier structure is called a hybrid model and shows reasonable performance as a likelihood and discriminative model (Chen et al., 2019; Nalisnick et al., 2019b). Unfortunately, the discriminative performance of these models is lower than that of traditional methods. We utilize hybrid models with constraints that enable tractable integration. Other hybrid model architectures are less restricted but prohibit practical integration.
Out of Distribution Detection OOD detection is a challenge for many modern ML algorithms. Recent work has demonstrated that OOD data can achieve higher likelihoods than the training data Hendrycks et al. (2019); Nalisnick et al. (2019a). Several recent methods have been developed to detect OOD examples including Maximum Softmax Probability (MSP) Hendrycks & Gimpel (2017), Outlier Exposure (OE) Hendrycks et al. (2019), Multiscale Score Matching (MSMA) Mahmood et al. (2021), ODIN Liang et al. (2018), and Certified Certain Uncertainty (CCU) Meinke & Hein (2020). Lee et al. (2018) utilize a GAN trained with a classifier to produce examples near but outside the training distribution. These methods are constructed specifically for OOD detection whereas our method is applicable to a variety of problems.
Semi-supervised Learning Semi-supervised learning is a burgeoning research area that learns from a large data corpus when only a small subset are labeled. The problem setting is very pertinent as it is often easy to acquire data examples but can be extremely time consuming to create the corresponding labels. Many modern methods can achieve very strong performance with very few labels Zhang et al. (2018); Chen et al. (2020). However, most of these methods rely on domainspecific augmentation strategies that are difficult to replicate in new data regimes, i.e., for non-image data. Fortunately, methods such as SSVAE Kingma et al. (2014) and VIME Yoon et al. (2020) are domain agnostic. These methods are built exclusively for semi-supervised learning but we only require an additional regularizing penalty to achieve comparable performance on tabular datasets.
7 EXPERIMENTS
All models were constructed using PyTorch (Paszke et al., 2019), trained using PyTorchLightning (Falcon, 2019), utilized bijectors and distributions from Pyro (Bingham et al., 2018), and were trained using Adam (Kingma & Ba, 2015). We assess the integrable model’s performance in a semi-supervised regime and against OOD examples. See Appendix B and C for additional experiments and training details.
7.1 SPIRALS
We construct a synthetic, 2D dataset composed of three intertwined spirals, see Fig. 4a. Suppose that, due to an unknown sampling bias, we only observe two of the three arms (missing the green arm); we therefore construct our model as a two-class problem with a third, reject class.
We sample points in a dense 2-dimensional grid at twice the range of the data in both dimensions and evaluate the probability that each point belongs to the two known classes and the OOD class. Figures 4d and 4e illustrate the probability of belonging to the two known classes and Fig. 4f contains the probability that each point is OOD. Red and white values indicate high and low probabilities, respectively. The model successfully identifies the two in-distribution regions. Unsurprisingly, the model also identifies data outside of the training data range as being OOD and, impressively, identifies the unobserved spiral and the region between spirals as being OOD.
Finally, we sample a square box (orange) around a data point (blue) in the latent space and invert the box to assess the appropriateness of the neighborhood in the input space and plot the result, Figs. 4b and 4c. As expected, the bijector converts the latent Euclidean neighborhood into a semantically meaningful neighborhood. Had we included the local cross-entropy integral in this training process, all points within the orange boundary would have received the same label as the data point.
7.2 OUT OF DISTRIBUTION DETECTION
We test how our models respond to OOD examples when trained with global integrals over the noise-contrastive prior and consistency integrals and compare to methods designed for OOD detection. See Appendix B.3 for standard performance and Sec. 6 for a discussion of the baselines. Table 1 contains the AUPR for the various methods against similar but different datasets (see Appendix B.4 for the AUROC). We juxtapose SVHN vs CIFAR10 and MNIST vs FMNIST plus Extended MNIST (EMNIST) (Cohen et al., 2017). We include a separable, hybrid model without integral regularizations (hybrid) and the same model with regularizations (Int. Reg.) to illustrate the utility of learning over hypervolumes. When MNIST or FMNIST are the in-distribution set, the regularized, integrable network performs on par with the best baseline methods. SVHN and CIFAR10 show reasonable OOD performance but are not as strong as the baselines. This is not surprising since the integrals rely on a reasonable estimate of p(x), which both datasets have failed to achieve (Table 5).
7.3 SEMI-SUPERVISED LEARNING
We apply the local integral to the separable hybrid model and train on several tabular datasets with 10% of the labels, where domain-specific augmentation strategies are unavailable. These models do not utilize the OOD class or global penalty. We apply pseudo-labelling with thresholds of 0.9-0.95. Table 2 compares the perfor-
mance of the integrated hybrid models to several standard (non-image) semi-supervised baselines on flat MNIST (MNIST, flattened to a vector), MiniBooNE Bazarko (2001), and HepMass Baldi et al. (2016). We utilize a CNF Grathwohl et al. (2019) as the bijector and the hinge as the classifier. We see that the integrated model achieves similar-to-better performance than the other methods on all datasets, showcasing the generality of integrated regularizers in different problem spaces.
8 CONCLUSIONS
In this work, we develop architectures that enable tractable integration over the data space. We demonstrate that the ability to supervise regions and not isolated points encourages the model to learn better representations and be less likely to degenerate. We consider several formulations that allow us to regularize the model’s behavior based on consistency and contrast without relying on augmentations of and sparse comparisons between a finite collection of data points. We experiment with the various integrals to obtain promising out-of-distribution detection. Through now tractable integrals, this work enables future methods and applications for learning over continuous regions.
ACKNOWLEDGMENTS
This research was partly funded by grants NSF IIS2133595 and NSF 2113345 and by NIH 1R01AA02687901A1.
A PROOFS
A.1 ADDITIVELY SEPARABLE FUNCTIONS
Theorem A.1 (Additive Independent Integration). Given an additively separable function, $f(v)$, an independent likelihood, $p(v) = \prod_{m=1}^{M} p_m(v_m)$, and domain, $D_v = D_{v_1} \times \ldots \times D_{v_M}$:
$$\mathbb{E}_{v \sim p(v)}\left[f(v)\right] = \sum_{m=1}^{M} \int_{D_{v_m}} f_m(v_m)\, p_m(v_m)\, dv_m \qquad (A1)$$
Proof.
$$\int f_n(v_n)\, p_m(v_m)\, dv_m = \begin{cases} f_n(v_n), & \text{if } n \neq m \\ \int f_n(v_n)\, p_n(v_n)\, dv_n, & \text{else} \end{cases} \qquad (A2)$$
$$\mathbb{E}_{v \sim p(v)}\left[f(v)\right] = \int_{D} \left(\sum_{n=1}^{M} f_n(v_n)\right)\left(\prod_{m=1}^{M} p_m(v_m)\right) dv \qquad (A3)$$
$$= \sum_n \int_{D} f_n(v_n) \prod_{m=1}^{M} p_m(v_m)\, dv \qquad (A4)$$
Substituting Eq. A2 into Eq. A4 reduces the $M$-dimensional integral over independent likelihoods into a sum of 1-dimensional integrals and completes the proof.
A.2 MULTIPLICATIVELY SEPARABLE FUNCTIONS
Theorem A.2 (Multiplicative Independent Integration). Given a multiplicatively separable function, $g(v)$, an independent likelihood, $p(v) = \prod_{m=1}^{M} p_m(v_m)$, and domain, $D_v = D_{v_1} \times \ldots \times D_{v_M}$:
$$\mathbb{E}_{v \sim p(v)}\left[g(v)\right] = \prod_{m=1}^{M} \int_{D_{v_m}} g_m(v_m)\, p_m(v_m)\, dv_m \qquad (A5)$$
Proof. Since each factor $g_n(v_n)$ is matched to the corresponding factor $p_n(v_n)$, the two products can be recombined to isolate like dimensions. Each dimension can then be integrated independently.
$$\mathbb{E}_{v \sim p(v)}\left[g(v)\right] = \int \left(\prod_{n=1}^{M} g_n(v_n)\right)\left(\prod_{m=1}^{M} p_m(v_m)\right) dv$$
$$= \int \cdots \int \left(\prod_{n=1}^{M} g_n(v_n)\, p_n(v_n)\right) dv_1 \cdots dv_M$$
$$= \int g_1(v_1)\, p_1(v_1)\, dv_1 \cdots \int g_M(v_M)\, p_M(v_M)\, dv_M$$
B ADDITIONAL EXPERIMENTS
All image datasets were trained using Glow as the bijector with the number of levels adjusted for the size of the image. MNIST and Fashion MNIST were trained with Adam for 50 epochs with an exponentially decaying learning rate of γ = 0.75. SVHN and CIFAR10 utilized a similar training procedure but with 75 epochs and a slower exponential decay of γ = 0.99. We find that utilizing an average instead of a sum over separable features and soft-thresholding at −1 improves training stability.
B.1 ADVERSARIAL ROBUSTNESS
We test the efficacy of the local cross-entropy integral by utilizing a synthetic dataset from (Tsipras et al., 2019). The data is constructed via
$$y \sim \{-1, +1\}, \quad x_1 = \begin{cases} +y, & \text{w.p. } p \\ -y, & \text{w.p. } 1-p \end{cases}, \quad x_2, \ldots, x_{M+1} \sim \mathcal{N}(\eta y, 1) \qquad (B1)$$
and the adversarial examples are constructed analytically by, essentially, flipping the sign of η. We utilize this synthetic dataset to test adversarial robustness because it provides an upper bound on the adversarial accuracy relative to the standard accuracy, which allows us to quantify our performance against an absolute reference. Additionally, because adversarial examples can be constructed exactly without relying on an optimization procedure, there can be no concerns that this defense relies on obfuscated gradients (Athalye et al., 2018). We refer the interested reader to the original paper for a thorough discussion.
We choose M = 10, p = 0.95, and η = 2/√M. For these choices, a strong classifier will obtain near-perfect standard accuracy with zero adversarial accuracy while a robust classifier will obtain 95% standard and adversarial accuracy. We use a similar architecture to that found in Sec. C.1 except we utilize ten blocks instead of five. We train one model with only the standard cross-entropy and negative log-likelihood losses and a second model with the additional global and local cross-entropy integration losses. We find that the model with only the standard losses achieves reasonably good standard performance and fairly poor adversarial
accuracy. The model with the additional integration losses achieves considerably better adversarial performance, nearing the upper bound on adversarial accuracy and the necessarily lower standard accuracy bound. Table 3 summarizes our results and the ideal performances.
B.2 TOY SEMI-SUPERVISED REGRESSION
To illustrate the utility of global regularizations, we construct a one-dimensional, semi-supervised problem with x ∼ N (0, 4) and y = tanh(x). We keep all values of x but remove y values corresponding to negative x during training. Unlike the standard semi-
supervised problem, the missing labels are not random but are the result of limitations or bias in the data collection process. We train two models that are identical except that the integrated model includes a penalty over E [Ω (ŷ (x))] based on foreknowledge that the average value is zero. Table 4 shows how the models perform over the supervised and unsupervised regions. The inclusion of the integration regularizer allows the model to reason about regions that it has not directly observed and has decreased the error in that region by two orders of magnitude.
B.3 STANDARD PERFORMANCE
We test the standard performance of the integrable models on several standard image classification datasets: MNIST (Deng, 2012), Fashion MNIST (FMNIST) (Xiao et al., 2017), SVHN (Netzer et al., 2011), and CIFAR10 (Krizhevsky et al., 2009). See Appendix C for architectures, distributions, and training parameters. Table 5 contains the validation standard accuracy and bits per dimension for all datasets with all integration regularizers. The upper portion of the table is averaged over 10 runs and includes a reject option in the separable model. The lower portion is a single run without a reject option or any integrated regularizers. We find that the model performs reasonably well for MNIST and
FMNIST but the integrated losses cause a large degradation in performance for SVHN and CIFAR10. The removal of these losses produces similar accuracies but much-improved BPD, consistent with the hybrid network results reported by ResidualFlows (Chen et al., 2019) when Glow is used.
B.4 OUT OF DISTRIBUTION DETECTION AUROC COMPARISONS
The separable model and independent latent dimensions allow us to reason about how the model is making decisions in the latent space. Unfortunately, for most bijectors, it is not possible to carry this interpretability back to the input space. Figure 5 demonstrates the final state of the model trained on Fashion MNIST for several features with respect to the latent distribution, per in-distribution class and juxtaposed with the out of distribution data. Specifically, the top row contains the logit components, $f_{k,m}$, learned by the separable network, color-coded by class; the middle row contains the distribution of each class (colors matched to logits); and the bottom row contains the distribution of the in-distribution data (blue) and OOD data (red). The top row illustrates how the classification network makes its decisions per feature over the distribution presented in the middle row. We see that the logit components map well to the distribution of the data in each feature and provide some intuition for how certain the classifier is over a feature. This demonstrates how this architecture allows for reasonable interpretability from the latent space. Any value above zero (the dotted black line) is considered in-distribution, and the most likely class is the line with the greatest value at that point. The features were chosen to demonstrate the diversity of the learned solution over features. Generally, the data maps reasonably well to the bimodal distribution, though we do occasionally see mode collapse as in Fig. 5e. Fortunately, in these cases the logits tend to be fairly uninformative and only introduce a small bias. Figures 5a through 5c show a common trend where the OOD data has heavy overlap with one of the two clusters but not the other. While we do see some diversity amongst the in-distribution classes that overlap with the OOD data, the “bags” (gray) and “sandals” (cyan) classes overlap most often. Finally, Fig. 5d demonstrates a latent feature where the OOD data falls in the region between the two data components.
B.6 MNIST LEAVE-ONE-OUT
Following (Ahmed & Courville, 2020), we test the robustness of our method against semantic OOD examples by training models on all but one class from MNIST (Deng, 2012) and assessing OOD performance on the held out class. We repeat the process for each class and report the AUROC and AUPR. In all cases, we train using ten different random seeds and report the average performance and standard deviation across seeds that do not degenerate against the in-distribution validation set. Table 8 and Table 7 contain the results of our method using both integrated losses compared to several relevant baselines. Results from Efficient GAN-Based Anomaly Detection (EGBAD) (Zenati et al., 2018) and GANomaly (Akcay et al., 2018) are taken by estimating the results from figures in both works. The Semantic Anomalies (Ahmed & Courville, 2020) results are obtained by executing the official version of the code using the rotation auxiliary task with two different batch sizes (128 and 256). The Semantic Anomalies results in both tables are the best across both batch sizes and detection methods (ODIN (Liang et al., 2018) and MSP (Hendrycks & Gimpel, 2017)) based on the AUPR.
We see that the Semantic Anomalies baseline generally achieves the best AUROC across all digits. The integration often achieves the best AUPR but struggles with certain held-out digits. In particular, the integration performs significantly worse on the “4” and “9” digits. This is a reasonable source of confusion given the similarity and variability of the two classes. These results indicate that the integration penalties are helpful for inducing the model to detect semantic anomalies.
C TRAINING AND ARCHITECTURES
The bijective layer is composed of five blocks where each block contains an affine-coupling layer (Dinh et al., 2017) and an invertible fully connected layer. The data distribution is bimodal over both latent dimensions with means at ±1 and standard deviations of 0.4. The separable network is composed of quadratic functions. We train the model as discussed, using the standard cross-entropy
loss for each observed point, the negative log-likelihood over the input space, and the global cross-entropy integration with respect to the contrastive prior. The learning rate is set to 0.001 with standard weight decay of 1e-4 over 20 epochs using Adam (Kingma & Ba, 2015).
C.2 FASHION MNIST
The bijective layers are composed of the Glow (Kingma & Dhariwal, 2018) architecture with two levels composed of 32 steps and 512 channels. We utilize an open-source Glow implementation1 wrapped in Pyro’s (Bingham et al., 2018) bijective operators. The data distribution is bimodal over all latent dimensions with means at ±1 and standard deviations of 0.5. The noise contrastive distribution is a standard Gaussian. The separable network is composed of hinge functions (see Sec. 5.2). We utilize a batch size of 256, a learning rate of 1.5e-4 over the bijective layers, and a learning rate of 1.0e-3 over the separable layers using Adam (Kingma & Ba, 2015). Both learning rates are exponentially decayed at a rate of 0.75 per epoch. Standard weight decay is applied with a weight of 1e-4. The network is trained for 50 epochs. The adversarial attacks are performed against the model at the epoch with the lowest validation cross-entropy loss.
D APPROXIMATE EXPECTED CROSS-ENTROPY OVER A DOMAIN
We desire the expected cross-entropy, CE(ŷ(x), yc), over a domain Dc where yc is the desired probability vector over classes, ŷ(x) is the model’s prediction of y ∈ RK from x ∈ RM and is composed of a bijective function, h, a separable function, fk(x) = ∑ m fk,m(xm), and the soft-max function, σ, such that ŷ(x) = σ(f(h(x))). If we are attempting to regularize the model to produce a single class label over Dc, then y will correspond to a one-hot vector. In general, however, yc may be a dense vector with elements yk. The expected cross-entropy is expressed as
\mathbb{E}_{D_c}[\mathrm{CE}(\hat{y}(x), y_c)] = -\sum_{k=1}^{K} y_k\, \mathbb{E}_{D_c}[\log(\hat{y}_k(x))] = -\sum_{k=1}^{K} y_k \int_{D_c} \log(\hat{y}_k(x))\, p_{D_c}(x \mid y)\, dx \quad (D1)
where pDc(x|y) is the probability density function over Dc conditioned on y. We can take advantage of the bijective transform to convert this expression into an expectation over the latent space, z, and latent domain, Zc, which allows us to write
\sum_{k=1}^{K} y_k\, \mathbb{E}_{D_c}[\log(\hat{y}_k(x))] = \sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}[\log(\sigma_k(f(z)))] = \sum_{k=1}^{K} y_k \int_{Z_c} \log(\sigma_k(f(z)))\, p_{Z_c}(z \mid y)\, dz \quad (D2)
where σk is the kth element after the soft-max normalization and pZc(z|y) is the independent probability density function in the latent space, e.g., pZc(z|y) = ∏ m pm(zm). We can then expand
1https://github.com/chrischute/glow
the soft-max operator within the expectation to get
\sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}[\log(\sigma_k(f(z)))] = -\mathbb{E}_{Z_c}\Big[\log \sum_{j=1}^{K} \exp(f_j(z))\Big] + \sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}[f_k(z)] \quad (D3)
The combination of the separable function and independent densities allows for an exact simplification of the second term into
\sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}[f_k(z)] = \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \int_{Z_m} f_{k,m}(z_m)\, p_m(z_m)\, dz_m \approx \frac{1}{G} \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \sum_{g=1}^{G} f_{k,m}(\tilde{z}_{m,g}) \quad (D4)
where we have overloaded Z_m to correspond to the domain of the mth latent dimension and have approximated the integral via Monte Carlo integration with \tilde{z}_{m,g} \sim p_m(z_m) and G draws. The first term on the right-hand side of Eq. D3 does not enjoy the same exact simplification. However, it is possible to bound the term via Jensen’s Inequality by moving the logarithm into the sum over j or outside the expectation operator. We consider the latter case where
\mathbb{E}_{Z_c}\Big[\log \sum_{j=1}^{K} \exp(f_j(z))\Big] \le \log \mathbb{E}_{Z_c}\Big[\sum_{j=1}^{K} \exp(f_j(z))\Big]. \quad (D5)
Then, we expand the separable function, exchange the order of the sum over dimensions and the exponent, take advantage of the multiplicative integral simplification (Eq. A5), approximate via Monte Carlo integration, and utilize the log-sum-exp operator for stability:
\log \mathbb{E}_{Z_c}\Big[\sum_{j=1}^{K} \exp(f_j(z))\Big] = \log \mathbb{E}_{Z_c}\Big[\sum_{j=1}^{K} \exp\Big(\sum_{m=1}^{M} f_{j,m}(z_m)\Big)\Big] \quad (D6)
= \log \sum_{j=1}^{K} \int_{Z_c} \prod_{m=1}^{M} \exp(f_{j,m}(z_m)) \prod_{n=1}^{M} p_n(z_n)\, dz
= \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m)\, dz_m
\approx \log \sum_{j=1}^{K} \prod_{m=1}^{M} \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g})) - M\log(G)
= \log \sum_{j=1}^{K} \exp\Big(\sum_{m=1}^{M} \log \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g}))\Big) - M\log(G).
Finally, we can substitute Eq. D4 and Eq. D6 into Eq. D1 to get
\mathbb{E}_{D_c}[\mathrm{CE}(\hat{y}(x), y_c)] \le -\sum_{k=1}^{K} y_k \sum_{m=1}^{M} \int_{Z_m} f_{k,m}(z_m)\, p_m(z_m)\, dz_m + \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m)\, dz_m \quad (D7)
\approx -\frac{1}{G} \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \sum_{g=1}^{G} f_{k,m}(\tilde{z}_{m,g}) + \log \sum_{j=1}^{K} \exp\Big(\sum_{m=1}^{M} \log \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g}))\Big) - M\log(G)
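Before specializing to the one-hot case below, note that both Monte Carlo terms of Eq. D7 can be computed from a single [K, M, G] tensor of evaluations f_{k,m}(z̃_{m,g}). A minimal sketch with toy shapes (the random `f_vals` merely stands in for the separable network's outputs):

```python
import torch

K, M, G = 10, 8, 64                       # classes, latent dims, MC draws (toy sizes)
y = torch.zeros(K); y[3] = 1.0            # target probability vector y_c
f_vals = torch.randn(K, M, G)             # f_{k,m}(z~_{m,g}) on draws z~ ~ p_m(z_m)

# First term of Eq. D7: (1/G) sum_k y_k sum_m sum_g f_{k,m}(z~_{m,g})
term1 = (y * f_vals.sum(dim=(1, 2)) / G).sum()

# Second term: log sum_j exp( sum_m [log sum_g exp(f_{j,m}) - log G] )
inner = torch.logsumexp(f_vals, dim=2) - torch.log(torch.tensor(float(G)))  # [K, M]
term2 = torch.logsumexp(inner.sum(dim=1), dim=0)

ce_bound = -term1 + term2                 # Monte Carlo estimate of the bound on E[CE]
print(ce_bound.item())
```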
In the event that yc is a one-hot vector (at the cth element) this simplifies to
\mathbb{E}_{D_c}[\mathrm{CE}(\hat{y}(x), y_c)] \le -\sum_{m=1}^{M} \int_{Z_m} f_{c,m}(z_m)\, p_m(z_m \mid c)\, dz_m + \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m \mid c)\, dz_m \quad (D8)
where we reintroduced the condition over the class within the probability densities p_m(z_m|c) for emphasis. | 1. What is the focus of the paper regarding input-output pair estimation?
2. What is the main idea proposed by the paper to address the problem?
3. What is the reviewer's concern regarding the connection between (*) and (**)?
4. What is the assumption made by the paper regarding model prediction?
5. How does the reviewer feel about the proposed methodology and its potential restriction on neural network classifiers? | Summary Of The Paper
Review | Summary Of The Paper
For input-output pair (x, y), the paper undertakes the task of estimating (*) E_{x∼p(x)} Ω(ŷ(x)). When x is high dimensional, integration is difficult. Standard ML/DL applies MC to estimate this integral based on an iid sample (x_i, y_i), i = 1, …, n. This paper suggests there is a better way.
The main idea is to recognize that if f is a separable function and h is a bijective function, then (**) E_{x∼p(x)} f(h(x)) is easy to evaluate.
Review
The writing renders the connection between (*) and (**) very obscure. After all, commonly-encountered Ω ∘ ŷ in ML/DL cannot be written as the composition of a separable function and bijective function. In other words, it is difficult to discern the relevance of (**) when the goal is to estimate (*).
Reading Appendix D helped clarify this confusion. The paper operates under the assumption that the model prediction ŷ can be written as the composition of a bijective h, a separable f and the soft-max function σ, i.e., (***) ŷ(x) = σ(f(h(x))). This is a big assumption and should not be relegated to the Appendix. It needs to be front and center in the main text. Even under this assumption the expected cross-entropy in (D1) still cannot be written as (*). The best we can do with this (***) assumption is to derive an upper bound on (*). This is really quite underwhelming. Give me MC error over a potentially very un-tight bound any day.
Besides the exposition, I hold reservations about the proposed methodology. It is not clear that it’s worthwhile to restrict a neural network classifier to be of the form (***) only to arrive at the bound in Equation D8 (which is the same as Equation 13 in the main text). |
ICLR | Title
Practical Integration via Separable Bijective Networks
Abstract
Neural networks have enabled learning over examples that contain thousands of dimensions. However, most of these models are limited to training and evaluating on a finite collection of points and do not consider the hypervolume in which the data resides. Any analysis of the model’s local or global behavior is therefore limited to very expensive or imprecise estimators. We propose to formulate neural networks as a composition of a bijective (flow) network followed by a learnable, separable network. This construction allows for learning (or assessing) over full hypervolumes with precise estimators at tractable computational cost via integration over the input space. We develop the necessary machinery, propose several practical integrals to use during training, and demonstrate their utility.
1 INTRODUCTION
Most supervised learning problems operate by training a model on a finite collection, T , of N (typically paired) examples, (x, y). The model is updated by comparing its predicted output to the expected output and performing some flavor of stochastic gradient descent based on the comparison and various regularizers. The process is repeated by reexamining the elements of T in a random order, possibly with augmentation, until the model parameters converge or an iteration budget is exceeded. This relatively simple procedure has proven to be remarkably effective in a variety of domains and these models have begun to permeate every aspect of modern science and everyday life (He et al., 2016; Silver et al., 2017; Brown et al., 2020).
The deep learning revolution has also resulted in highly effective generative models such as VAEs (Kingma & Welling, 2014), GANs (Goodfellow et al., 2014), and tractable likelihood models (Dinh et al., 2017; Oliva et al., 2018; Grathwohl et al., 2019). These models are largely used to create novel samples of impressive quality. In addition to sampling, likelihood models provide an estimate of the probability density function of the data which can be used for additional, downstream processes.
We augment the training process by constructing neural networks that allow for tractable integration over the input domain. This differs from implicit layers which utilize integration over a parameterized variable (Chen et al., 2018; Grathwohl et al., 2019). Access to fast and differentiable integrals allows us to regularize a model’s average behavior using metrics that may not be available otherwise. Integration over the input space also allows us to supervise how the model behaves in continuous regions that are not directly observed in T and may even be out-of-distribution (OOD). Alternative methods attempt to supervise examples outside of T by performing random perturbations (Gutmann & Hyvärinen, 2010), along the line between known examples (Zhang et al., 2018), or via a generative process (Zenati et al., 2018; Akcay et al., 2018). However, these methods are only capable of observing a small quantity of the total space. By integrating over entire regions, it is possible to observe a large portion of the space based on statistical relevance.
Main Contributions We propose a general architecture that enables tractable integration over the input space, enabling supervision and custom regularization over continuous regions. We demonstrate how to construct this network and how it allows for a reduction in the computation cost required for dense numeric integration from exponential in the number of dimensions to linear. We derive several useful integrated formulations over continuous regions. Finally, we explore the impact of this architecture and regularizers on the standard accuracy and robustness to OOD examples on several standard classification datasets. The code utilized in this paper can be found at https://github.com/lupalab/sep_bij_nets.
Notation Throughout this work, we consider the M-dimensional input features, x = [x_1, ..., x_M] ∈ R^M; the latent features, z ∈ Z ⊆ R^M; and the K-wise classification probability, y. The input features x in the training set, T_x, are drawn from the in-distribution data, D ⊆ R^M. Subscripts represent a particular dimension, e.g., D_m corresponds to the mth dimension of the space. Parenthetical superscripts represent the subspace corresponding to a particular class, e.g., D^{(c)} is the subset of D where the data belongs to class c. The bijective network is given as h such that h : D → Z. Probability distributions over D and Z are given by p with the corresponding subscript. Classification networks are given as f and g. Variables with a “hat,” ŷ, are predictions of the true quantity, y.
2 MOTIVATION
Neural networks are highly effective function approximators between two (typically) continuous spaces: f : X → Y . However, networks are typically trained and evaluated using a finite collection of points without any explicit assessment of the complete hypervolume spanned by the data. This omission is understandable from an implementation perspective as the number of samples required to obtain a reasonable estimate over a volume scales exponentially with data dimensionality. However, human beings often have an understanding of how a process should behave on average. Ideally, we would like to embed this intuition into the model but currently cannot assess the average performance of a trained model outside of the held-out test set. Specifically, we would like to regularize the model by estimating the expected behavior of some metric, Ω, produced by the model over the training data
Ex∼p(x) [Ω(ŷ(x))] = ∫ X Ω(ŷ(x))p(x)dx. (1)
There are many useful choices of Ω over a variety of applications. If it is known what the model output should be on average (ȳ), we can construct Ω to encourage that behavior, e.g., Ω(ŷ) = (ȳ− ŷ)2. Minimizing consistency metrics (Xie et al., 2020) are a common method to improve learning in label-starved problems. These encourage the model to produce similar outputs over neighborhoods around (labeled) examples from Tx where neighborhoods are created by random or domain-specific augmentations. This process can be viewed as an approximation to an integral over the neighborhood,
\mathbb{E}_{\epsilon \sim p(\epsilon)}[L(y, \hat{y}(x + \epsilon))] = \int L(y, \hat{y}(x + \epsilon))\, p(\epsilon)\, d\epsilon \quad (2)
where L is a distance-like metric and ε is the neighborhood perturbation. Equation 2 can be generalized to other neighborhoods. We can recast the standard classification problem as a discrete approximation to an integral. Typically, we minimize the cross-entropy between ŷ(x; θ) and y over the model parameters, θ, for all (x, y) ∈ T which becomes an integral over class-conditioned distributions, p(x|c),
\min_{\theta} -\sum_{(x,y) \in T} \sum_{k} y_k \log(\hat{y}_k(x; \theta)) \;\Rightarrow\; \min_{\theta} -\sum_{k} \int_{D^{(k)}} y_k \log(\hat{y}_k(x; \theta))\, p(x \mid k)\, dx. \quad (3)
Unfortunately, integration in high dimension is difficult. Naive gridded solutions require an exponential number of points with error decreasing as O(G−M ), for G, M -dimensional points. Monte Carlo (MC) methods theoretically have better performance with error that decreases asO(G−1/2). However, the rate of convergence for MC methods depends on the variance of samples (Veach, 1998), which may make for poor approximations in practice. Importance sampling (Bishop, 2006) can improve the performance of Monte Carlo methods to adapt to the regions with the greatest contribution.
We choose to model the data using a separable function. Separable functions have the key benefit of decomposing M -dimensional integrals into a combination of M one-dimensional integrals. Each
of these one-dimensional integrals can then be solved using any number of highly-accurate solvers (e.g., Runge-Kutta (Shampine, 2005), Dormand-Prince (Dormand & Prince, 1980), Tsit (Tsitouras, 2011), etc.) that have error rates better than O(G−4) but are unavailable in high dimensions. The use of separable functions is a component of the VEGAS algorithm (Peter Lepage, 1978) and is utilized in conjunction with adaptive Monte Carlo sampling to approximate high-dimensional integrals.
The use of a separable function over the input space may make estimation of integrals over the model more accessible; however, they impose a strong, inappropriate inductive-bias. The obvious approach of utilizing a standard neural network as a feature extractor and integrating over learned features means that we would no longer have a valid integrator over the input space. We propose to solve this problem by utilizing bijective transforms prior to the separable network. The bijective transform allows us to decouple the data into a latent space where the data can be modeled using a separable network and guarantees equality between integrals in the latent space and integrals in the input space.
3 BACKGROUND
We perform integration over the input space by splitting neural network models down into two key components: (1) a bijective feature extractor, (2) a separable task network, see Fig. 1. For simplicity, we only consider classification tasks in this work. This makes our total network analogous with the common architecture where a classifier, often a linear layer or an MLP, is constructed on a feature extractor, such as a CNN. Unlike the typical process, we must place constraints on both networks so that we can integrate over the input domain. This network breakdown is similar to hybrid networks (Chen et al., 2019; Nalisnick et al., 2019b) except for the separability requirement on the classifier.
3.1 BIJECTIVE NETWORKS
Bijective networks are the key component in flow-based likelihood models. A bijective network, h : D → Z , has a known forward and inverse operation so that data can exactly be reconstructed after the transformation. This allows for exact likelihood estimation via the change of variables formula:
z = h(x; \theta), \qquad x = h^{-1}(z; \theta), \qquad p_X(x) = p_Z(h(x; \theta)) \left|\frac{\partial h}{\partial x}\right| \quad (4)
where pZ is a predefined distribution over the latent space, often a standard Gaussian.
These models are trained via Eq. 4 to maximize the likelihood of the examples in T . Once trained, flow-based likelihood models are commonly used as a generative process where samples are drawn from pZ and are then inverted through h to arrive at an example in D. Instead, we will take advantage of the fact that we can choose pZ and then utilize it in downstream tasks. Normalizing flow bijectors provide rich feature extractors that can represent a distribution of complicated inputs with simplydistributed, independent features, z. Given the independence and expressibility of these learned independent features, we build estimators using separable functions over z, which enables us to integrate over the data’s domain while retaining expressibility.
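For concreteness, a minimal sketch of the change-of-variables likelihood in Eq. 4 with a toy elementwise-affine bijector (any invertible h with a tractable log-Jacobian could be substituted; the parameters below are illustrative):

```python
import torch
from torch.distributions import Normal

# Toy elementwise-affine bijector h(x) = exp(s) * x + t (parameters are illustrative).
s = torch.randn(2) * 0.1
t = torch.randn(2) * 0.1
p_z = Normal(torch.zeros(2), torch.ones(2))    # predefined latent distribution p_Z

def log_px(x):
    z = torch.exp(s) * x + t                   # forward pass z = h(x)
    log_det = s.sum()                          # log |dh/dx| for this elementwise map
    return p_z.log_prob(z).sum(-1) + log_det   # Eq. 4 in log space

x = torch.randn(5, 2)
print(log_px(x))                               # per-example log-likelihood under the flow
```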
The requirement for bijectivity places a strong constraint on network design, eliminating many common choices due to the need to maintain dimension or invert element-wise activations. Even naive convolutional operations become unavailable since they are not generally invertible. Modern advances have demonstrated methods to work around these limitations through the use of clever partitioning and coupling tricks Dinh et al. (2017) or the use of constraints Chen et al. (2019). However, the field of learnable bijective functions is less advanced then its unconstrained counterparts which results in reduced performance on auxiliary tasks. We utilize Glow Kingma & Dhariwal (2018) to process image data and continuous normalizing flows Grathwohl et al. (2019) for tabular data.
3.2 SEPARABLE FUNCTIONS
Separable functions have long been used in mathematics and physics to solve simple partial differential equations such as the homogeneous wave and diffusion equations (Strauss, 2007). We consider two types of separable functions, additive and multiplicative. All proofs can be found in Appendix A.
3.2.1 ADDITIVELY SEPARABLE FUNCTIONS
Definition 3.1. A function, f : CM → CK , is additively separable if it is composed as a summation of element-wise functions operating independently on each dimension of the input:
f(v) = \sum_{m=1}^{M} f_m(v_m; \phi_m) \quad (5)
Theorem 3.1 (Additive Independent Integration). Given an additively separable function, f(v), an independent likelihood, p(v) = \prod_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} \times \ldots \times D_{v_M}:
\mathbb{E}_{v \sim p(v)}[f(v)] = \sum_{m=1}^{M} \int_{D_{v_m}} f_m(v_m)\, p_m(v_m)\, dv_m \quad (6)
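A quick numerical sanity check of Theorem 3.1, assuming toy component functions and standard-normal marginals (the grid and sample sizes are arbitrary):

```python
import torch

# Additively separable f over two independent standard-normal dimensions.
f1 = lambda v: torch.sin(v)
f2 = lambda v: v ** 2
normal_pdf = lambda v: torch.exp(-0.5 * v ** 2) / (2 * torch.pi) ** 0.5

# Left-hand side: Monte Carlo estimate of E[f(v)] over the joint 2-D distribution.
v = torch.randn(500_000, 2)
lhs = (f1(v[:, 0]) + f2(v[:, 1])).mean()

# Right-hand side of Eq. 6: two cheap 1-D quadratures over the marginals.
grid = torch.linspace(-8.0, 8.0, 2001)
rhs = torch.trapz(f1(grid) * normal_pdf(grid), grid) \
    + torch.trapz(f2(grid) * normal_pdf(grid), grid)

print(lhs.item(), rhs.item())  # both approach E[sin(v1)] + E[v2^2] = 0 + 1
```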
3.2.2 MULTIPLICATIVELY SEPARABLE FUNCTIONS
Definition 3.2. A function, g : CM → CK , is multiplicatively separable if it is composed as a product of element-wise functions operating independently on each dimension of the input:
g(v) = \prod_{m=1}^{M} g_m(v_m; \psi_m) \quad (7)
Theorem 3.2 (Multiplicative Independent Integration). Given a multiplicatively separable function, g(v), an independent likelihood, p(v) = \prod_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} \times \ldots \times D_{v_M}:
\mathbb{E}_{v \sim p(v)}[g(v)] = \prod_{m=1}^{M} \int_{D_{v_m}} g_m(v_m)\, p_m(v_m)\, dv_m \quad (8)
4 METHOD
Both forms of separable functions allow us to decompose a single M-dimensional integral into M 1-dimensional integrals. A dense estimation of the integral without taking advantage of a separable function using G points per dimension would require O(G^M) network evaluations. This is completely impractical for modern datasets where M is at least on the order of hundreds and G should be as large as possible. Exploiting separable functions allows us to reduce the complexity to O(GM). However, it is unlikely that we could construct a separable function directly on the input space and achieve reasonable performance. Doing so would essentially require that each input dimension contribute to the final output independently of the others. Instead, we can combine Eq. 4 and Eq. 6 (alternatively, Eq. 8) to perform practical integration over the input domain while still allowing for inter-dependent contributions from each input dimension. To do so, we first let the latent distribution, p_Z, be independently (but not necessarily identically) distributed: p_Z(z) = \prod_m p_m(z_m). This allows us to write the integral over the input space in terms of the latent space and then simplify via Eq. 6:
\int_{D} f(h(x))\, p_Z(h(x)) \left|\frac{\partial h}{\partial x}\right| dx = \mathbb{E}_{x \sim p_X(x)}[f(h(x))] = \mathbb{E}_{z \sim p_Z(z)}[f(z)] = \int_{Z} p_Z(z) f(z)\, dz = \sum_{m=1}^{M} \int_{Z_m} f_m(z_m)\, p_m(z_m)\, dz_m \quad (9)
Each 1-dimensional integral can be easily approximated using a variety of integration approaches. In addition to the requirements placed on the feature extractor and task network, this formulation also
requires that we define the integration domain in the latent space as a Cartesian product of domains over each latent dimension. This may seem like a strong requirement since we apparently lose some level of interpretability; however, there are several advantages to defining the input domain in this learned space. Most notably, we know the data distribution in this space exactly and can tune the domain based on the goal of a particular integral.
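A sketch of Eq. 9 in code: the expectation of a separable head under p_X is computed entirely in the latent space with M one-dimensional quadratures; the component functions and Gaussian marginals below are illustrative assumptions.

```python
import torch

M = 4
# Illustrative separable head: f(z) = sum_m f_m(z_m) with f_m(z) = tanh(z + m).
f_m = [lambda z, m=m: torch.tanh(z + m) for m in range(M)]
# Independent latent marginals p_m(z_m); standard normals for illustration.
pdf = lambda z: torch.exp(-0.5 * z ** 2) / (2 * torch.pi) ** 0.5

grid = torch.linspace(-8.0, 8.0, 2001)
# E_{x~p_X}[f(h(x))] = E_{z~p_Z}[f(z)] = sum_m \int f_m(z_m) p_m(z_m) dz_m   (Eq. 9)
expectation = sum(torch.trapz(f_m[m](grid) * pdf(grid), grid) for m in range(M))
print(expectation.item())  # M one-dimensional quadratures, no M-dimensional grid needed
```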
Figure 2 contains a cartoon example that demonstrates how these methods combine to allow for integration over the complex data space. Both plots show the value of f ; Fig. 2a shows the nonseparable function in the input space and Fig. 2b shows the same data in the (separable) latent space, after the bijective transformation. The colored points and dotted lines are in correspondence between the two spaces and illustrate how the space is warped by the bijector to create a separable, independent latent space. The light gray lines represent the contours of the underlying probability distribution. We define both the integration domain and perform the integration in the latent space. For simplicity, we illustrate the integration using five, equally-spaced points (in both dimensions). The naive procedure would require sampling f at each point in the grid (the blue points). The use of the separable functions allow us to integrate using only the red points and get the same estimate.
5 PRACTICAL APPLICATIONS
In this section we discuss specific choices made when constructing the latent distribution, separable networks, and various integrals. For classification tasks, it is not possible to parameterize the normalized network output of class probabilities, ŷ, as a separable network since each prediction must sum to one. However, it is possible to parameterize each unnormalized logits as a separable network, e.g., ŷ(x) = σ (f(h(x))), where σ is the softmax operator and the output of f are the unnormalized logits. While this limits what quantities can be integrated over exactly, it is still possible to integrate over approximations/bounds without resorting to brute-force methods.
Out-of-Distribution Detection We construct a global integration regularizer to provide resilience to out-of-distribution (OOD) examples. We enable OOD detection by introducing a “reject” option (Hendrickx et al., 2021) into the classification vector, increasing the number of classes by one, where the additional class indicates that the instance is OOD. An example is considered OOD if ŷK+1 exceeds a predefined threshold. We supervise the model over out-of-distribution examples by integrating the cross-entropy over the contrastive probability distribution, q(z) (see Sec. 5.3.1).
Semi-supervised Learning We build a local integral to enhance consistency within latent neighborhoods and utilize it in place of pre-chosen augmentation strategies to inform a label-starved dataset. Specifically, we introduce a loss where all examples near a real, labeled example are regularized to share the real example’s label as in Eq. 2 (see Sec. 5.3.2). We additionally use pseudo-labels Lee (2013) so that consistency is maintained about unlabelled points as the model grows more confident.
5.1 LATENT DISTRIBUTIONS
A critical component of this method is that the likelihoods over the latent space must be independent: pZ(z) = ∏ m pZ(zm). As this is extremely unlikely to occur in the input space, we learn a bijective (flow) transformation from the input space to a latent space where the likelihood can be decomposed into independent components of our choice. Since we would like to use the latent features to discriminate between classes using a separable function, we choose to utilize a (bimodal) Gaussian mixture model. This allows the model to put
different classes into one of two “buckets” per feature and enables classification through multiple features. This results in an exponential number of components (with respect to dimension) in the full latent space, e.g., 2M components. However, we can easily offset this by choosing some dimensions to use a unimodal latent distribution while the remaining dimensions are still bimodal:
p_{Z_m}(z_m) = \begin{cases} 0.5\, \mathcal{N}(-\mu, \sigma_1^2) + 0.5\, \mathcal{N}(\mu, \sigma_1^2), & \text{if } m \le L \\ \mathcal{N}(0, \sigma_2^2), & \text{else} \end{cases} \quad (10)
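A sketch of the per-dimension latent density in Eq. 10 using torch.distributions; µ = 1 and σ1 = 0.5 follow the values discussed just below, while σ2 = 1 is an assumed placeholder.

```python
import torch
from torch.distributions import Categorical, MixtureSameFamily, Normal

mu, sigma1, sigma2 = 1.0, 0.5, 1.0   # mu and sigma1 follow the text; sigma2 = 1 is assumed

# Bimodal marginal for the first L (class-informative) latent dimensions.
bimodal = MixtureSameFamily(
    Categorical(torch.tensor([0.5, 0.5])),
    Normal(torch.tensor([-mu, mu]), torch.tensor([sigma1, sigma1])),
)
# Unimodal marginal for the remaining dimensions.
unimodal = Normal(0.0, sigma2)

z = torch.linspace(-3.0, 3.0, 5)
print(bimodal.log_prob(z).exp(), unimodal.log_prob(z).exp())
```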
In addition to the typical latent data distribution, pZ(z), we explicitly include a contrastive distribution, q(z). This distribution is critical if we desire to regularize our model outside the data distribution, i.e., setting some background/default behavior. When using a bimodal data distribution, we additionally desire that latent vectors are more likely under this distribution once the vector is sufficiently far away from any single in-distribution mode. Therefore, we select this distribution so that it fully encapsulates all the components of the data distribution, e.g., qm(zm) > pZm(zm) for | |zm|−µ| > γ. When the data distribution is unimodal we choose pm = qm so that the feature is uninformative.
We utilize a standard Gaussian for the contrastive distribution and set µ to 1 and experiment with σ1 ≤ 0.5. This choice allows us to formulate OOD examples that exist between the in-distribution data as near zero, and outside the in-distribution data, near the tails. Figure 3 illustrates the data and contrastive distributions for a latent feature for convenience.
5.2 SEPARABLE NETWORKS
We explore three different formulations of separable networks. The first formulation learns a distinct quadratic function, the second learns a symmetric hinge, and the third learns a multi-layer perceptron. These functions have distinct parameterizations for each latent feature and each in-distribution class. The out-of-distribution logit is held fixed at zero: lK+1(x) = 0.
f^{(\mathrm{quad})}_{k,m}(z_m; \alpha_{k,m}, u_{k,m}, \nu_{k,m}) = \alpha_{k,m} - \frac{(z_m - u_{k,m})^2}{2 e^{2\nu_{k,m}}} \quad (11)
f^{(\mathrm{hinge})}_{k,m}(z_m; \alpha_{k,m}, u_{k,m}, \nu_{k,m}) = \alpha_{k,m} - \frac{|z_m - u_{k,m}|}{e^{\nu_{k,m}}} \quad (12)
where k is the logit class index and m is the feature dimension so that the total unnormalized logit, lk(z), is lk(z) = ∑ m fk,m(zm). The separable MLP is constructed by concatenating a learned vector per feature and class to each 1-dimensional feature and constructing a standard MLP that operates on that augmented state. This formulation provides the greatest flexibility as it leverages all the benefits of the universal approximator theorem (Hornik, 1991). Unfortunately, we find the MLP version difficult to train in practice, often resulting in degenerate solutions. Fixing the OOD logit at zero provides a fixed reference point for the other logits. We interpret this as each latent feature voting on how in-distribution the example is on a per-class basis.
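A minimal sketch of the quadratic and hinge components in Eqs. 11–12 as a PyTorch module with the reject logit fixed at zero; this is an illustration of the parameterization, not the released implementation.

```python
import torch
import torch.nn as nn

class SeparableLogits(nn.Module):
    """Per-class, per-dimension components; total logit l_k(z) = sum_m f_{k,m}(z_m)."""
    def __init__(self, n_classes, n_dims, kind="hinge"):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(n_classes, n_dims))
        self.u = nn.Parameter(torch.randn(n_classes, n_dims))
        self.nu = nn.Parameter(torch.zeros(n_classes, n_dims))
        self.kind = kind

    def forward(self, z):                        # z: [batch, n_dims]
        z = z.unsqueeze(1)                       # [batch, 1, n_dims] for broadcasting
        if self.kind == "quad":                  # Eq. 11
            comp = self.alpha - (z - self.u) ** 2 / (2 * torch.exp(2 * self.nu))
        else:                                    # Eq. 12 (hinge)
            comp = self.alpha - (z - self.u).abs() / torch.exp(self.nu)
        logits = comp.sum(dim=-1)                # [batch, n_classes]
        # Fixed reject logit l_{K+1}(z) = 0 appended as the last column.
        return torch.cat([logits, torch.zeros(logits.shape[0], 1)], dim=-1)

print(SeparableLogits(n_classes=10, n_dims=8)(torch.randn(4, 8)).shape)  # [4, 11]
```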
5.3 INTEGRALS
The cross-entropy, CE(ŷ(x), y), between the normalized model output, ŷ(x), and the true labels, y, is commonly used to train classification models. In this section, we explore global and local integrals of the cross-entropy loss used to train most classifiers (see Eq. 3) when ŷ is given as the composition of the softmax function, a separable function and a bijective function, e.g., σ(f(h(x))). We can express this as minθ ∑ x,y∈T CE(ŷ(x), y) owing to the finite number of elements in T . Ideally, we would minimize the model cross-entropy over all possible examples in D. We accomplish this by defining the subspace, D(c) ⊂ D, that has a given label and corresponding conditional distribution, p(x|c)
\mathbb{E}_{x \sim p(x|c)}[\mathrm{CE}(\hat{y}(x), c)] = -\mathbb{E}_{x \sim p(x|c)}[\log(\hat{y}_c(x))] = -\int_{D^{(c)}} \log(\hat{y}_c(x))\, p(x \mid c)\, dx. \quad (13)
It is not possible to represent ŷ(x) as a separable network due to the softmax operation, but it is possible to parameterize the unnormalized logits as a separable network. Substituting this parameterization into Eq. 13 and transforming to the latent space yields a bound for the expected cross-entropy
\mathbb{E}_{x \sim p(x|c)}[\mathrm{CE}(\hat{y}(x), y)] \le -\sum_{m=1}^{M} \int_{Z^{(c)}} f_{c,m}(z_m)\, p_m(z_m \mid c)\, dz_m + \log \sum_{j=1}^{K+1} \prod_{n=1}^{M} \int_{Z^{(c)}} \exp(f_{j,n}(z_n))\, p_n(z_n \mid c)\, dz_n. \quad (14)
See Appendix E for a full derivation. We will utilize this formulation in conjunction with a contrastive prior to supervise OOD examples, Sec. 5.3.1, and with a local prior to enforce consistency, Sec. 5.3.2.
5.3.1 OUT-OF-DISTRIBUTION SUPERVISION
We can utilize the cross-entropy integral in different ways by choosing the label we would like to apply over a domain. For example, we can integrate the cross-entropy over the OOD latent space, U , with the true label fixed to the reject class (c=K+1), and the latent distribution over codes is the contrastive distribution (Sec. 5.1), p(z|c=K+1) = q(z); using Eq. 14 with fK+1,n=0:
L_{\mathrm{GLBL}} = \log \sum_{j=1}^{K} \prod_{n=1}^{M} \int_{U} \exp(f_{j,n}(z_n))\, q_n(z_n)\, dz_n. \quad (15)
In effect, Eq. 15 discourages OOD data from being labeled as any in-distribution label. Since the data and contrastive distributions overlap, the model could degenerate and always make OOD decisions; however, this would be in conflict with the standard cross-entropy loss applied to each example in the training set. So long as L_GLBL is not weighted too strongly, the model achieves equilibrium by making OOD predictions over regions where the contrastive distribution is more likely. In effect, we supervise OOD training without requiring any OOD data.
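A sketch of estimating L_GLBL (Eq. 15) with one-dimensional trapezoid quadrature under a standard-Gaussian contrastive q; the hinge parameters here are random placeholders.

```python
import torch

K, M = 10, 8
alpha, u, nu = torch.zeros(K, M), torch.randn(K, M), torch.zeros(K, M)  # hinge params

grid = torch.linspace(-6.0, 6.0, 2001)                    # shared 1-D grid over U
q = torch.exp(-0.5 * grid ** 2) / (2 * torch.pi) ** 0.5   # contrastive q_n = N(0, 1)

# f_{j,n}(z) evaluated on the grid for every class j and dimension n: [K, M, G]
f = alpha[..., None] - (grid - u[..., None]).abs() / torch.exp(nu[..., None])

# inner[j, n] = log \int exp(f_{j,n}(z)) q_n(z) dz via 1-D trapezoid quadrature
inner = torch.log(torch.trapz(torch.exp(f) * q, grid, dim=-1))

loss_glbl = torch.logsumexp(inner.sum(dim=1), dim=0)      # Eq. 15
print(loss_glbl.item())
```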
5.3.2 LOCAL CONSISTENCY
We may also perform integration over neighborhoods of data points in Tx and utilize that point’s label over the entire region. This integral serves to improve consistency around observations. Adversarial attacks (Goodfellow et al., 2015; Madry et al., 2018) are often constructed as perturbations on real data points and adversarial training finds one such point and includes it in the training set. We can interpret local consistency integrations as a form of average adversarial training. We reformulate the integral to be centered around a particular datum, x0 and its class label, y
L_{\mathrm{LCL}} = -\sum_{k} y_k \int_{V} \log(\sigma_k(f(h(x_0) + v)))\, p_V(v)\, dv \quad (16)
A complication resulting from local integration is how to select the local distribution, pV (v), and the neighborhood, V . There are many reasonable choices depending on the goal of the local integration. For simplicity, we choose each Vm ∈ [−ε, ε] and pm(vm) to be uniformly distributed.
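A sketch of the local consistency term in Eq. 16, approximated with a handful of uniform draws from the latent neighborhood; `h`, `f`, and ε below are placeholders.

```python
import torch
import torch.nn.functional as F

def local_consistency(h, f, x0, y0, eps=0.1, n_draws=8):
    """Monte Carlo estimate of Eq. 16 around a single labeled example (x0, y0)."""
    z0 = h(x0)                                              # latent code of the anchor
    v = (torch.rand(n_draws, z0.shape[-1]) * 2 - 1) * eps   # uniform draws in [-eps, eps]^M
    log_probs = F.log_softmax(f(z0.unsqueeze(0) + v), dim=-1)
    return -log_probs[:, y0].mean()                         # average cross-entropy over V

# Illustrative stand-ins for the bijector h and the separable head f:
h = lambda x: x
f = torch.nn.Linear(8, 11)
print(local_consistency(h, f, torch.randn(8), y0=3))
```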
5.4 LOSS COMPONENTS
The final training loss used to evaluate the model is composed of different combinations of the standard cross-entropy loss over class predictions, L_{CE} = -\sum_k y_k \log(\hat{y}_k), the negative log-likelihood loss over in-distribution data points, L_{NLL} = -\log(p_Z(h(x))) - \log\left|\frac{\partial h}{\partial x}\right|, and the integration losses, L_{GLBL} and L_{LCL}. We always include L_CE and L_NLL. Based on results from previous hybrid networks (Chen et al., 2019; Nalisnick et al., 2019b), we weight the negative log-likelihood by 1/M, which is analogous to using bits per dimension and introduce an additional weight over the cross-entropy, λ, which controls the relative importance of the generative and classification tasks. When included, each integration penalty shares the same weight as L_CE with additional binary weight, π:
L_{\mathrm{total}} = \frac{1}{M} L_{\mathrm{NLL}} + \lambda \left(L_{\mathrm{CE}} + \pi_{\mathrm{GLBL}} L_{\mathrm{GLBL}} + \pi_{\mathrm{LCL}} L_{\mathrm{LCL}}\right). \quad (17)
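Putting Eq. 17 together, assuming the individual loss terms have already been computed as above (the toy values are purely illustrative):

```python
import torch

def total_loss(l_nll, l_ce, l_glbl, l_lcl, M, lam=1.0, use_glbl=True, use_lcl=False):
    """Weighted combination from Eq. 17; the pi_* switches are binary."""
    pi_glbl = 1.0 if use_glbl else 0.0
    pi_lcl = 1.0 if use_lcl else 0.0
    return l_nll / M + lam * (l_ce + pi_glbl * l_glbl + pi_lcl * l_lcl)

# Toy scalar losses purely for illustration:
print(total_loss(torch.tensor(2300.0), torch.tensor(0.7),
                 torch.tensor(0.4), torch.tensor(0.2), M=784))
```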
6 RELATED WORK
Noise Contrastive Distributions Noise contrastive methods introduce a distribution that is distinct from the data distribution. The contrastive distribution can be learned and provides a mechanism to supervise the model to discriminate between true and contrastive samples (Gutmann & Hyvärinen, 2010). This forces the model to learn discriminative statistical properties of the data. We utilize this idea to inform the null-space over which we will integrate. We fix the data and contrastive distributions and learn the bijective map between input and latent spaces.
Hybrid Models Normalizing flows are a family of flexible, tractable likelihood estimators based on the application of learnable, bijective functions Dinh et al. (2017); Grathwohl et al. (2019). These methods have shown strong performance, both as likelihood estimators and as generative processes. Recent work has coupled advances in bijective networks to construct invertible feature extractors that are capped by a more conventional classifier. The combination bijective/classifier structure are called hybrid models and show reasonable performance as likelihood and discriminative models Chen et al. (2019); Nalisnick et al. (2019b). Unfortunately, the discriminative performance of these models is lower than traditional methods. We utilize hybrid models with constraints that enable tractable integration. Other hybrid models architectures are less restricted but prohibits practical integration.
Out of Distribution Detection OOD detection is a challenge for many modern ML algorithms. Recent work has demonstrated that OOD data can achieve higher likelihoods than the training data Hendrycks et al. (2019); Nalisnick et al. (2019a). Several recent methods have been developed to detect OOD examples including Maximum Softmax Probability (MSP) Hendrycks & Gimpel (2017), Outlier Exposure (OE) Hendrycks et al. (2019), Multiscale Score Matching (MSMA) Mahmood et al. (2021), ODIN Liang et al. (2018), and Certified Certain Uncertainty (CCU) Meinke & Hein (2020). Lee et al. (2018) utilize a GAN trained with a classifier to produce examples near but outside the training distribution. These methods are constructed specifically for OOD detection whereas our method is applicable to a variety of problems.
Semi-supervised Learning Semi-supervised learning is a burgeoning research area that learns from a large data corpus when only a small subset are labeled. The problem setting is very pertinent as it is often easy to acquire data examples but can be extremely time consuming to create the corresponding labels. Many modern methods can achieve very strong performance with very few labels Zhang et al. (2018); Chen et al. (2020). However, most of these methods rely on domainspecific augmentation strategies that are difficult to replicate in new data regimes, i.e., for non-image data. Fortunately, methods such as SSVAE Kingma et al. (2014) and VIME Yoon et al. (2020) are domain agnostic. These methods are built exclusively for semi-supervised learning but we only require an additional regularizing penalty to achieve comparable performance on tabular datasets.
7 EXPERIMENTS
All models were constructed using PyTorch (Paszke et al., 2019), trained using PyTorchLightning (Falcon, 2019), utilized bijectors and distributions from Pyro (Bingham et al., 2018), and were trained using Adam (Kingma & Ba, 2015). We assess the integrable model’s performance in a semi-supervised regime and against OOD examples. See Appendix B and C for additional experiments and training details.
7.1 SPIRALS
We construct a synthetic, 2D dataset composed of three intertwined spirals, see Fig. 4a. Suppose that, due to an unknown sampling bias, we only observe two of the three arms (missing the green arm) and constructed our model as a two-class problem with a third, reject class.
We sample points in a dense 2-dimensional grid at twice the range of the data in both dimensions and evaluate the probability that each point belongs to the two known classes and the OOD class. Figures 4d and 4e illustrate the probability of belonging to the two known classes and Fig. 4f contains the probability that each point is OOD. Red and white values indicate high and low probabilities, respectively. The model successfully identifies the two in-distribution regions. Unsurprisingly, the model also identifies data outside of the training data range as being OOD and, impressively, identifies the unobserved spiral and the region between spirals as being OOD.
Finally, we sample a square box (orange) around a data point (blue) in the latent space and invert the box to assess the appropriateness of the neighborhood in the input space and plot the result, Figs. 4b and 4c. As expected, the bijector converts the latent Euclidean neighborhood into a semantically meaningful neighborhood. Had we included the local cross-entropy integral in this training process, all points within the orange boundary would have received the same label as the data point.
7.2 OUT OF DISTRIBUTION DETECTION
We test how our models respond to OOD examples when trained with global integrals over the noise-contrastive prior and consistency integrals and compare to methods designed for OOD detection. See Appendix B.3 for standard performance and Sec. 6 for a discussion of the baselines. Table 1 contains the AUPR for the various methods against similar but different datasets (see Appendix B.4 for the AUROC). We juxtapose SVHN vs CIFAR10 and MNIST vs FMNIST plus Extended MNIST (EMNIST) (Cohen et al., 2017). We include a separable, hybrid model without integral regularizations (hybrid) and the same model with regularizations (Int. Reg.) to illustrate the utility of learning over hypervolumes. When MNIST or FMNIST are the in-distribution set, the regularized, integrable network performs on par with the best baseline methods. SVHN and CIFAR10 show reasonable OOD performance but are not as strong as the baselines. This is not surprising since the integrals rely on a reasonable estimate of p(x), which both datasets have failed to achieve, Table 5.
7.3 SEMI-SUPERVISED LEARNING
We apply the local integral to the separable hybrid model and train on several tabular datasets with 10% of the labels and domain-specific augmentation strategies are unavailable. These models do not utilize the OOD class or global penalty. We apply pseudo-labelling with thresholds of 0.9-0.95. Table 2 compares the perfor-
mance of the integrated hybrid models to several standard (non-image) semi-supervised baselines on flat MNIST (MNIST, flattened to a vector), MiniBooNE Bazarko (2001), and HepMass Baldi et al. (2016). We utilize a CNF Grathwohl et al. (2019) as the bijector and the hinge as the classifier. We see that the integrated model achieves similar-to-better performance than the other methods on all datasets, showcasing the generality of integrated regularizers in different problem spaces.
8 CONCLUSIONS
In this work, we develop architectures that enable tractable integration over the data space. We demonstrate that the ability to supervise regions and not isolated points encourages the model to learn better representations and be less likely to degenerate. We consider several formulations that allow us to regularize the model’s behavior based on consistency and contrast without relying on augmentations of and sparse comparisons between a finite collection of data points. We experiment with the various integrals to obtain promising out-of-distribution detection. Through now tractable integrals, this work enables future methods and applications for learning over continuous regions.
ACKNOWLEDGMENTS
This research was partly funded by grants NSF IIS2133595 and NSF 2113345 and by NIH 1R01AA02687901A1.
A PROOFS
A.1 ADDITIVELY SEPARABLE FUNCTIONS
Theorem A.1 (Additive Independent Integration). Given an additively separable function, f(v), an independent likelihood, p(v) = ∏M m=1 pm(vm), and domain, Dv = Dv1 × ...×DvM :
\mathbb{E}_{v \sim p(v)}[f(v)] = \sum_{m=1}^{M} \int_{D_{v_m}} f_m(v_m)\, p_m(v_m)\, dv_m \quad (A1)
Proof.
\int f_n(v_n)\, p_m(v_m)\, dv_m = \begin{cases} f_n(v_n), & \text{if } n \neq m \\ \int f_n(v_n)\, p_n(v_n)\, dv_n, & \text{else} \end{cases} \quad (A2)
\mathbb{E}_{v \sim p(v)}[f(v)] = \int_{D} \Big(\sum_{n=1}^{M} f_n(v_n)\Big) \Big(\prod_{m=1}^{M} p_m(v_m)\Big) dv \quad (A3)
= \sum_{n} \int_{D} f_n(v_n) \prod_{m=1}^{M} p_m(v_m)\, dv \quad (A4)
Substituting Eq. A2 into Eq. A4 reduces the M-dimensional integral over independent likelihoods into a sum of 1-dimensional integrals and completes the proof.
A.2 MULTIPLICATIVELY SEPARABLE FUNCTIONS
Theorem A.2 (Multiplicative Independent Integration). Given a multiplicatively separable function, g(v), an independent likelihood, p(v) = \prod_{m=1}^{M} p_m(v_m), and domain, D_v = D_{v_1} \times \ldots \times D_{v_M}:
\mathbb{E}_{v \sim p(v)}[g(v)] = \prod_{m=1}^{M} \int_{D_{v_m}} g_m(v_m)\, p_m(v_m)\, dv_m \quad (A5)
Proof. Since each component of g_n is matched to a component in p_m, the two products can be recombined to isolate like-dimensions. Each dimension can then be integrated independently.
\mathbb{E}_{v \sim p(v)}[g(v)] = \int dv \Big(\prod_{n=1}^{M} g_n(v_n)\Big) \Big(\prod_{m=1}^{M} p_m(v_m)\Big)
= \int \cdots \int \Big(\prod_{n=1}^{M} g_n(v_n)\, p_n(v_n)\Big) dv_1 \cdots dv_M
= \int g_1(v_1)\, p_1(v_1)\, dv_1 \cdots \int g_M(v_M)\, p_M(v_M)\, dv_M
B ADDITIONAL EXPERIMENTS
All image datasets were trained using Glow as the bijector with the number of levels adjusted for the size of the image. MNIST and Fashion MNIST were trained with Adam for 50 epochs with an exponentially decaying learning rate of γ = 0.75. SVHN and CIFAR10 utilized a similar training procedure but with 75 epochs and a slower exponential decay of γ = 0.99. We find that utilizing an average instead of a sum over separable features and soft-thresholding at −1 improves training stability.
B.1 ADVERSARIAL ROBUSTNESS
We test the efficacy of the local cross-entropy integral by utilizing a synthetic dataset from (Tsipras et al., 2019). The data is constructed via
y \sim \{-1, +1\}, \qquad x_1 = \begin{cases} +y, & \text{w.p. } p \\ -y, & \text{w.p. } 1 - p \end{cases}, \qquad x_2, \ldots, x_{M+1} \sim \mathcal{N}(\eta y, 1) \quad (B1)
and the adversarial examples are constructed analytically by, essentially, flipping the sign of η. We utilize this synthetic dataset to test adversarial robustness because it provides an upper bound on the adversarial accuracy relative to the standard accuracy which allows us to quantify our performance with respect to an absolute. Additionally, because adversarial examples can be constructed exactly without relying on an optimization procedures, there can be no concerns that this defense relies on obfuscated gradients (Athalye et al., 2018). We refer the interested reader to the original paper for a thorough discussion.
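A sketch of sampling the synthetic dataset in Eq. B1 together with its analytically constructed adversarial counterpart (η sign-flipped); the defaults mirror the constants described in the next paragraph, and reusing the same noise realization for the adversarial copy is a simplifying assumption.

```python
import torch

def sample_tsipras(n, M=10, p=0.95, eta=None):
    eta = 2.0 / M ** 0.5 if eta is None else eta
    y = torch.randint(0, 2, (n,)).float() * 2 - 1               # y ~ {-1, +1}
    flip = (torch.rand(n) < p).float() * 2 - 1                   # +1 w.p. p, -1 otherwise
    x1 = (y * flip).unsqueeze(1)                                 # robust feature x_1
    x_rest = torch.randn(n, M) + eta * y.unsqueeze(1)            # x_2..x_{M+1} ~ N(eta*y, 1)
    x = torch.cat([x1, x_rest], dim=1)
    x_adv = torch.cat([x1, x_rest - 2 * eta * y.unsqueeze(1)], dim=1)  # eta -> -eta
    return x, x_adv, y

x, x_adv, y = sample_tsipras(1024)
print(x.shape, x_adv.shape)  # torch.Size([1024, 11]) torch.Size([1024, 11])
```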
We choose M = 10, p = 0.95, and η = 2/ √ M . For these choices, a strong classifier will obtain near perfect standard accuracy with zero adversarial accuracy while a robust classifier will obtain 95% standard and adversarial accuracy. We use a similar architecture to that found in Sec. C.1 except we utilize ten blocks instead of five. We train one model with only the standard crossentropy and negative log-likelihood losses and a second model with the additional global and local cross-entropy integration loss. We find that the model with only the standard losses achieves reasonably good standard performance and fairly poor adversarial
accuracy. The model with the additional integration losses contains considerably better adversarial performance, nearing the upper and, necessarily lower, standard accuracy bound. Table 3 summarizes our results and the ideal performances.
B.2 TOY SEMI-SUPERVISED REGRESSION
To illustrate the utility of global regularizations, we construct a one-dimensional, semi-supervised problem with x ∼ N (0, 4) and y = tanh(x). We keep all values of x but remove y values corresponding to negative x during training. Unlike the standard semi-
supervised problem, the missing labels are not random but are the result of limitations or bias in the data collection process. We train two models that are identical except that the integrated model includes a penalty over E [Ω (ŷ (x))] based on foreknowledge that the average value is zero. Table 4 shows how the models perform over the supervised and unsupervised regions. The inclusion of the integration regularizer allows the model to reason about regions that it has not directly observed and has decreased the error in that region by two orders of magnitude.
B.3 STANDARD PERFORMANCE
We test the impact of integrable models on OOD performance against several standard image classification datasets: MNIST (Deng, 2012), Fashion MNIST (FMNIST) (Xiao et al., 2017), SVHN (Netzer et al., 2011), and CIFAR10 (Krizhevsky et al., 2009). See Appendix B for architures, distributions, and training parameters. Table 5 contains the validation standard accuracy and bits per dimension for all datasets with all integration regularizers. The upper portion of the table is averaged over 10 runs and contains a reject option in the separable model. The lower portion is a single run without a reject option or any integrated regularizers. We find that the model performs reasonably well for MNIST and
FMNIST but the integrated losses cause a large degradation in performance for SVHN and CIFAR10. The removal of these losses produces similar accuracies but much-improved BPD, consistent with the hybrid network results reported by ResidualFlows (Chen et al., 2019) when Glow is used.
B.4 OUT OF DISTRIBUTION DETECTION AUROC COMPARISONS
The separably model and independent latent dimensions allows us to reason about how the model is making decisions in the latent space. Unfortunately, for most bijectors, it is not possible to carry this interpretability back to the input space. Figure 5 demonstrates the final state of the model trained on Fashion MNIST for several features with respect to the latent distribution, per in-distribution class and juxtaposed with the out of distribution data. Specifically, the top row contains the logit components, fk,m, learned by the separable network, color-coded by class; the middle row contains the distribution of each class (colors matched to logits); and the bottom row contains the distribution of the in-distribution data (blue) and OOD data (red). The top row illustrates how the classification network makes its decisions per feature over the distribution presented in the middle row. We see that the logit components map well to the distribution of the data in each feature and provides some intuition for how certain the classifier is over a feature. This demonstrates how this architecture allows for reasonable interpretibility from the latent space. Any value above zero (the dotted black line) is considered in-distribution and the most likely class is the line with the greatest value at that point. The features were chosen to demonstrate the diversity of the learned solution over features. Generally, the data maps reasonably well to the bimodal distribution, though we do occassionally see mode collapse as in Fig. 5e. Fortunately, in these cases the logits tend to be fairly uninformative and only introduce a small bias. Figures 5a through 5c show a common trend where the OOD data has heavy overlap with one of the two clusters but not the other. While we do see some diversity amongst the in-distribution classes that overlap with the OOD data the “bags” (gray) and “sandals” (cyan) class overlap most often. Finally, Fig. 5d demonstrates a latent feature where the OOD data falls in the region between the two data components.
B.6 MNIST LEAVE-ONE-OUT
Following (Ahmed & Courville, 2020), we test the robustness of our method against semantic OOD examples by training models on all but one class from MNIST (Deng, 2012) and assessing OOD performance on the held out class. We repeat the process for each class and report the AUROC and AUPR. In all cases, we train using ten different random seeds and report the average performance and standard deviation across seeds that do not degenerate against the in-distribution validation set. Table 8 and Table 7 contain the results of our method using both integrated losses compared to several relevant baselines. Results from Efficient GAN-Based Anomaly Detection (EGBAD) (Zenati et al., 2018) and GANomaly (Akcay et al., 2018) are taken by estimating the results from figures in both works. The Semantic Anomalies (Ahmed & Courville, 2020) results are obtained by executing the official version of the code using the rotation auxiliary task with two different batch sizes (128 and 256). The Semantic Anomalies results in both tables are the best across both batch sizes and detection methods (ODIN (Liang et al., 2018) and MSP (Hendrycks & Gimpel, 2017)) based on the AUPR.
We see that the Semantic Anomalies generally achieves the best AUROC across all digits. The integration often achieves the best AUPR but struggles with certain held out digits. In particular, the integration performs significantly worse on the “4” and “9” digits. This is a reasonable confuser due to the similarities and variability of the two classes. These results indicate that the integration penalties are helpful for inducing the model to detect semantic anomalies.
C TRAINING AND ARCHITECTURES
C.1 SPIRALS
The bijective layer is composed of five blocks where each block contains an affine-coupling layer (Dinh et al., 2017) and an invertible fully connected layer. The data distribution is bimodal over both latent dimensions with means at ±1 and standard deviations of 0.4. The separable network is composed of quadratic functions. We train the model as discussed, using the standard cross-entropy loss for each observed point, the negative log-likelihood over the input space, and the global cross-entropy integration with respect to the contrastive prior. The learning rate is set to 0.001 with standard weight decay of 1e-4 over 20 epochs using Adam (Kingma & Ba, 2015).
C.2 FASHION MNIST
The bijective layers are composed of the Glow (Kingma & Dhariwal, 2018) architecture with two levels composed of 32 steps and 512 channels. We utilize an open-source Glow implementation1 wrapped in Pyro’s (Bingham et al., 2018) bijective operators. The data distribution is bimodal over all latent dimensions with means at ±1 and standard deviations of 0.5. The noise contrastive distribution is a standard Gaussian. The separable network is composed of hinge functions (see Sec. 5.2). We utilize a batch size of 256, a learning rate of 1.5e-4 over the bijective layers, and a learning rate of 1.0e-3 over the separable layers using Adam (Kingma & Ba, 2015). Both learning rates are exponentially decayed at a rate of 0.75 per epoch. Standard weight decay is applied with a weight of 1e-4. The network is trained for 50 epochs. The adversarial attacks are performed against the model at the epoch with the lowest validation cross-entropy loss.
D APPROXIMATE EXPECTED CROSS-ENTROPY OVER A DOMAIN
We desire the expected cross-entropy, CE(ŷ(x), yc), over a domain Dc where yc is the desired probability vector over classes, ŷ(x) is the model’s prediction of y ∈ RK from x ∈ RM and is composed of a bijective function, h, a separable function, fk(x) = ∑ m fk,m(xm), and the soft-max function, σ, such that ŷ(x) = σ(f(h(x))). If we are attempting to regularize the model to produce a single class label over Dc, then y will correspond to a one-hot vector. In general, however, yc may be a dense vector with elements yk. The expected cross-entropy is expressed as
\mathbb{E}_{D_c}[\mathrm{CE}(\hat{y}(x), y_c)] = -\sum_{k=1}^{K} y_k\, \mathbb{E}_{D_c}[\log(\hat{y}_k(x))] = -\sum_{k=1}^{K} y_k \int_{D_c} \log(\hat{y}_k(x))\, p_{D_c}(x \mid y)\, dx \quad (D1)
where pDc(x|y) is the probability density function over Dc conditioned on y. We can take advantage of the bijective transform to convert this expression into an expectation over the latent space, z, and latent domain, Zc, which allows us to write
\sum_{k=1}^{K} y_k\, \mathbb{E}_{D_c}[\log(\hat{y}_k(x))] = \sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}[\log(\sigma_k(f(z)))] = \sum_{k=1}^{K} y_k \int_{Z_c} \log(\sigma_k(f(z)))\, p_{Z_c}(z \mid y)\, dz \quad (D2)
where σk is the kth element after the soft-max normalization and pZc(z|y) is the independent probability density function in the latent space, e.g., pZc(z|y) = ∏ m pm(zm). We can then expand
1https://github.com/chrischute/glow
the soft-max operator within the expectation to get
\sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}[\log(\sigma_k(f(z)))] = -\mathbb{E}_{Z_c}\Big[\log \sum_{j=1}^{K} \exp(f_j(z))\Big] + \sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}[f_k(z)] \quad (D3)
The combination of the separable function and independent densities allows for an exact simplification of the second term into
\sum_{k=1}^{K} y_k\, \mathbb{E}_{Z_c}[f_k(z)] = \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \int_{Z_m} f_{k,m}(z_m)\, p_m(z_m)\, dz_m \approx \frac{1}{G} \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \sum_{g=1}^{G} f_{k,m}(\tilde{z}_{m,g}) \quad (D4)
where we have overloaded Z_m to correspond to the domain of the mth latent dimension and have approximated the integral via Monte Carlo integration with \tilde{z}_{m,g} \sim p_m(z_m) and G draws. The first term on the right-hand side of Eq. D3 does not enjoy the same exact simplification. However, it is possible to bound the term via Jensen’s Inequality by moving the logarithm into the sum over j or outside the expectation operator. We consider the latter case where
\mathbb{E}_{Z_c}\Big[\log \sum_{j=1}^{K} \exp(f_j(z))\Big] \le \log \mathbb{E}_{Z_c}\Big[\sum_{j=1}^{K} \exp(f_j(z))\Big]. \quad (D5)
Then, we expand the separable function, exchange the order of the sum over dimensions and the exponent, take advantage of the multiplicative integral simplification (Eq. A5), approximate via Monte Carlo integration, and utilize the log-sum-exp operator for stability:
\log \mathbb{E}_{Z_c}\Big[\sum_{j=1}^{K} \exp(f_j(z))\Big] = \log \mathbb{E}_{Z_c}\Big[\sum_{j=1}^{K} \exp\Big(\sum_{m=1}^{M} f_{j,m}(z_m)\Big)\Big] \quad (D6)
= \log \sum_{j=1}^{K} \int_{Z_c} \prod_{m=1}^{M} \exp(f_{j,m}(z_m)) \prod_{n=1}^{M} p_n(z_n)\, dz
= \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m)\, dz_m
\approx \log \sum_{j=1}^{K} \prod_{m=1}^{M} \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g})) - M\log(G)
= \log \sum_{j=1}^{K} \exp\Big(\sum_{m=1}^{M} \log \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g}))\Big) - M\log(G).
Finally, we can substitute Eq. D4 and Eq. D6 into Eq. D1 to get
\mathbb{E}_{D_c}[\mathrm{CE}(\hat{y}(x), y_c)] \le -\sum_{k=1}^{K} y_k \sum_{m=1}^{M} \int_{Z_m} f_{k,m}(z_m)\, p_m(z_m)\, dz_m + \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m)\, dz_m \quad (D7)
\approx -\frac{1}{G} \sum_{k=1}^{K} y_k \sum_{m=1}^{M} \sum_{g=1}^{G} f_{k,m}(\tilde{z}_{m,g}) + \log \sum_{j=1}^{K} \exp\Big(\sum_{m=1}^{M} \log \sum_{g=1}^{G} \exp(f_{j,m}(\tilde{z}_{m,g}))\Big) - M\log(G)
In the event that yc is a one-hot vector (at the cth element) this simplifies to
\mathbb{E}_{D_c}[\mathrm{CE}(\hat{y}(x), y_c)] \le -\sum_{m=1}^{M} \int_{Z_m} f_{c,m}(z_m)\, p_m(z_m \mid c)\, dz_m + \log \sum_{j=1}^{K} \prod_{m=1}^{M} \int_{Z_m} \exp(f_{j,m}(z_m))\, p_m(z_m \mid c)\, dz_m \quad (D8)
where we reintroduced the condition over the class within the probability densities p_m(z_m|c) for emphasis. | 1. What is the focus and contribution of the paper regarding computational tractability?
2. What are the strengths of the proposed approach, particularly in terms of novelty and principled strategy?
3. What are the weaknesses of the paper, especially regarding its claims and experimental results?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. Are there any concerns or suggestions regarding the scope of application and versatility of the proposed method? | Summary Of The Paper
Review | Summary Of The Paper
In their submission, the authors proposed a computationally tractable way to perform integration over a high-dimensional region that the data samples reside. This integration technique is applied to OOD detection tasks.
Review
The paper is clearly written and well-motivated. I think that integration over a continuous region -- instead of the more heuristic approach of random perturbation or augmentation -- is a novel and more principled strategy that enhances the robustness of models. For the purpose of tractable integration, using neural networks to parametrize separable functions is a natural approach.
My main concern, however, is the scope of application of this work. Based on Section 1-6 of the paper, my impression is that the authors aim to keep the method general-purpose. Indeed, when reviewing relevant work on OOD in Section 6, the authors mentioned that "These methods are constructed specifically for OOD detection whereas our method could be applied to a variety of problems based on the continuous regularizers chosen." With this impression in mind, I am slightly disappointed to find that all experiments in Section 7 are related to OOD detection. What is more, the proposed method is (slightly) worse than contemporary OOD detection baselines. For this reason, I find it difficult to assess the significance of the contribution of this work purely based on its performance on OOD detection. To showcase the claimed versatility of the proposed method, I suggest the authors include tasks other than OOD detection, or perhaps explain why only OOD detection is pertinent to the proposed method. |
ICLR | Title
Dissecting Local Properties of Adversarial Examples
Abstract
Adversarial examples have attracted significant attention over the years, yet a sufficient understanding is still lacking, especially when analyzing their performance in combination with adversarial training. In this paper, we revisit some properties of adversarial examples from both frequency and spatial perspectives: 1) the special high-frequency components of adversarial examples tend to mislead naturally-trained models while having little impact on adversarially-trained ones, and 2) adversarial examples show disorderly perturbations on naturally-trained models and locally-consistent (image shape related) perturbations on adversarially-trained ones. Motivated by these, we analyze the fragile tendency of models with the generated adversarial perturbations, and propose a connection between model vulnerability and local intermediate response. That is, a smaller local intermediate response comes along with better model adversarial robustness. To be specific, we demonstrate that: 1) DNNs are naturally fragile at least for large enough local response differences between adversarial/natural examples, and 2) smoother adversarially-trained models can alleviate local response differences with enhanced robustness.
1 INTRODUCTION
Despite deep neural networks (DNNs) perform well in many fields (He et al., 2016; Devlin et al., 2019), their counter-intuitive vulnerability attracts increasing attention, both for safety-critical applications (Sharif et al., 2016) and the black-box mechanism of DNNs (Fazlyab et al., 2019). DNNs have been found vulnerable to adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015), where small perturbations on the input can easily change the predictions of a well-trained DNN with high confidence. In computer vision, adversarial examples exhibit their destructiveness both in the digital world and the physical world (Kurakin et al., 2017).
Since then, how to alleviate the vulnerability of DNNs so as to narrow the performance gap between adversarial/natural examples has become another key issue. Existing methods including defensive distillation (Papernot et al., 2016) and pixel denoising (Liao et al., 2018) have shown their limitations due to follow-up attack strategies (Carlini & Wagner, 2017) or gradient masking (Athalye et al., 2018). Amongst them, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) and its variants (Zhang et al., 2019; Wang et al., 2020b) show reliable robustness and outperform the alternatives. Moreover, as a data augmentation method, adversarial training currently seems to rely on additional data (Schmidt et al., 2018; Rebuffi et al., 2021) to further improve robustness, while being sensitive to some basic model hyper-parameters, e.g., weight decay (Pang et al., 2021). Apart from these, the effect of simple early stopping (Rice et al., 2020) even exceeds that of some promotion methods according to recent benchmarks (Croce & Hein, 2020; Chen & Gu, 2020). These studies raise our curiosity to further explore the relationship between adversarial examples and adversarial training, hoping to provide some new understanding.
Recalling that high-frequency components can be potentially linked to adversarial examples (Wang et al., 2020a; Yin et al., 2019; Harder et al., 2021), however, few explorations discuss the relationship between high-frequency components and the destructiveness of adversarial examples. In this paper, we first demonstrate that high-frequency components of adversarial examples tend to mislead the standard DNNs, yet little impact on the adversarially robust models. We further show that adversarial examples statistically have more high-frequency components than natural ones, indicating relatively drastic local changes among pixels of adversarial examples. Since adversarial examples
appear more semantically meaningful on robust models (Tsipras et al., 2019), we further notice that adversarial examples show locally-consistent perturbations related to image shapes on adversarially-trained models, in contrast to the disorderly perturbations on standard models. Both explorations, in the frequency and the spatial domain, emphasize local properties of adversarial examples; motivated by the local receptive field of convolution kernels, we propose a locally intermediate response perspective to rethink the vulnerability of DNNs. Different from the existing global activation perspective (Bai et al., 2021; Xu et al., 2019), our local perspective reflects the joint effect of local features and the intermediate layers of the model. Based on this local perspective, we emphasize that large enough local response differences make it difficult for the network to treat an image and its potentially adversarial examples as one category, and demonstrate that DNN models are naturally fragile at least partly because of this. Motivated by the observation that adversarially-trained models tend to have ‘smooth’ kernels (Wang et al., 2020a), we simply use smoother kernels to alleviate local response differences on adversarially-trained models, which in turn affects model robustness and reduces robust overfitting (Rice et al., 2020). To a certain extent, this explains why weight decay effectively affects model robustness (Pang et al., 2021).
Our main contributions are summarized as follows:
• We first reveal some properties of adversarial examples in the frequency and spatial domain: 1) the high-frequency components of adversarial examples tend to mislead naturally-trained DNNs, yet have little impact on adversarially-trained models, and 2) adversarial examples have locally-consistent perturbations on adversarially-trained models, compared with disorderly local perturbations on naturally-trained models.
• Then we introduce local response and emphasize its importance in the model adversarial robustness. That is, naturally-trained DNNs are often fragile, at least for non-ignorable local response differences through the same layer between potentially adversarial examples and natural ones. In contrast, adversarially-trained models effectively alleviate the local response differences. And the smoother adversarially-trained models show better adversarial robustness as they can reduce local response differences.
• Finally we empirically study local response with generated adversarial examples. We further show that, compared with failed attacks, adversarial examples (successful attacks) statistically show larger local response differences with natural examples. Moreover, compared with adversarial examples generated by the model itself, those transferred by other models show markedly smaller local response differences.
2 RELATED WORK
Understandings of model vulnerability. Since the discovery of adversarial examples (Szegedy et al., 2014), a number of understandings of model vulnerability have been developed. For instance, the linear property of DNNs (Goodfellow et al., 2015), submanifolds (Tanay & Griffin, 2016) and the geometry of the manifold (Gilmer et al., 2018) were considered from the high-dimensional perspective; the computation of Lipschitz constants (Szegedy et al., 2014; Fazlyab et al., 2019) and lower/upper bounds (Fawzi et al., 2018; Weng et al., 2018) were considered from the definition of model robustness; non-robust features (Ilyas et al., 2019), high-frequency components (Wang et al., 2020a; Yin et al., 2019), high-rank features (Jere et al., 2020) and high-order interactions among pixels (Ren et al., 2021) explored adversarial examples from different perspectives on images, which imply our local perspective; feature denoising (Xie et al., 2019), robust pruning (Madaan & Ju Hwang., 2020) and activation suppressing (Bai et al., 2021) focused on global intermediate activations of models. On the other hand, taking adversarial training into consideration, Tsipras et al. (2019), Schmidt et al. (2018) and Zhang et al. (2019) explored the trade-off between robustness and accuracy; Wang et al. (2020a) found that adversarially-trained models tend to show smooth kernels; Tsipras et al. (2019) and Zhang & Zhu (2018) argued that adversarially robust models learn more shape-biased representations.
Different from these studies, we characterize the adversarial examples from both frequency and spatial domain to emphasize local properties of adversarial examples, and propose a locally intermediate response to rethink the vulnerability of DNNs.
Adversarial training. Adversarial training can be seen as a min-max optimization problem:
$$\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \; \max_{x'_i \in \mathcal{B}(x_i)} L\big(f_{\theta}(x'_i),\, y_i\big)$$
where $f$ denotes a DNN model with parameters $\theta$, and $(x_i, y_i)$ denotes a pair of a natural example $x_i$ and its ground-truth label $y_i$. Given a classification loss $L$, the inner maximization problem can be regarded as searching for suitable perturbations within the boundary $\mathcal{B}$ that maximize the loss, while the outer minimization problem optimizes the model parameters on the adversarial examples $\{x'_i\}_{i=1}^{n}$ generated by the inner maximization.
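To make the optimization concrete, the sketch below shows one training step of this min-max scheme in PyTorch: the inner maximization is approximated by an iterative attack (the `pgd_attack` helper sketched in the next section), and the outer minimization is an SGD step on the resulting adversarial examples. All names are illustrative rather than the authors' code.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, attack_fn):
    # Inner maximization: search for x' in B(x) that (approximately) maximizes the loss.
    model.eval()                      # avoid polluting BatchNorm statistics while attacking
    x_adv = attack_fn(model, x, y)
    # Outer minimization: update theta on the generated adversarial examples.
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```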
3 DIFFERENT PERTURBATIONS AFFECT THE VULNERABILITY OF THE MODEL
In this section, we first investigate adversarial examples in the frequency and spatial domain, and show the connections between models and their adversarial examples. Note that our threat model is white-box; since an adversarial example is defined only by its misleading prediction under the legal threat model, our findings suggest that the properties of the sampled examples can broadly reflect the fragile tendency of the model.
Setup. We generate $\ell_\infty$-bounded adversarial examples by PGD-10 (maximum perturbation 8/255 and step size 2/255) with random start for the robust model (Madry et al., 2018). Specifically, we use ResNet-18 (He et al., 2016) as the backbone to train the standard and adversarially-trained models for 100 epochs on CIFAR-10 (Krizhevsky, 2009). Following Pang et al. (2021), we use the SGD optimizer with momentum 0.9, weight decay 5 × 10−4 and initial learning rate 0.1, with a three-stage schedule in which the learning rate is divided by 10 at epochs 50 and 75. The robustness of both models is evaluated by PGD-20 (step size 1/255).
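A minimal PGD sketch matching the stated configuration ($\ell_\infty$ ball of radius 8/255, step size 2/255, 10 steps, random start; evaluation uses 20 steps with step size 1/255) might look as follows. This is an illustrative re-implementation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, random_start=True):
    x_adv = x.clone().detach()
    if random_start:
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the l_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid image
    return x_adv.detach()

# PGD-20 evaluation would call: pgd_attack(model, x, y, alpha=1/255, steps=20)
```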
3.1 THE DESTRUCTIVENESS OF HIGH-FREQUENCY COMPONENTS FROM ADVERSARIES
Inspired by Wang et al. (2020a), we are naturally curious whether adversarial examples cause considerable damage to the model mainly because of their high-frequency components. To answer this question, Figure 1 illustrates the trend of model performance and robustness on the test set as the high-frequency components are increased (a larger filtering scale means that more high-frequency components are added to the filtered images). Figure 1(a) shows that, for standard models, the high-frequency components of natural examples promote classification and the accuracy reaches the model performance (green line); on the contrary, the accuracy on the filtered adversarial examples first rises to a maximum of 47.5% and then drops rapidly to 0.0% (red line). Obviously, in the low-frequency range, the performance on natural and adversarial examples is quite close, yet more high-frequency components widen the difference. That is, the special high-frequency components caused by adversarial perturbations exhibit a clear destructive effect on standard models, and simply filtering them out can effectively alleviate the destructiveness of adversaries even on standard models.
However, for robust models, we show that the prediction performance finally reaches robustness without a rapid drop in Figure 1(b). But surprisingly, we find that the performance of filtered adversarial examples in some range exceeds the final robustness 47.5% (red line), reaching a maximum of 51.2%. That is, although these high-frequency components do not exhibit a clear destructive effect, simply filtering out them has a positive impact on alleviating robust overfitting (Rice et al., 2020).
We then swap their high-frequency components between both examples controlled by a frequency threshold in Figure 1(c)-1(d). For merged natural examples with high-frequency components from adversaries, the increase of the frequency threshold controls the accuracy increasing from the model robustness (red line) to the model performance (green line), the opposite occurs on merged adversarial examples. These clearly illustrate the boost effect of the high-frequency components from natural examples and the destructive effect of the high-frequency components from adversarial examples.
To further illustrate, we find that, statistically, the main difference in the frequency domain between natural and adversarial examples is concentrated in the high-frequency region, as shown in Figure 2. We visualize the logarithmic amplitude spectrum of both kinds of examples. Figures 2(a) and 2(b) show that, compared with natural examples’, the high-frequency components of the adversaries are hard to ignore. Figure 2(c) further emphasizes that adversarial examples show markedly more high-frequency components, indicating relatively drastic local changes among pixels. This statistical difference explains the high detection rate of using the Magnitude Fourier Spectrum (Harder et al., 2021) to detect adversarial examples. Furthermore, Figures 2(d) and 2(e) show that the high-frequency components of adversarial examples generated by robust models are fewer than those from standard models, yet still more than natural examples’. Besides, the analysis of filtering out low-frequency components in Figure 6 (Appendix A) also supports this statement. That is, compared with natural examples’, the special high-frequency components of adversarial examples show serious misleading effects on standard models, yet are not enough to be fully responsible for the destructiveness to robust models.
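For reference, the frequency-domain filtering behind this kind of analysis can be sketched as follows: the image is transformed with a 2D FFT, a radial mask around the centred spectrum keeps either the low or the high frequencies, and the result is transformed back. The radial mask is one plausible choice of "filtering scale"; the paper does not specify its exact filter, so treat the details as assumptions.

```python
import torch

def frequency_filter(images, radius, keep_low=True):
    """Keep frequencies inside (low-pass) or outside (high-pass) a radius of the centred spectrum."""
    n, c, h, w = images.shape
    spec = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dist = (((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float()).sqrt()
    mask = (dist <= radius) if keep_low else (dist > radius)
    spec = spec * mask.to(spec.device)
    out = torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
    return out.clamp(0.0, 1.0)
```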
3.2 DIFFERENT PERTURBATIONS AND THE FRAGILE TENDENCY OF THE MODEL
To answer the different effects of high-frequency components, we directly explore the adversarial examples themselves, e.g., in the spatial domain. We first get well-trained models as above and visualize adversarial perturbations in Figure 3, including the overall perturbations scaled to [0,255], the average perturbations of channels, and perturbations of three channels.
As shown in Figure 3 (right), the perturbations generated by the standard models tend to be locally inconsistent and disordered. This differs from Figure 3 (left), where adversarial examples show locally-consistent perturbations related to image shapes on the adversarially-trained models; that is, perturbations tend to be locally co-increasing or co-decreasing on each channel. In more detail, the perturbation of each pixel on a single channel tends to reach the perturbation bound, i.e., +8/255 (the reddest) or -8/255 (the bluest), which is counter-intuitive under the iterative PGD-20 attack and naturally associated with the one-step FGSM attack in Figure 7 (Appendix B). In fact, both attacks show similar perturbations and similar model robustness1, while the former produces more detailed perturbations. Besides, compared with failed attacks in Figure 8 (Appendix B), adversarial examples of successful attacks in Figure 3 show more misleading local perturbations; e.g., the perturbations in the first row look more like a bird (bird wings are added), and those in the third row like a car.
Perturbations in the frequency vs. spatial domain. Similar to the different destructive effects of high-frequency components, the perturbations in the spatial domain exhibit a more intuitive difference related to the models. Since more high-frequency components indicate relatively drastic local changes in an image, we attribute the fact that adversarial examples generated by adversarially-trained models show fewer high-frequency components mainly to their locally-consistent perturbations. These perturbations with smooth local changes imply that simply filtering out high-frequency components brings little benefit on robust models, while locally-disordered perturbations imply the effectiveness of filtering out high-frequency components on standard models.
Perturbations vs. fragile trend of models. For a given model, optimization-based attacks attempt to search for special perturbations to maximize the classification loss of adversarial examples. We note that for adversarially-trained models with smooth kernels, the perturbations that tend to maximize loss exhibit a locally-consistent tendency, and for standard models with non-smooth kernels, the perturbations tend to be locally-disordered to maximize loss. This implies a potential connection between the fragile trend of models and their potentially adversarial examples.
4 LOCAL RESPONSES: FURTHER IMPACT OF PERTURBATIONS ON MODELS
In this section, motivated by local properties of adversarial examples and the local receptive field of convolution kernels, we first introduce a locally intermediate response perspective to rethink the vulnerability of the model, and then empirically show the relationship between local responses and the vulnerability of the model.
Different from the existing idea of searching for adversarial examples, we consider under what circumstances potential examples, which refer to all legal variants within the boundary $\mathcal{B}$ regardless of whether they are destructive or not, show their destructive effects on a well-trained model; that is, why some macro-similar potential examples produce prediction results far away from those of their natural examples. Inspired by the local properties of adversarial examples and kernels, we naturally consider whether the destructiveness of potential examples can be viewed through local responses, which reflect the combined effect of the local features and the model properties.
Note that, under ideal circumstances, given a well-trained model, if the local responses of any examples at a certain layer are completely consistent, then the final predictions of these examples are exactly the same. To relax this condition in practice, we hypothesize as follows (referred to as Assumption 1): if macro-similar features passing through the same layer exhibit a sufficiently small difference in local responses, with sufficiently small differences in the subsequent layers, then the final responses of the network are relatively close; otherwise, sufficiently large local differences make it difficult for the network to treat them as the same category. To illustrate this, we take the convolutional layer as an example.
4.1 LOCAL RESPONSES
Macroscopically, we denote by $f$ a DNN model for classification, by $f^l$ the $l$-th layer of activation feature maps, and by $f^0$ the input features. A macro-similar image and its potential example pass through the $l$-th layer to yield $f^l$ and $\hat{f}^l$ respectively. We also denote the $(l{+}1)$-th weight
1The adversarially-trained model (the best checkpoint) reaches 51.9% robustness on the PGD-20 attack and 57.4% robustness on the FGSM attack.
component $M^{l+1} \in \mathbb{R}^{H \times W \times K}$, where $H$, $W$, $K$ represent the height, width and number of kernels respectively. From the locally intermediate response perspective, we first take one of the convolution kernels $m^{l+1} \in \mathbb{R}^{H \times W}$, and then capture the $l$-th layer of local features centered at position $(i,j)$, $x^l_{i,j}$ and $\hat{x}^l_{i,j}$, corresponding to its local receptive field. Formally, the difference of local responses is
$$\Delta^{l+1}_{i,j} := \hat{x}^l_{i,j} \otimes m^{l+1} - x^l_{i,j} \otimes m^{l+1} = xd^l \otimes m^{l+1}$$
where $\Delta^{l+1}_{i,j}$ denotes the $(i,j)$-th response difference of local features through the same kernel $m^{l+1}$, and $xd^l := \hat{x}^l_{i,j} - x^l_{i,j}$ denotes the difference of local feature maps between the potential example and the natural example. The second equality follows from the linearity of the convolution operation.
Ideally, if the differences are all equal to zero at every position in the $(l{+}1)$-th layer, that is, the intermediate layer has the same utility for both examples, then the potential example after the $(l{+}1)$-th layer shows exactly the same responses as the natural example's. However, in most cases the local differences can hardly be exactly zero everywhere; then, based on Assumption 1, our aim is to make the absolute difference of local responses $|\Delta^{l+1}_{i,j}|$ as small as possible so that the model's cognition of the potential example approaches that of the natural example.
The difference of local responses $\Delta^{l+1}_{i,j}$, composed of $xd^l$ and $m^{l+1}$, can be further expressed as:
$$xd^l \otimes m^{l+1} = \sum_{i=1}^{H} \sum_{j=1}^{W} c^l_{ij}\, m^{l+1}_{ij}$$
where $c^l_{ij}$ and $m^{l+1}_{ij}$ represent the $(i,j)$-th elements of $xd^l$ and $m^{l+1}$ respectively; thus the absolute local difference $|\Delta^{l+1}_{i,j}|$ is affected jointly by the local features and the kernel parameters. What we care about is under what circumstances large enough response differences are likely to be produced. Though the specific impact of $c^l_{ij}$ and $m^{l+1}_{ij}$ on $\Delta^{l+1}_{i,j}$ is quite complicated, we consider the statistical situation, as numerous convolution kernels and various local features are combined to affect the response differences in real networks. Statistically, a larger amplitude of $m^{l+1}$ with fixed $xd^l$, or a larger amplitude of $xd^l$ with fixed $m^{l+1}$, tends to produce a larger $|\Delta^{l+1}_{i,j}|$. Besides, if a kernel $m^{l+1}$ is relatively non-smooth, then non-smooth local features $xd^l$, rather than smooth local features, are more likely to produce large enough local response differences.
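As a concrete illustration of the quantity above, the following sketch computes $|\Delta^{l+1}_{i,j}|$ at every spatial position by convolving the difference of the $l$-th layer feature maps with the weights of the next convolutional layer, using the linearity of convolution. Names and the padding choice are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def local_response_difference(feat_nat, feat_adv, next_conv_weight):
    # feat_nat, feat_adv: (N, C, H, W) activations of the same layer for natural / potential examples
    # next_conv_weight:   (C_out, C, kH, kW) weights of the (l+1)-th convolutional layer
    xd = feat_adv - feat_nat                                   # xd^l
    delta = F.conv2d(xd, next_conv_weight, padding=next_conv_weight.shape[-1] // 2)
    return delta.abs()                                         # |Delta^{l+1}_{i,j}| per position and kernel
```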
4.2 FURTHER IMPACT OF LOCAL PERTURBATIONS ON MODELS
Recalling that local properties of adversarial examples in Section 3, we further explore how different local perturbations affect the models from the locally intermediate response perspective. To illustrate the effect of perturbations, we assume two convolution kernels with the same mean but different variances, one is disordered, and the other is smoother.
Local responses vs. locally-disordered perturbations. Since adversarial perturbations generated by standard models tend to be locally-disordered, we discuss how these perturbations affect the local responses. Locally-disordered perturbations indicate relatively drastic local changes among the pixels of a potential example; in other words, the local input difference $xd^0$ between the potential example and the natural example (also referred to as the local perturbation) shows a large variance. These perturbations are convolved with numerous kernels. As shown in Figure 4, for convolution kernels with the same mean but greater variance, locally-disordered perturbations are more likely to yield large enough absolute response differences $|\Delta^{1}_{i,j}|$ in the first layer, since non-smooth kernels tend to emphasize the relationship between different pixels. On the other hand, for smoother kernels with relatively small variance, locally-disordered perturbations tend to produce smaller absolute response differences, since smoother kernels tend to focus on the average situation within their local receptive fields. That is, locally-disordered perturbations passing through different types of convolution kernels tend to produce different response differences, which can be accumulated in the subsequent layers.
Local responses vs. locally-consistent perturbations. Since adversarial perturbations generated by adversarially robust models tend to be locally-consistent, we reason that such perturbations are more destructive to smoother kernels than locally-disordered perturbations, as compared in Figure 4. Locally-consistent perturbations $xd^0$ reaching the perturbation bound tend to produce larger absolute response differences in the first layer since they have a larger absolute average, whereas locally-disordered perturbations show smaller response differences due to their smaller absolute average.
Both different local perturbations have different impacts on the local response differences of potential examples and natural examples, implying the fragile trend of models and the transferability of adversarial examples.
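A toy numerical check of this argument is given below: a locally-disordered (sign-alternating) and a locally-consistent (all pixels at the bound) 3×3 perturbation are convolved with a non-smooth and a smooth kernel of the same mean. The specific numbers are made up purely to show the tendency, not to reproduce any experiment.

```python
import torch
import torch.nn.functional as F

eps = 8 / 255
disordered = eps * torch.tensor([[1., -1., 1.], [-1., 1., -1.], [1., -1., 1.]])  # drastic local changes
consistent = eps * torch.ones(3, 3)                                              # all at the bound

nonsmooth_kernel = torch.tensor([[0.9, -0.7, 0.8], [-0.6, 1.0, -0.8], [0.7, -0.9, 0.6]])
smooth_kernel = torch.full((3, 3), nonsmooth_kernel.mean().item())  # same mean, zero variance

for kname, k in [("non-smooth", nonsmooth_kernel), ("smooth", smooth_kernel)]:
    for pname, p in [("disordered", disordered), ("consistent", consistent)]:
        resp = F.conv2d(p.view(1, 1, 3, 3), k.view(1, 1, 3, 3))  # one local response difference
        print(f"{kname:10s} kernel, {pname:10s} perturbation: |Delta| = {resp.abs().item():.4f}")
```

Running this, the disordered perturbation produces a much larger |Delta| on the non-smooth kernel than on the smooth one, while the consistent perturbation is the relatively more damaging of the two for the smooth kernel, matching the tendency described above.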
4.3 EMPIRICAL UNDERSTANDING OF LOCAL RESPONSES
Motivated by adversarially-trained models that tend to show smoother kernels (Wang et al., 2020a), we first provide a further understanding of local responses and then give empirical understandings.
Local responses vs. model smoothness. Given a non-smooth convolution kernel $m^{l+1}$ with large variance, it is more likely to cause a large absolute response difference $|\Delta^{l+1}_{i,j}|$ within its local receptive field for a fixed $xd^l$. Considering that numerous non-smooth kernels and various local features are combined in the same layer, it is quite easy to yield large enough response differences. More complicated still, the subsequent layers act on the current local response differences and produce intricate accumulation effects. That is, for a standard model with non-smooth kernels, the model itself tends to amplify the local response differences between potential examples and natural examples. On the other hand, since an adversarially-trained model has smoother kernels, the model has a tendency to weaken the local response differences and to narrow the gap between the final responses of potential examples and natural examples, indicating a trade-off between model robustness and accuracy.
Setup. We get both standard and adversarially-trained models in Section 3. Considering complex functional layers, including linear and nonlinear ones in DNNs, based on Assumption 1, we use the maximum and the total absolute differences of local responses in some layers to show their effects. Taking ResNet-18 as an example, we investigate some layers including the first convolutional layer as Conv 1, the feature maps before layer 1 as Layer 0, and Layer 1 to Layer 4 respectively1.
To verify the above analysis, we first compare the local response differences between potential examples and natural examples on the standard and robust models. As Table 1 shows, the standard model exhibits significantly larger differences in local responses layer by layer, especially in terms of the total differences yielded per pair. These large response differences accumulate, driving the model's cognition of potential examples far away from that of their natural counterparts and ultimately leading the model to exhibit high vulnerability. On the other hand, the adversarially-trained model significantly shrinks the local response differences in the corresponding layers, yet some differences still exist. In other words, further reducing the local response differences is one way of bringing model robustness closer to model performance. We also note that, due to the complex effect of nonlinear layers (e.g., BatchNorm, ReLU), the maximum absolute difference of local responses does not increase strictly monotonically but increases in trend.
1Feature map size of Conv 1, Layer 0 and Layer 1: 64 × 32 × 32, Layer 2: 128 × 16 × 16, Layer 3: 256× 8× 8, and Layer 4: 512× 4× 4.
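Per-layer statistics of this kind can be collected with forward hooks, e.g. as in the sketch below: run the natural batch and the potential (adversarial) batch through the model, record the probed activations, and report the maximum and the per-pair total absolute difference. The layer names follow a torchvision-style ResNet-18 and are an assumption, not the authors' exact instrumentation.

```python
import torch

def layer_difference_stats(model, x_nat, x_adv,
                           layer_names=("conv1", "layer1", "layer2", "layer3", "layer4")):
    feats, hooks = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            hooks.append(module.register_forward_hook(
                lambda m, inp, out, n=name: feats.setdefault(n, []).append(out.detach())))
    with torch.no_grad():
        model(x_nat)   # first pass: natural examples
        model(x_adv)   # second pass: potential (adversarial) examples
    for h in hooks:
        h.remove()
    stats = {}
    for name, pair in feats.items():
        f_nat, f_adv = pair
        diff = (f_adv - f_nat).abs()
        stats[name] = {"max": diff.max().item(),
                       "total_per_pair": diff.sum().item() / x_nat.shape[0]}
    return stats
```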
Local responses vs. destructive effect. Naturally, we wonder whether a clear difference between potential examples of successful and failed attacks on a certain model exists. We first select the natural examples that are classified correctly, and count their potential examples of successful and failed attacks. As Table 2 shows1, for the standard model, the successful ones show similar differences with the failed ones in the front layers, but greater differences especially in Layer 4 close to the outputs, leading to finally destructive effects. However, for the adversarially-trained model in Table 4 (Appendix C), similar differences even in Layer 4 may be due to the smoother kernels and locally smoother perturbations, leading to the final responses of both close to natural examples’.
Local responses vs. transferability. Different local perturbations related to models imply the difficulty of transferring adversarial examples, so we further explore whether transferability can be understood from the locally intermediate response perspective. To verify the analysis of Section 4.2, we exchange the adversaries obtained from the standard and the robust model. Table 3 shows that locally-disordered perturbations, combined with a smoother adversarially-trained model, exhibit fairly small local response differences layer by layer, making the model's cognition close enough to the natural examples and leading to 83.6% robustness, close to the 84.9% model performance. A similar situation occurs when locally-consistent perturbations are combined with a non-smooth standard model in Table 5 (Appendix C). These results indicate that the searched perturbations related to a model tend to amplify the response differences as much as possible so as to enlarge the gap between the model's cognition of potential examples and natural examples, and that the weak transferability of adversarial examples may be due to transferred perturbations failing to enlarge this gap.
4.4 SMOOTHER KERNELS: ALLEVIATE LOCAL RESPONSE DIFFERENCES
To further exhibit the effect of shortening local response differences, we simply show that smoother adversarially robust models can alleviate local response differences and thereby improve their robustness. As shown in Figure 5, adversarially-trained models with different smoothness (i.e., larger weight decay yields kernels of smaller magnitude (Loshchilov & Hutter, 2019)) show different local response differences. Figures 5(b)-5(g) for the maximum differences and Figure 9 for the total differences (Appendix C) indicate that smoother kernels slightly weaken the local response differences between potential examples and natural examples layer by layer, and finally tend to narrow the gap between robustness and performance in Figure 5(h). This to some extent explains why adversarially-trained models are more sensitive to weight decay (Pang et al., 2021). On the other hand, we find that increasing weight decay further hardly translates into a further reduction of the parameter magnitudes and of the local response differences in each layer, which may be one of the reasons for the current bottleneck in the robustness of adversarial training. Besides, increasing weight decay can effectively weaken robust overfitting (Rice et al., 2020), as shown in Figure 5(h) and Figure 9(g).
1Note that under the PGD-20 attack, the standard model hardly yields examples of failed attacks, then we use the FGSM attack to illustrate the problem.
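One simple way to quantify the kernel "smoothness" being compared here is to average, over all convolutional layers, the magnitude and the within-kernel variance of the weights; smaller values correspond to smoother kernels in the sense used above. The metric choice below is an assumption made for illustration, not the paper's definition.

```python
import torch
import torch.nn as nn

def kernel_smoothness(model):
    mags, variances = [], []
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight.detach()                            # (C_out, C_in, kH, kW)
            mags.append(w.abs().mean().item())                    # average magnitude
            variances.append(w.var(dim=(-2, -1)).mean().item())   # variance within each spatial kernel
    return {"mean_abs": sum(mags) / len(mags),
            "mean_kernel_var": sum(variances) / len(variances)}
```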
4.5 DISCUSSION: LOCAL RESPONSES AND MODEL ROBUSTNESS
The above suggests that the model robustness is related to the model itself and the property of potential examples, while they are combined through the local responses. Due to enough differences in local responses, some potential examples are not regarded as similar to natural examples (or their predictions are different from the natural ones with correct predictions), so they eventually show their destructive effects as adversarial examples. That is, if a model tends to weaken the local response differences of potential and natural examples, then the model exhibits great robustness as the final responses of the potential examples are more likely to be close to the natural ones.
On the other hand, although small differences in local responses make the network more inclined to treat both examples as the same category, the remaining differences, especially the intricate differences after multi-layer accumulation, emphasize the difficulty of bringing model robustness up to model performance, and demonstrate that DNN models are naturally fragile. Essentially, these response differences come down to whether a non-zero convolution kernel acting on all legal perturbations in the boundary $\mathcal{B}$ yields differences that are all zero or sufficiently small without further accumulation. In other words, model robustness requires that, given legal potential examples from any attack, the accumulated response differences can be alleviated or removed. For instance, feature denoising (Xie et al., 2019) and activation suppressing (Bai et al., 2021) can be viewed as shortening the local response differences between both kinds of examples to improve model robustness.
Besides, shortening local response differences does not directly mean an improvement of the model’s robustness, in fact, it expresses the closeness of the model’s cognition on both examples. That is, a poorly-trained model may also treat both as relatively close, but give a bad model performance. To further improve the model robustness, a more realistic idea is to find proper parameters (whether theoretical parameters to minimize the local response differences exist) or a new method (nonlinear layers, loss functions, model structures) to shorten the response differences as much as possible, while not overly weaken the classification performance of the model.
5 CONCLUSION
In this paper, we investigate the local properties of adversarial examples generated by different models and rethink the vulnerability of DNNs from a novel locally intermediate response perspective. We find that the high-frequency components of adversarial examples tend to mislead standard DNNs, but have little impact on adversarially-trained models. Furthermore, locally-disordered perturbations are shown on standard models, but locally-consistent perturbations on adversarially-trained models. Both explorations emphasize the local perspective and the potential relationship between models and adversarial examples, then we explore how different local perturbations affect the models. We demonstrate DNN models are naturally fragile at least for large enough local response differences between potentially adversarial examples and natural examples, and empirically show smoother adversarially-trained models can alleviate local response differences to improve robustness.
A THE DESTRUCTIVENESS OF ADVERSARIAL EXAMPLES ON FREQUENCY DOMAIN
Here, we further investigate the contribution of only the high-frequency components to the destructiveness of both kinds of examples. We obtain the standard and adversarially-trained models as above. Figure 6 illustrates the trend of model performance and robustness on the test set as the low-frequency components are decreased (a larger filtering scale means that fewer low-frequency components are kept in the filtered images). As the filtering scale increases, only natural examples on standard models keep a certain classification performance, which then decreases to 10% (the accuracy of random classification). That is, the high-frequency components of natural examples can to some extent promote classification. On the other hand, the high-frequency components of adversarial examples on standard models show almost no promotion but a destructive effect, and finally reach 10% as well. For robust models, the high-frequency components of both kinds of examples show similar but weak performance.
B THE LOCAL PROPERTIES OF ADVERSARIAL EXAMPLES ON SPATIAL DOMAIN
We further explore the local properties of adversarial examples generated by the FGSM attack. For adversarially-trained models, compared with the PGD attack in Figure 3, the adversarial examples generated by the FGSM attack in Figure 7 show similar perturbations and similar model robustness (51.9% robustness for PGD-20 and 57.4% robustness for FGSM), yet the latter produces less detailed perturbations. For standard models, though both attacks show locally-disordered perturbations, the latter exhibits less disordered perturbations since fewer perturbation values can be attained, i.e., +8/255, 0, and -8/255.
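For completeness, a minimal FGSM sketch corresponding to this comparison is shown below: a single signed-gradient step of size eps, so each pixel perturbation can only take the values +eps, -eps, or 0 where the gradient vanishes. This is an illustrative re-implementation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0)  # one signed-gradient step, kept a valid image
    return x_adv.detach()
```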
Similar to adversarial examples of successful attacks in Figure 3, the failed attacks in Figure 8 show locally-consistent perturbations related to image shapes on the adversarially-trained models as well. The difference is that, these examples show less misleading local perturbations since the shape of their perturbations is closer to the shape of the original image.
C LOCAL RESPONSES
As mentioned in Section 4.3, we report the destructive effects on the adversarially-trained model in Table 4. We first select the natural examples that are classified correctly, and collect their potential examples from successful and failed attacks. Similar to the standard model, the successful ones show differences similar to the failed ones in the front layers, but greater differences in Layer 4 close to the outputs. The difference is that, due to the smoother kernels and locally smoother perturbations, the robust model exhibits closer local response differences in Layer 4 between the successful and the failed ones. Besides, compared with the PGD attack, the adversarial examples generated by the FGSM attack show similar differences in the front layers, but smaller local response differences in Layer 4, leading to higher model robustness.
We further report the transferability of adversarial examples on the standard model, as shown in Table 5. We get the potentially adversarial examples obtained from the robust model to attack the standard model. That is, locally-consistent perturbations, combined with a non-smooth standard model, exhibit small local response differences layer by layer, making the model’s cognition close enough to the natural examples and leading to 78.4% robustness close to 94.4% model performance.
As mentioned in Section 4.4, Figure 9 is a supplement to Figure 5 from the total absolute local response differences perspective, which indicates that smaller local response differences tend to approach model robustness to performance as well. | 1. What is the focus of the paper regarding adversarial examples?
2. What are the strengths and weaknesses of the paper's observations and evaluations?
3. Do you have any questions regarding the relationship between frequency and spatial domain in the context of adversarial perturbations?
4. How does the reviewer assess the effectiveness of the proposed smoother adversarially-trained models compared to state-of-the-art methods?
5. What are some concerns regarding the terminology used in the paper, such as "local changes," "locally consistent," and "fragile trend"? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies the properties of adversarial examples from a spatial and frequency perspective and shows that naturally trained models are more vulnerable to high-frequency components in adversarial examples. Perturbations for naturally trained models are disordered, but perturbations for adv-trained models are image shape related. Based on these observations, the authors find that smoother adversarially-trained can achieve better robustness.
Review
Strength:
This paper studies an interesting problem: the connection between adversarial perturbations and local properties of adversarial examples (how perturbations associate to image shape).
Weakness:
The observation and evaluation of this paper lack novelty and are not so surprising to me. For example, adversarial examples containing high-frequency perturbations is also studied in [1]. Adv-trained models are more robust than naturally trained ones, so they are supposed to be more robust to these high-frequency components in adversarial perturbations.
The explanation of the relationship between frequency and spatial domain is not quite clear to me. Why do high-frequency components indicate images have relatively drastic local changes? How are the high-frequency components related to local changes in adversarial examples?
The paper argues that the proposed smoother adversarially-trained models can achieve better robustness, but there is no comparison to the SOTA adversarial training work to show the improvement in the robustness. A quick comparison could be made with some works listed in https://robustbench.github.io/.
Terminology such as "local changes" and "locally consistent" appears frequently in this paper, but I didn't find a definition or explanation of what 'local' means. This makes the paper a bit vague to understand. Besides, some other terms like "fragile trend" are also vague.
[1] Yin, Dong, et al. "A fourier perspective on model robustness in computer vision." arXiv preprint arXiv:1906.08988 (2019). |
ICLR | Title
Dissecting Local Properties of Adversarial Examples
Abstract
Adversarial examples have attracted significant attention over the years, yet a sufficient understanding of them is still lacking, especially when analyzing their behavior in combination with adversarial training. In this paper, we revisit some properties of adversarial examples from both frequency and spatial perspectives: 1) the special high-frequency components of adversarial examples tend to mislead naturally-trained models while having little impact on adversarially-trained ones, and 2) adversarial examples show disorderly perturbations on naturally-trained models and locally-consistent (image-shape-related) perturbations on adversarially-trained ones. Motivated by these observations, we analyze the fragile tendency of models via their generated adversarial perturbations, and propose a connection between model vulnerability and local intermediate response; that is, a smaller local intermediate response comes along with better adversarial robustness. Specifically, we demonstrate that: 1) DNNs are naturally fragile, at least for large enough local response differences between adversarial/natural examples, and 2) smoother adversarially-trained models can alleviate local response differences, with enhanced robustness.
1 INTRODUCTION
Although deep neural networks (DNNs) perform well in many fields (He et al., 2016; Devlin et al., 2019), their counter-intuitive vulnerability attracts increasing attention, both for safety-critical applications (Sharif et al., 2016) and for the black-box mechanism of DNNs (Fazlyab et al., 2019). DNNs have been found vulnerable to adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015), where small perturbations of the input can easily change the predictions of a well-trained DNN with high confidence. In computer vision, adversarial examples exhibit their destructiveness both in the digital world and in the physical world (Kurakin et al., 2017).
A key issue, therefore, is how to alleviate the vulnerability of DNNs so as to narrow the performance gap between adversarial and natural examples. Existing methods, including defensive distillation (Papernot et al., 2016) and pixel denoising (Liao et al., 2018), have shown their limitations due to follow-up attack strategies (Carlini & Wagner, 2017) or gradient masking (Athalye et al., 2018). Among them, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) and its variants (Zhang et al., 2019; Wang et al., 2020b) provide reliable robustness and outperform the alternatives. Moreover, as a data augmentation method, adversarial training currently seems to rely on additional data (Schmidt et al., 2018; Rebuffi et al., 2021) to further improve robustness, while being sensitive to basic model hyper-parameters, e.g., weight decay (Pang et al., 2021). Apart from these, the effect of simple early stopping (Rice et al., 2020) even exceeds some promotion methods according to recent benchmarks (Croce & Hein, 2020; Chen & Gu, 2020). These studies arouse our curiosity to further explore the relationship between adversarial examples and adversarial training, hoping to provide some new understanding.
Recalling that high-frequency components can be potentially linked to adversarial examples (Wang et al., 2020a; Yin et al., 2019; Harder et al., 2021), however, few explorations discuss the relationship between high-frequency components and the destructiveness of adversarial examples. In this paper, we first demonstrate that high-frequency components of adversarial examples tend to mislead the standard DNNs, yet little impact on the adversarially robust models. We further show that adversarial examples statistically have more high-frequency components than natural ones, indicating relatively drastic local changes among pixels of adversarial examples. Since adversarial examples
appear more semantically meaningful on robust models (Tsipras et al., 2019), we further notice that adversarial examples show locally-consistent perturbations related to image shapes on adversarially-trained models, in contrast to the disorderly perturbations on standard models. Both explorations, in the frequency and the spatial domain, emphasize local properties of adversarial examples; motivated by the local receptive field of convolution kernels, we propose a locally intermediate response perspective to rethink the vulnerability of DNNs. Different from the existing global activation perspective (Bai et al., 2021; Xu et al., 2019), our local perspective reflects the joint effect of local features and the intermediate layers of the model. Based on this local perspective, we emphasize that large enough local response differences make it difficult for the network to treat an image and its potentially adversarial examples as one category, and demonstrate that DNN models are naturally fragile at least partly because of this. Motivated by the observation that adversarially-trained models tend to have ‘smooth’ kernels (Wang et al., 2020a), we simply use smoother kernels to alleviate local response differences on adversarially-trained models, which in turn affects model robustness and reduces robust overfitting (Rice et al., 2020). To a certain extent, this explains why weight decay effectively affects model robustness (Pang et al., 2021).
Our main contributions are summarized as follows:
• We first reveal some properties of adversarial examples in the frequency and spatial domain: 1) the high-frequency components of adversarial examples tend to mislead naturally-trained DNNs, yet have little impact on adversarially-trained models, and 2) adversarial examples have locally-consistent perturbations on adversarially-trained models, compared with disorderly local perturbations on naturally-trained models.
• Then we introduce local response and emphasize its importance in the model adversarial robustness. That is, naturally-trained DNNs are often fragile, at least for non-ignorable local response differences through the same layer between potentially adversarial examples and natural ones. In contrast, adversarially-trained models effectively alleviate the local response differences. And the smoother adversarially-trained models show better adversarial robustness as they can reduce local response differences.
• Finally we empirically study local response with generated adversarial examples. We further show that, compared with failed attacks, adversarial examples (successful attacks) statistically show larger local response differences with natural examples. Moreover, compared with adversarial examples generated by the model itself, those transferred by other models show markedly smaller local response differences.
2 RELATED WORK
Understandings of model vulnerability. Since the discovery of adversarial examples (Szegedy et al., 2014), a number of understandings of model vulnerability have been developed. For instance, the linear property of DNNs (Goodfellow et al., 2015), submanifolds (Tanay & Griffin, 2016) and the geometry of the manifold (Gilmer et al., 2018) were considered from the high-dimensional perspective; the computation of Lipschitz constants (Szegedy et al., 2014; Fazlyab et al., 2019) and lower/upper bounds (Fawzi et al., 2018; Weng et al., 2018) were considered from the definition of model robustness; non-robust features (Ilyas et al., 2019), high-frequency components (Wang et al., 2020a; Yin et al., 2019), high-rank features (Jere et al., 2020) and high-order interactions among pixels (Ren et al., 2021) explored adversarial examples from different perspectives on images, which imply our local perspective; feature denoising (Xie et al., 2019), robust pruning (Madaan & Ju Hwang., 2020) and activation suppressing (Bai et al., 2021) focused on global intermediate activations of models. On the other hand, taking adversarial training into consideration, Tsipras et al. (2019), Schmidt et al. (2018) and Zhang et al. (2019) explored the trade-off between robustness and accuracy; Wang et al. (2020a) found that adversarially-trained models tend to show smooth kernels; Tsipras et al. (2019) and Zhang & Zhu (2018) argued that adversarially robust models learn more shape-biased representations.
Different from these studies, we characterize the adversarial examples from both frequency and spatial domain to emphasize local properties of adversarial examples, and propose a locally intermediate response to rethink the vulnerability of DNNs.
Adversarial training. Adversarial training can be seen as a min-max optimization problem:
$$\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \; \max_{x'_i \in \mathcal{B}(x_i)} L\big(f_{\theta}(x'_i),\, y_i\big)$$
where $f$ denotes a DNN model with parameters $\theta$, and $(x_i, y_i)$ denotes a pair of a natural example $x_i$ and its ground-truth label $y_i$. Given a classification loss $L$, the inner maximization problem can be regarded as searching for suitable perturbations within the boundary $\mathcal{B}$ that maximize the loss, while the outer minimization problem optimizes the model parameters on the adversarial examples $\{x'_i\}_{i=1}^{n}$ generated by the inner maximization.
3 DIFFERENT PERTURBATIONS AFFECT THE VULNERABILITY OF THE MODEL
In this section, we first investigate adversarial examples in the frequency and spatial domain, and show the connections between models and their adversarial examples. Note that our threat model is white-box; since an adversarial example is defined only by its misleading prediction under the legal threat model, our findings suggest that the properties of the sampled examples can broadly reflect the fragile tendency of the model.
Setup. We generate $\ell_\infty$-bounded adversarial examples by PGD-10 (maximum perturbation 8/255 and step size 2/255) with random start for the robust model (Madry et al., 2018). Specifically, we use ResNet-18 (He et al., 2016) as the backbone to train the standard and adversarially-trained models for 100 epochs on CIFAR-10 (Krizhevsky, 2009). Following Pang et al. (2021), we use the SGD optimizer with momentum 0.9, weight decay 5 × 10−4 and initial learning rate 0.1, with a three-stage schedule in which the learning rate is divided by 10 at epochs 50 and 75. The robustness of both models is evaluated by PGD-20 (step size 1/255).
3.1 THE DESTRUCTIVENESS OF HIGH-FREQUENCY COMPONENTS FROM ADVERSARIES
Inspired by Wang et al. (2020a), we are naturally curious whether adversarial examples cause considerable damage to the model mainly because of their high-frequency components. To answer this question, Figure 1 illustrates the trend of model performance and robustness on the test set as the high-frequency components are increased (a larger filtering scale means that more high-frequency components are added to the filtered images). Figure 1(a) shows that, for standard models, the high-frequency components of natural examples promote classification and the accuracy reaches the model performance (green line); on the contrary, the accuracy on the filtered adversarial examples first rises to a maximum of 47.5% and then drops rapidly to 0.0% (red line). Obviously, in the low-frequency range, the performance on natural and adversarial examples is quite close, yet more high-frequency components widen the difference. That is, the special high-frequency components caused by adversarial perturbations exhibit a clear destructive effect on standard models, and simply filtering them out can effectively alleviate the destructiveness of adversaries even on standard models.
However, for robust models, we show that the prediction performance finally reaches robustness without a rapid drop in Figure 1(b). But surprisingly, we find that the performance of filtered adversarial examples in some range exceeds the final robustness 47.5% (red line), reaching a maximum of 51.2%. That is, although these high-frequency components do not exhibit a clear destructive effect, simply filtering out them has a positive impact on alleviating robust overfitting (Rice et al., 2020).
We then swap their high-frequency components between both examples controlled by a frequency threshold in Figure 1(c)-1(d). For merged natural examples with high-frequency components from adversaries, the increase of the frequency threshold controls the accuracy increasing from the model robustness (red line) to the model performance (green line), the opposite occurs on merged adversarial examples. These clearly illustrate the boost effect of the high-frequency components from natural examples and the destructive effect of the high-frequency components from adversarial examples.
To further illustrate, we find that, statistically, the main difference in the frequency domain between natural and adversarial examples is concentrated in the high-frequency region, as shown in Figure 2. We visualize the logarithmic amplitude spectrum of both kinds of examples. Figures 2(a) and 2(b) show that, compared with natural examples’, the high-frequency components of the adversaries are hard to ignore. Figure 2(c) further emphasizes that adversarial examples show markedly more high-frequency components, indicating relatively drastic local changes among pixels. This statistical difference explains the high detection rate of using the Magnitude Fourier Spectrum (Harder et al., 2021) to detect adversarial examples. Furthermore, Figures 2(d) and 2(e) show that the high-frequency components of adversarial examples generated by robust models are fewer than those from standard models, yet still more than natural examples’. Besides, the analysis of filtering out low-frequency components in Figure 6 (Appendix A) also supports this statement. That is, compared with natural examples’, the special high-frequency components of adversarial examples show serious misleading effects on standard models, yet are not enough to be fully responsible for the destructiveness to robust models.
3.2 DIFFERENT PERTURBATIONS AND THE FRAGILE TENDENCY OF THE MODEL
To answer the different effects of high-frequency components, we directly explore the adversarial examples themselves, e.g., in the spatial domain. We first get well-trained models as above and visualize adversarial perturbations in Figure 3, including the overall perturbations scaled to [0,255], the average perturbations of channels, and perturbations of three channels.
As shown in Figure 3 (right), the perturbations generated by the standard models tend to be locally inconsistent and disordered. This differs from Figure 3 (left), where adversarial examples show locally-consistent perturbations related to image shapes on the adversarially-trained models; that is, perturbations tend to be locally co-increasing or co-decreasing on each channel. In more detail, the perturbation of each pixel on a single channel tends to reach the perturbation bound, i.e., +8/255 (the reddest) or -8/255 (the bluest), which is counter-intuitive under the iterative PGD-20 attack and naturally associated with the one-step FGSM attack in Figure 7 (Appendix B). In fact, both attacks show similar perturbations and similar model robustness1, while the former produces more detailed perturbations. Besides, compared with failed attacks in Figure 8 (Appendix B), adversarial examples of successful attacks in Figure 3 show more misleading local perturbations; e.g., the perturbations in the first row look more like a bird (bird wings are added), and those in the third row like a car.
Perturbations in the frequency vs. spatial domain. Similar to the different destructive effects of high-frequency components, the perturbations in the spatial domain exhibit a more intuitive difference related to the models. Since more high-frequency components indicate relatively drastic local changes in an image, we attribute the fact that adversarial examples generated by adversarially-trained models show fewer high-frequency components mainly to their locally-consistent perturbations. These perturbations with smooth local changes imply that simply filtering out high-frequency components brings little benefit on robust models, while locally-disordered perturbations imply the effectiveness of filtering out high-frequency components on standard models.
Perturbations vs. fragile trend of models. For a given model, optimization-based attacks attempt to search for special perturbations to maximize the classification loss of adversarial examples. We note that for adversarially-trained models with smooth kernels, the perturbations that tend to maximize loss exhibit a locally-consistent tendency, and for standard models with non-smooth kernels, the perturbations tend to be locally-disordered to maximize loss. This implies a potential connection between the fragile trend of models and their potentially adversarial examples.
4 LOCAL RESPONSES: FURTHER IMPACT OF PERTURBATIONS ON MODELS
In this section, motivated by local properties of adversarial examples and the local receptive field of convolution kernels, we first introduce a locally intermediate response perspective to rethink the vulnerability of the model, and then empirically show the relationship between local responses and the vulnerability of the model.
Different from the existing idea of searching for adversarial examples, we consider under what circumstances potential examples, which refer to all legal variants within the boundary $\mathcal{B}$ regardless of whether they are destructive or not, show their destructive effects on a well-trained model; that is, why some macro-similar potential examples produce prediction results far away from those of their natural examples. Inspired by the local properties of adversarial examples and kernels, we naturally consider whether the destructiveness of potential examples can be viewed through local responses, which reflect the combined effect of the local features and the model properties.
Note that, under ideal circumstances, given a well-trained model, if the local responses of any examples at a certain layer are completely consistent, then the final predictions of these examples are exactly the same. To relax this condition in practice, we hypothesize as follows (referred to as Assumption 1): if macro-similar features passing through the same layer exhibit a sufficiently small difference in local responses, with sufficiently small differences in the subsequent layers, then the final responses of the network are relatively close; otherwise, sufficiently large local differences make it difficult for the network to treat them as the same category. To illustrate this, we take the convolutional layer as an example.
4.1 LOCAL RESPONSES
Macroscopically, we denote by $f$ a DNN model for classification, by $f^l$ the $l$-th layer of activation feature maps, and by $f^0$ the input features. A macro-similar image and its potential example pass through the $l$-th layer to yield $f^l$ and $\hat{f}^l$ respectively. We also denote the $(l{+}1)$-th weight
1The adversarially-trained model (the best checkpoint) reaches 51.9% robustness on the PGD-20 attack and 57.4% robustness on the FGSM attack.
component $M^{l+1} \in \mathbb{R}^{H \times W \times K}$, where $H$, $W$, $K$ represent the height, width and number of kernels respectively. From the locally intermediate response perspective, we first take one of the convolution kernels $m^{l+1} \in \mathbb{R}^{H \times W}$, and then capture the $l$-th layer of local features centered at position $(i,j)$, $x^l_{i,j}$ and $\hat{x}^l_{i,j}$, corresponding to its local receptive field. Formally, the difference of local responses is
$$\Delta^{l+1}_{i,j} := \hat{x}^l_{i,j} \otimes m^{l+1} - x^l_{i,j} \otimes m^{l+1} = xd^l \otimes m^{l+1}$$
where $\Delta^{l+1}_{i,j}$ denotes the $(i,j)$-th response difference of local features through the same kernel $m^{l+1}$, and $xd^l := \hat{x}^l_{i,j} - x^l_{i,j}$ denotes the difference of local feature maps between the potential example and the natural example. The second equality follows from the linearity of the convolution operation.
Ideally, if the differences are all equal to zero at every position in the $(l{+}1)$-th layer, that is, the intermediate layer has the same utility for both examples, then the potential example after the $(l{+}1)$-th layer shows exactly the same responses as the natural example's. However, in most cases the local differences can hardly be exactly zero everywhere; then, based on Assumption 1, our aim is to make the absolute difference of local responses $|\Delta^{l+1}_{i,j}|$ as small as possible so that the model's cognition of the potential example approaches that of the natural example.
The difference of local responses $\Delta^{l+1}_{i,j}$, composed of $xd^l$ and $m^{l+1}$, can be further expressed as:
$$xd^l \otimes m^{l+1} = \sum_{i=1}^{H} \sum_{j=1}^{W} c^l_{ij}\, m^{l+1}_{ij}$$
where clij and m l+1 ij represent the (i,j)-th elements of xd l and ml+1 respectively, thus the absolute local difference |∆l+1i,j | is affected jointly by both local features and kernel parameters. Note that what we care about is under what circumstances are more likely to produce large enough response differences. Though the specific impact of clij and m l+1 ij on ∆ l+1 i,j is quite complicated, we consider statistical circumstances as numerous convolution kernels and various local features are combined to affect the response differences in real networks. Statistically, the larger amplitude of ml+1 with fixed xdl, or larger amplitude of xdl with fixed ml+1, tends to produce larger |∆l+1i,j |. Besides, if a kernel ml+1 is relatively non-smooth, then non-smooth local features xdl , rather than smooth local features, are more likely to produce large enough local response differences.
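To make this quantity concrete, a minimal sketch of computing the local response difference for a single kernel is given below (our own illustration rather than the authors' code; the tensor names feat_nat, feat_adv, kernel and their shapes are assumptions):

    import torch
    import torch.nn.functional as F

    def local_response_difference(feat_nat, feat_adv, kernel):
        # feat_nat, feat_adv: (C, H_in, W_in) feature maps of the natural / potential example at layer l
        # kernel: (C, H, W) one convolution kernel of layer l+1
        xd = (feat_adv - feat_nat).unsqueeze(0)             # x_d^l, the feature-map difference
        w = kernel.unsqueeze(0)                             # (1, C, H, W)
        # conv(adv) - conv(nat) = conv(adv - nat) by linearity of convolution
        delta = F.conv2d(xd, w, padding=kernel.shape[-1] // 2)
        return delta.abs().squeeze(0)                       # |Delta^{l+1}_{i,j}| at every position (i, j)

Taking the maximum or the sum of the returned map over positions gives per-layer statistics analogous to those reported in the tables that follow.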
4.2 FURTHER IMPACT OF LOCAL PERTURBATIONS ON MODELS
Recalling the local properties of adversarial examples in Section 3, we further explore how different local perturbations affect the models from the locally intermediate response perspective. To illustrate the effect of perturbations, we assume two convolution kernels with the same mean but different variances: one disordered and the other smoother.
Local responses vs. locally-disordered perturbations. Since adversarial perturbations generated by standard models tend to be locally-disordered, we discuss how these perturbations affect the local responses. Locally-disordered perturbations indicate relatively drastic local changes among the pixels of a potential example; in other words, the local input difference x_d^0 between the potential example and the natural example (also referred to as the local perturbation) has a large variance. These perturbations are convolved with numerous kernels. As shown in Figure 4, given convolution kernels with the same mean but greater variance, locally-disordered perturbations are more likely to yield large enough absolute response differences |∆^1_{i,j}| in the first layer, since non-smooth kernels tend to emphasize the relationship between different pixels. On the other hand, given smoother kernels with relatively small variance, locally-disordered perturbations tend to produce smaller absolute response differences, since smoother kernels tend to focus on the average situation within their local receptive fields. That is, locally-disordered perturbations passing through different types of convolution kernels tend to produce different response differences, which can be accumulated in the subsequent layers.
Local responses vs. locally-consistent perturbations. Since adversarial perturbations generated by adversarially robust models tend to be locally-consistent, we argue that such perturbations are more destructive to smoother kernels than locally-disordered perturbations are, as compared in Figure 4. Locally-consistent perturbations x_d^0 reaching the perturbation bound tend to produce larger absolute response differences in the first layer since they have a larger absolute average, whereas locally-disordered perturbations show smaller response differences due to a smaller absolute average.
The two kinds of local perturbations thus have different impacts on the local response differences between potential examples and natural examples, which hints at both the fragile tendencies of models and the transferability of adversarial examples.
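A toy numerical comparison can make this argument concrete; the sketch below is our own illustration, and the patch size, variances, and the 8/255 bound are assumed values rather than settings from the paper:

    import torch

    torch.manual_seed(0)
    eps = 8 / 255                                          # assumed perturbation bound
    disordered = eps * torch.sign(torch.randn(3, 3))       # locally-disordered patch: sign flips pixel to pixel
    consistent = eps * torch.ones(3, 3)                    # locally-consistent patch: every pixel at the bound

    smooth_kernel = torch.full((3, 3), 1.0 / 9)            # averaging kernel: same mean, near-zero variance
    rough_kernel = smooth_kernel + 0.3 * torch.randn(3, 3) # same mean (after recentering), large variance
    rough_kernel -= rough_kernel.mean() - smooth_kernel.mean()

    for kname, k in [("smooth", smooth_kernel), ("rough", rough_kernel)]:
        for pname, p in [("disordered", disordered), ("consistent", consistent)]:
            resp = (p * k).sum().abs().item()              # first-layer local response difference |Delta^1|
            print(f"{kname:6s} kernel, {pname:10s} perturbation: {resp:.4f}")

    # Comparing the four printed values shows how kernel smoothness and perturbation
    # structure jointly determine the size of the first-layer response difference.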
4.3 EMPIRICAL UNDERSTANDING OF LOCAL RESPONSES
Motivated by the observation that adversarially-trained models tend to show smoother kernels (Wang et al., 2020a), we first provide a further analysis of local responses and then give empirical evidence.
Local responses vs. model smoothness. Given a non-smooth convolution kernel m^{l+1} with large variance, it is more likely to have a large impact on the absolute response difference |∆^{l+1}_{i,j}| within its local receptive field for a fixed x_d^l. Considering that numerous non-smooth kernels and various local features are combined in the same layer, it is quite easy to yield large enough response differences. More complicated still, the subsequent layers act on the current local response differences, producing intricate accumulation effects. That is, for a standard model with non-smooth kernels, the model itself tends to amplify the local response differences between potential examples and natural examples. On the other hand, since an adversarially-trained model shows smoother kernels, the model has a tendency to weaken the local response differences and bring the final responses of potential examples and natural examples closer, indicating a trade-off between model robustness and accuracy.
Setup. We use the standard and adversarially-trained models from Section 3. Considering the complex functional layers in DNNs, including linear and nonlinear ones, and based on Assumption 1, we use the maximum and the total absolute differences of local responses in several layers to show their effects. Taking ResNet-18 as an example, we investigate the first convolutional layer (Conv 1), the feature maps before layer 1 (Layer 0), and Layer 1 to Layer 4, respectively.1
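A sketch of how such layer-wise statistics could be collected with forward hooks is shown below (our own illustration; the module names assume torchvision's ResNet-18 naming, and model, x_nat, x_adv stand for a trained network and a pair of natural/adversarial batches):

    import torch

    def layer_difference_stats(model, x_nat, x_adv,
                               module_names=("conv1", "layer1", "layer2", "layer3", "layer4")):
        acts, hooks = {}, []
        for name in module_names:
            module = dict(model.named_modules())[name]
            hooks.append(module.register_forward_hook(
                lambda m, inp, out, name=name: acts.setdefault(name, []).append(out.detach())))
        model.eval()
        with torch.no_grad():
            model(x_nat)
            model(x_adv)
        for h in hooks:
            h.remove()
        stats = {}
        for name, (a_nat, a_adv) in acts.items():
            diff = (a_adv - a_nat).abs()
            # global maximum and mean per-pair total of the absolute response differences
            stats[name] = {"max": diff.amax().item(),
                           "total": diff.flatten(1).sum(dim=1).mean().item()}
        return stats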
To verify the above analysis, we first compare the local response differences between potential examples and natural examples for the standard and robust models. As Table 1 shows, the standard model exhibits significantly larger differences in local responses layer by layer, especially from the perspective of the total differences per pair. These large response differences accumulate, pushing the model's cognition of potential examples far away from that of their natural counterparts and ultimately making the model highly vulnerable. On the other hand, the adversarially-trained model significantly shortens the local response differences in the corresponding layers, yet some differences still exist. In other words, further reducing the local response differences is one way of bringing model robustness closer to model performance. We also note that, due to the complex effect of nonlinear layers (e.g., BatchNorm, ReLU), the maximum absolute difference of local responses does not strictly increase monotonically but increases in trend.
1Feature map size of Conv 1, Layer 0 and Layer 1: 64 × 32 × 32, Layer 2: 128 × 16 × 16, Layer 3: 256 × 8 × 8, and Layer 4: 512 × 4 × 4.
Local responses vs. destructive effect. Naturally, we wonder whether there is a clear difference between potential examples of successful and failed attacks on a given model. We first select the natural examples that are classified correctly, and collect their potential examples from successful and failed attacks. As Table 2 shows1, for the standard model, the successful ones show differences similar to the failed ones in the early layers, but greater differences, especially in Layer 4 close to the outputs, finally leading to destructive effects. However, for the adversarially-trained model in Table 4 (Appendix C), the similar differences even in Layer 4 may be due to the smoother kernels and locally smoother perturbations, so the final responses of both stay close to those of the natural examples.
Local responses vs. transferability. The fact that local perturbations differ across models suggests that adversarial examples may transfer poorly, so we further explore whether transferability can be understood from the locally intermediate response perspective. To verify the analysis of Section 4.2, we exchange the adversaries obtained from the standard and robust models. As Table 3 shows, locally-disordered perturbations, combined with a smoother adversarially-trained model, exhibit fairly small local response differences layer by layer, keeping the model's cognition close to the natural examples and leading to 83.6% robustness, close to the 84.9% model performance. A similar situation occurs when locally-consistent perturbations are combined with a non-smooth standard model in Table 5 (Appendix C). These results indicate that the perturbations searched on a model tend to amplify response differences as much as possible so as to enlarge the gap in the model's cognition between potential examples and natural examples, and the weak transferability of adversarial examples may be because transferred perturbations can hardly enlarge this gap.
4.4 SMOOTHER KERNELS: ALLEVIATE LOCAL RESPONSE DIFFERENCES
To further exhibit the effect of shortening local response differences, we show that smoother adversarially robust models can alleviate local response differences and thereby improve their robustness. As shown in Figure 5, adversarially-trained models with different smoothness (i.e., larger weight decay yields kernels with smaller magnitude (Loshchilov & Hutter, 2019)) show different local response differences. Figures 5(b)-5(g) for the maximum differences and Figure 9 for the total differences (Appendix C) indicate that smoother kernels slightly weaken the local response differences between the potential examples and natural examples layer by layer, and finally tend to narrow the gap between robustness and performance in Figure 5(h). This to some extent explains why adversarially-trained models are more sensitive to weight decay (Pang et al., 2021). On the other hand, we find that further increasing weight decay can hardly reduce the magnitude of the parameters and the local response differences in each layer any further, which may be one of the reasons for the current bottleneck in the robustness of adversarial training. Besides, increasing weight decay can effectively weaken robust overfitting (Rice et al., 2020) in Figure 5(h) and Figure 9(g).
1Note that under the PGD-20 attack, the standard model hardly yields examples of failed attacks, so we use the FGSM attack to illustrate the problem.
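One way to quantify the kernel smoothness discussed above is to track the magnitude and within-kernel variance of every convolutional layer; the helper below is our own sketch, not the paper's measurement protocol:

    import torch.nn as nn

    def kernel_smoothness_stats(model):
        # Per-conv-layer mean absolute weight and mean within-kernel variance.
        stats = {}
        for name, module in model.named_modules():
            if isinstance(module, nn.Conv2d):
                w = module.weight.detach()          # (out_ch, in_ch, kH, kW)
                stats[name] = {"mean_abs": w.abs().mean().item(),
                               "mean_var": w.flatten(2).var(dim=-1).mean().item()}
        return stats

    # Comparing these statistics across models trained with different weight decay values
    # (e.g., 1e-4, 5e-4, 1e-3) indicates how kernel smoothness tracks the regularization strength.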
4.5 DISCUSSION: LOCAL RESPONSES AND MODEL ROBUSTNESS
The above suggests that model robustness is related both to the model itself and to the properties of potential examples, and that the two are combined through the local responses. Due to large enough differences in local responses, some potential examples are not regarded as similar to their natural examples (or their predictions differ from the correctly predicted natural ones), so they eventually show their destructive effects as adversarial examples. That is, if a model tends to weaken the local response differences between potential and natural examples, then the model exhibits strong robustness, as the final responses of the potential examples are more likely to be close to those of the natural ones.
On the other hand, although small differences in local responses make the network more inclined to treat both examples as the same category, the remaining differences, especially the intricate differences after multi-layer accumulation, emphasize the difficulty of bringing model robustness up to model performance and demonstrate that DNN models are naturally fragile. Essentially, these response differences come down to whether a non-zero convolution kernel, acting on all legal perturbations in the boundary B, yields differences that are all zero or sufficiently small without further accumulation. In other words, model robustness requires that, given legal potential examples from any attack, the accumulated response differences can be alleviated or removed. For instance, feature denoising (Xie et al., 2019) and activation suppressing (Bai et al., 2021) can be viewed as shortening the local response differences of both examples to improve model robustness.
Besides, shortening local response differences does not directly imply an improvement in the model's robustness; in fact, it expresses the closeness of the model's cognition of both examples. That is, a poorly-trained model may also treat both as relatively close, but give poor model performance. To further improve model robustness, a more realistic idea is to find proper parameters (whether theoretical parameters that minimize the local response differences exist) or new methods (nonlinear layers, loss functions, model structures) to shorten the response differences as much as possible, while not overly weakening the classification performance of the model.
5 CONCLUSION
In this paper, we investigate the local properties of adversarial examples generated by different models and rethink the vulnerability of DNNs from a novel locally intermediate response perspective. We find that the high-frequency components of adversarial examples tend to mislead standard DNNs, but have little impact on adversarially-trained models. Furthermore, locally-disordered perturbations appear on standard models, but locally-consistent perturbations on adversarially-trained models. Both explorations emphasize the local perspective and the potential relationship between models and adversarial examples, and we then explore how different local perturbations affect the models. We demonstrate that DNN models are naturally fragile, at least for large enough local response differences between potentially adversarial examples and natural examples, and empirically show that smoother adversarially-trained models can alleviate local response differences to improve robustness.
A THE DESTRUCTIVENESS OF ADVERSARIAL EXAMPLES ON FREQUENCY DOMAIN
Here, we further investigate the contribution of only the high-frequency components to the destructiveness of both examples. We use the standard and adversarially-trained models described above. Figure 6 illustrates the trend of model performance and robustness on the test set as the low-frequency components are decreased (a larger filtering scale denotes that fewer low-frequency components are added to the filtered images). As the filtering scale increases, only natural examples on standard models keep a certain classification performance and then decrease to 10% (the accuracy of random classification). That is, the high-frequency components of natural examples can to some extent promote classification. On the other hand, the high-frequency components of adversarial examples on standard models show almost no promotion but a destructive effect, and finally reach 10% as well. For robust models, the high-frequency components of both examples show similar and limited performance.
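Filtering of this kind is typically implemented with a radial mask in the Fourier domain; the following high-pass filter (keeping only frequencies outside a given radius) is our own assumed implementation, not the authors' code:

    import torch

    def high_pass_filter(images, radius):
        # images: (N, C, H, W) in [0, 1]; radius: cutoff (in pixels) around the spectrum center.
        h, w = images.shape[-2:]
        spec = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
        yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        dist = (((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float()).sqrt()
        mask = (dist > radius).float()              # 1 outside the radius, 0 inside
        filtered = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
        return filtered.clamp(0, 1)

    # Sweeping `radius` upward removes progressively more low-frequency content, which
    # roughly corresponds to increasing the "filtering scale" in Figure 6.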
B THE LOCAL PROPERTIES OF ADVERSARIAL EXAMPLES ON SPATIAL DOMAIN
We further explore the local properties of adversarial examples generated by the FGSM attack. For adversarially-trained models, compared with the PGD attack in Figure 3, the adversarial examples generated by the FGSM attack in Figure 7 show similar perturbations and similar model robustness (51.9% robustness for PGD-20 and 57.4% robustness for FGSM), yet the latter produces less detailed perturbations. For standard models, though both attacks show locally-disordered perturbations, the latter exhibits less disordered perturbations since fewer perturbation values can be achieved, i.e., +8/255, 0, and -8/255.
Similar to the adversarial examples from successful attacks in Figure 3, the failed attacks in Figure 8 also show locally-consistent perturbations related to image shapes on the adversarially-trained models. The difference is that these examples show less misleading local perturbations, since the shape of their perturbations is closer to the shape of the original image.
C LOCAL RESPONSES
As mentioned in Section 4.3, we report the destructive effects on the adversarially-trained model in Table 4. We first select the natural examples that are classified correctly, and collect their potential examples from successful and failed attacks. Similar to the standard model, the successful ones show differences similar to the failed ones in the early layers, but greater differences in Layer 4 close to the outputs. The difference is that, due to the smoother kernels and locally smoother perturbations, the robust model exhibits closer local response differences in Layer 4 between the successful ones and the failed ones. Besides, compared with the PGD attack, the adversarial examples generated by the FGSM attack show similar differences in the early layers, but smaller local response differences in Layer 4, leading to higher model robustness.
We further report the transferability of adversarial examples on the standard model, as shown in Table 5. We use the potentially adversarial examples obtained from the robust model to attack the standard model. That is, locally-consistent perturbations, combined with a non-smooth standard model, exhibit small local response differences layer by layer, keeping the model's cognition close to the natural examples and leading to 78.4% robustness, close to the 94.4% model performance.
As mentioned in Section 4.4, Figure 9 is a supplement to Figure 5 from the perspective of the total absolute local response differences, which indicates that smaller local response differences also tend to bring model robustness closer to model performance. | 1. What is the main contribution of the paper regarding the study of adversarial examples and deep neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its ability to provide insights into the properties of adversarial examples?
3. Do you have any concerns or questions about the paper's content, such as confusing statements or ad hoc arguments?
4. How does the reviewer assess the novelty and significance of the paper compared to prior works in the field?
5. Are there any issues with the presentation of the results, such as the lack of clarity regarding the averaging process or the absence of a detailed explanation of how the results differ from previous studies? | Summary Of The Paper
Review | Summary Of The Paper
This paper provides a set of empirical studies of the spectral and spatial properties of adversarial examples of deep neural nets (DNNs) classifiers. The studies illustrate that standard DNNs are much more sensitive to high frequency components of adversarial examples compared to adversarially-trained DNNs, and also that the adversarial examples corresponding to the latter exhibit more local consistency in the spatial domain. The paper then connects the effectiveness of an adversarial example to the concept of inducing larger differences in the local response of DNN layers compared to the corresponding natural example, denoted local response differences, and justifies this connection by empirically showing that: adversarially-trained DNNs have smaller local response differences on adversarial examples compared to standard DNNs; the smoother the DNN the better its robustness; successful attacks (adversarial examples) induce larger local response differences; and finally that adversarial examples transferred from other models, hence weaker examples, also induce smaller local response differences.
Review
Strength:
The paper provides interesting experiments that can potentially contribute to the understanding of the community about what properties in adversarial examples enables them to sabotage DNNs, and how adversarial-training counteracts these properties. It also suggests interesting connections between the concept of local response differences and previously observed qualities of adversarial examples.
Concerns and Questions:
1- A major concern with the paper is that it does not read well, contains many confusing statements, and as such I think it needs a major editorial fix.
2- There are also several ad hoc statements and arguments that must either be carefully justified mathematically or empirically or be properly referenced (for example this statement from section 4.2: “On the other hand, suppose smoother kernels with relatively small variance, these locally-disordered perturbations tend to show smaller absolute response differences since smoother kernels tend to focus on the average situation within their local receptive fields” or from section 3.4: “That is, for a standard model with non-smooth kernels, the model itself tends to amplify the local response differences between potential examples and natural examples” or from section 4.5: “On the other hand, though small differences of local responses make the network more inclined to treat both examples as the same categories, but the existing differences, especially the intricate differences after multi-layer accumulation emphasize the difficulty of approaching model robustness to performance, and demonstrate that DNN models are naturally fragile”
3- The paper does not seem to add much more over the known results from prior works, for example Tsipras et al. (2019) has showed the difference between adversarial examples of adversarially-trained vs. standard DNNs, and the presence of spatially locally consistent patterns in the former, and high frequency patterns in the latter (Figure 3 and 2 of that paper); and Wang et al. (2020a) show the smoothness of the learnt kernel in adversarially-trained DNNs. A more detailed explanation of how this paper differs from these prior works, and what aspects of this work are novel and cannot be readily inferred from the prior works, would really help highlight the importance and relevance of this work.
4- Are the results reported in Table 1,2,3 averaged? Over how many samples? What is the corresponding variance for each value?
5- In section 4.4, how is the following statement inferred from Figure 5h: “and finally tend to narrow the robustness and performance in Figure 5(h)”? I cannot see this narrowness. (also the legends in Figure 5(h) must differentiate between plots with the same color). |
ICLR | Title
Dissecting Local Properties of Adversarial Examples
Abstract
Adversarial examples have attracted significant attention over the years, yet a sufficient understanding is still lacking, especially of their behavior in combination with adversarial training. In this paper, we revisit some properties of adversarial examples from both frequency and spatial perspectives: 1) the special high-frequency components of adversarial examples tend to mislead naturally-trained models while having little impact on adversarially-trained ones, and 2) adversarial examples show disorderly perturbations on naturally-trained models and locally-consistent (image-shape-related) perturbations on adversarially-trained ones. Motivated by these, we analyze the fragile tendency of models through the generated adversarial perturbations, and propose a connection between model vulnerability and the local intermediate response. That is, a smaller local intermediate response comes along with better adversarial robustness. To be specific, we demonstrate that: 1) DNNs are naturally fragile, at least for large enough local response differences between adversarial/natural examples, and 2) smoother adversarially-trained models can alleviate local response differences and thereby enhance robustness.
1 INTRODUCTION
Although deep neural networks (DNNs) perform well in many fields (He et al., 2016; Devlin et al., 2019), their counter-intuitive vulnerability attracts increasing attention, both for safety-critical applications (Sharif et al., 2016) and for the black-box mechanism of DNNs (Fazlyab et al., 2019). DNNs have been found vulnerable to adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015), where small perturbations on the input can easily change the predictions of a well-trained DNN with high confidence. In computer vision, adversarial examples exhibit their destructiveness both in the digital world and in the physical world (Kurakin et al., 2017).
Since then, how to alleviate the vulnerability of DNNs so as to narrow the performance gap between adversarial and natural examples has become another key issue. Existing methods, including defensive distillation (Papernot et al., 2016) and pixel denoising (Liao et al., 2018), have shown their limitations due to follow-up attack strategies (Carlini & Wagner, 2017) or gradient masking (Athalye et al., 2018). Among them, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) and its variants (Zhang et al., 2019; Wang et al., 2020b) show reliable robustness and outperform other defenses. Moreover, as a data augmentation method, adversarial training currently seems to rely on additional data (Schmidt et al., 2018; Rebuffi et al., 2021) to further improve robustness, while being sensitive to some basic model hyper-parameters, e.g., weight decay (Pang et al., 2021). Apart from these, the effect of simple early stopping (Rice et al., 2020) even exceeds some dedicated improvement methods according to recent benchmarks (Croce & Hein, 2020; Chen & Gu, 2020). These studies arouse our curiosity to further explore the relationship between adversarial examples and adversarial training, hoping to provide some new understanding.
Although high-frequency components can potentially be linked to adversarial examples (Wang et al., 2020a; Yin et al., 2019; Harder et al., 2021), few explorations discuss the relationship between high-frequency components and the destructiveness of adversarial examples. In this paper, we first demonstrate that the high-frequency components of adversarial examples tend to mislead standard DNNs, yet have little impact on adversarially robust models. We further show that adversarial examples statistically have more high-frequency components than natural ones, indicating relatively drastic local changes among the pixels of adversarial examples.
Since adversarial examples exhibit more semantically meaningful perturbations on robust models (Tsipras et al., 2019), we further notice that adversarial examples show locally-consistent perturbations related to image shapes on adversarially-trained models, in contrast to disorderly perturbations on standard models. Both explorations, on the frequency and the spatial domain, emphasize the local properties of adversarial examples, and motivated by the local receptive field of convolution kernels, we propose a locally intermediate response perspective to rethink the vulnerability of DNNs. Different from the existing global activation perspective (Bai et al., 2021; Xu et al., 2019), our local perspective reflects the joint effect of local features and the intermediate layers of the model. Based on the local perspective, we emphasize that large enough local response differences make it difficult for the network to treat an image and its potentially adversarial examples as one category, and demonstrate that DNN models are naturally fragile at least partly because of this. Motivated by the observation that adversarially-trained models tend to have 'smooth' kernels (Wang et al., 2020a), we simply use smoother kernels to alleviate local response differences on adversarially-trained models, which in turn affects model robustness and reduces robust overfitting (Rice et al., 2020). To a certain extent, this explains why weight decay effectively affects model robustness (Pang et al., 2021).
Our main contributions are summarized as follows:
• We first reveal some properties of adversarial examples in the frequency and spatial domain: 1) the high-frequency components of adversarial examples tend to mislead naturally-trained DNNs, yet have little impact on adversarially-trained models, and 2) adversarial examples have locally-consistent perturbations on adversarially-trained models, compared with disorderly local perturbations on naturally-trained models.
• Then we introduce the local response and emphasize its importance for model adversarial robustness. That is, naturally-trained DNNs are often fragile, at least because of non-negligible local response differences, through the same layer, between potentially adversarial examples and natural ones. In contrast, adversarially-trained models effectively alleviate the local response differences, and smoother adversarially-trained models show better adversarial robustness as they further reduce local response differences.
• Finally we empirically study the local response with generated adversarial examples. We further show that, compared with failed attacks, adversarial examples (successful attacks) statistically show larger local response differences with natural examples. Moreover, compared with adversarial examples generated by the model itself, those transferred from other models show markedly smaller local response differences.
2 RELATED WORK
Understandings of model vulnerability. Since the discovery of adversarial examples (Szegedy et al., 2014), a number of understandings of model vulnerability have been developed. For instance, the linear property of DNNs (Goodfellow et al., 2015), submanifolds (Tanay & Griffin, 2016) and the geometry of the manifold (Gilmer et al., 2018) were considered from the high-dimensional perspective; the computation of Lipschitz constants (Szegedy et al., 2014; Fazlyab et al., 2019) and lower/upper bounds (Fawzi et al., 2018; Weng et al., 2018) were considered from the definition of model robustness; non-robust features (Ilyas et al., 2019), high-frequency components (Wang et al., 2020a; Yin et al., 2019), high-rank features (Jere et al., 2020) and high-order interactions among pixels (Ren et al., 2021) explored adversarial examples from different perspectives on images, which hint at our local perspective; feature denoising (Xie et al., 2019), robust pruning (Madaan & Ju Hwang, 2020) and activation suppressing (Bai et al., 2021) focused on the global intermediate activations of models. On the other hand, taking adversarial training into consideration, Tsipras et al. (2019), Schmidt et al. (2018) and Zhang et al. (2019) explored the trade-off between robustness and accuracy; Wang et al. (2020a) found that adversarially-trained models tend to show smooth kernels; Tsipras et al. (2019) and Zhang & Zhu (2018) argued that adversarially robust models learn more shape-biased representations.
Different from these studies, we characterize adversarial examples in both the frequency and spatial domains to emphasize their local properties, and propose a locally intermediate response perspective to rethink the vulnerability of DNNs.
Adversarial training. Adversarial training can be seen as a min-max optimization problem:
min_θ (1/n) Σ_{i=1}^{n} max_{x'_i ∈ B(x_i)} L(f_θ(x'_i), y_i)

where f denotes a DNN model with parameters θ, and (x_i, y_i) denotes a pair of a natural example x_i and its ground-truth label y_i. Given a classification loss L, the inner maximization problem can be regarded as searching for suitable perturbations in the boundary B to maximize the loss, while the outer minimization problem optimizes the model parameters on the adversarial examples {x'_i}_{i=1}^{n} generated from the inner maximization.
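A compact sketch of one outer step of this min-max procedure is shown below (our own illustration of the standard recipe; generate_adv stands for any inner-maximization attack such as PGD, and the model and optimizer objects are assumed to exist):

    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y, generate_adv):
        model.eval()
        x_adv = generate_adv(model, x, y)           # inner maximization: worst-case x' within B(x)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)     # outer minimization on the adversarial batch
        loss.backward()
        optimizer.step()
        return loss.item()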
3 DIFFERENT PERTURBATIONS AFFECT THE VULNERABILITY OF THE MODEL
In this section, we first investigate adversarial examples in the frequency and spatial domains, and show the connections between models and their adversarial examples. Note that our threat model is white-box; since an adversarial example is defined only by its misleading result under the legal threat model, our findings suggest that the properties of the sampled examples can broadly reflect the fragile tendency of the model.
Setup. We generate ℓ∞-bounded adversarial examples by PGD-10 (maximum perturbation ε = 8/255 and step size 2/255) with random start for the robust model (Madry et al., 2018). Specifically, we use ResNet-18 (He et al., 2016) as the backbone to train the standard and adversarially-trained models for 100 epochs on CIFAR-10 (Krizhevsky, 2009). Following Pang et al. (2021), we use the SGD optimizer with momentum 0.9, weight decay 5 × 10−4 and initial learning rate 0.1, with a three-stage learning rate schedule divided by 10 at epochs 50 and 75. The robustness of both models is evaluated by PGD-20 (step size 1/255).
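For reference, an ℓ∞ PGD attack with these hyper-parameters can be sketched as follows (our own illustration, not the authors' code):

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
        # Random start inside the eps-ball, as in Madry et al. (2018).
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()               # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)      # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()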
3.1 THE DESTRUCTIVENESS OF HIGH-FREQUENCY COMPONENTS FROM ADVERSARIES
Inspired by Wang et al. (2020a), we are naturally curious whether adversarial examples cause considerable damage to the model mainly because of their high-frequency components. To answer this question, Figure 1 illustrates the trend of model performance and robustness on the test set as the high-frequency components are increased (a larger filtering scale denotes that more high-frequency components are added to the filtered images). Figure 1(a) shows that, for standard models, the high-frequency components of natural examples promote classification and reach the model performance (green line); on the contrary, the performance on the filtered adversarial examples first rises to a maximum accuracy of 47.5%, and then drops rapidly to 0.0% (red line). Obviously, in the low-frequency range, the performance on natural and adversarial examples is quite close, yet more high-frequency components widen the difference. That is, the special high-frequency components caused by adversarial perturbations exhibit a clear destructive effect on standard models, and simply filtering them out can effectively alleviate the destructiveness of adversaries even on standard models.
However, for robust models, we show that the prediction performance finally reaches the robustness level without a rapid drop in Figure 1(b). But surprisingly, we find that the performance on filtered adversarial examples in some range exceeds the final robustness of 47.5% (red line), reaching a maximum of 51.2%. That is, although these high-frequency components do not exhibit a clear destructive effect, simply filtering them out has a positive impact on alleviating robust overfitting (Rice et al., 2020).
We then swap the high-frequency components between both kinds of examples, controlled by a frequency threshold, in Figures 1(c)-1(d). For merged natural examples with high-frequency components from adversaries, increasing the frequency threshold moves the accuracy from the model robustness (red line) up to the model performance (green line); the opposite occurs for merged adversarial examples. These results clearly illustrate the boosting effect of the high-frequency components from natural examples and the destructive effect of the high-frequency components from adversarial examples.
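The swapping operation can be sketched with a Fourier-domain mask; the implementation below is our own assumption, with the threshold interpreted as a radius (in pixels) around the spectrum center:

    import torch

    def swap_high_frequency(x_base, x_donor, threshold):
        # Keep frequencies below `threshold` from x_base and take the high-frequency part from x_donor.
        h, w = x_base.shape[-2:]
        yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        low = ((((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float()).sqrt() <= threshold).float()
        spec_base = torch.fft.fftshift(torch.fft.fft2(x_base), dim=(-2, -1))
        spec_donor = torch.fft.fftshift(torch.fft.fft2(x_donor), dim=(-2, -1))
        merged = spec_base * low + spec_donor * (1.0 - low)
        return torch.fft.ifft2(torch.fft.ifftshift(merged, dim=(-2, -1))).real.clamp(0, 1)

    # e.g., swap_high_frequency(x_nat, x_adv, t) yields natural examples carrying the adversarial
    # high-frequency components, analogous to the merged examples in Figure 1(c).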
To further illustrate, we find that, statistically, the main difference between natural and adversarial examples in the frequency domain is concentrated in the high-frequency region, as shown in Figure 2. We visualize the logarithmic amplitude spectra of both examples. Figures 2(a) and 2(b) show that, compared with the natural examples', the high-frequency components of the adversaries are hard to ignore. Figure 2(c) further emphasizes that adversarial examples show markedly more high-frequency components, indicating relatively drastic local changes among pixels. This statistical difference explains the high detection rate of using the Magnitude Fourier Spectrum (Harder et al., 2021) to detect adversarial examples. Furthermore, Figures 2(d) and 2(e) show that the high-frequency components of adversarial examples generated by robust models are fewer than those from standard models, yet still more than the natural examples'. Besides, the analysis of filtering out low-frequency components in Figure 6 (Appendix A) also supports our statement. That is, compared with the natural examples', the special high-frequency components of adversarial examples show a serious misleading effect on standard models, yet are not enough to be fully responsible for the destructiveness to robust models.
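The spectra compared here are typically dataset-averaged log-amplitudes of the 2D Fourier transform; a minimal sketch of the computation (our own assumption of the procedure) is:

    import torch

    def mean_log_amplitude_spectrum(images):
        # images: (N, C, H, W); returns an (H, W) average log-amplitude spectrum.
        spec = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
        return torch.log(spec.abs() + 1e-8).mean(dim=(0, 1))

    # The difference between the spectra of adversarial and natural batches highlights where
    # (low vs. high frequencies) the two distributions differ, as visualized in Figure 2(c).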
3.2 DIFFERENT PERTURBATIONS AND THE FRAGILE TENDENCY OF THE MODEL
To explain the different effects of the high-frequency components, we directly examine the adversarial examples themselves in the spatial domain. We first obtain the well-trained models as above and visualize the adversarial perturbations in Figure 3, including the overall perturbations scaled to [0, 255], the average perturbations over channels, and the perturbations of the three channels.
As shown in Figure 3 (Right), the perturbations generated by the standard models tend to be locally inconsistent and disordered. This differs from Figure 3 (Left), where adversarial examples show locally-consistent perturbations related to image shapes on the adversarially-trained models; that is, perturbations tend to be locally co-increasing or co-decreasing on each channel. In more detail, the perturbation of each pixel on a single channel tends to reach the perturbation bound, i.e., +8/255 (the reddest) or -8/255 (the bluest), which is counter-intuitive under the iterative PGD-20 attack and naturally associated with the one-step FGSM attack in Figure 7 (Appendix B). In fact, both attacks show similar perturbations and similar model robustness1, while the former produces more detailed perturbations. Besides, compared with the failed attacks in Figure 8 (Appendix B), adversarial examples from successful attacks in Figure 3 show more misleading local perturbations, e.g., the perturbations in the first row look more like a bird (bird wings are added), and those in the third row like a car.
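For completeness, the three views described here can be produced with a few lines; the sketch below is our own, and the min-max scaling to [0, 255] is an assumption about how the figure was rendered:

    import numpy as np

    def perturbation_views(x_nat, x_adv):
        # x_nat, x_adv: (3, H, W) arrays in [0, 1]; returns the three views used in the figure.
        delta = x_adv - x_nat
        overall = ((delta - delta.min()) / (delta.max() - delta.min() + 1e-12) * 255).astype(np.uint8)
        channel_mean = delta.mean(axis=0)                     # average perturbation over channels
        per_channel = [delta[c] for c in range(3)]            # per-channel perturbations
        return overall.transpose(1, 2, 0), channel_mean, per_channel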
Perturbations in frequency vs. spatial domain. Similar to the different destructive effects of high-frequency components, the perturbations in the spatial domain exhibit a more intuitive, model-related difference. Since more high-frequency components indicate relatively drastic local changes in an image, we attribute the fewer high-frequency components of adversarial examples generated by adversarially-trained models mainly to their locally-consistent perturbations. These perturbations with smooth local changes imply that simply filtering out high-frequency components brings little benefit on robust models, while locally-disordered perturbations explain why filtering out high-frequency components is effective on standard models.
Perturbations vs. fragile trend of models. For a given model, optimization-based attacks attempt to search for special perturbations to maximize the classification loss of adversarial examples. We note that for adversarially-trained models with smooth kernels, the perturbations that tend to maximize loss exhibit a locally-consistent tendency, and for standard models with non-smooth kernels, the perturbations tend to be locally-disordered to maximize loss. This implies a potential connection between the fragile trend of models and their potentially adversarial examples.
4 LOCAL RESPONSES: FURTHER IMPACT OF PERTURBATIONS ON MODELS
In this section, motivated by the local properties of adversarial examples and the local receptive field of convolution kernels, we first introduce a locally intermediate response perspective to rethink model vulnerability, and then empirically show the relationship between local responses and that vulnerability.
Different from the existing idea of searching for adversarial examples, we consider under what circumstances potential examples, which refer to all legal variants within the boundary B regardless of whether they are destructive, show their destructive effects on a well-trained model; that is, why some macro-similar potential examples yield predictions far away from those of their natural examples. Inspired by the local properties of adversarial examples and kernels, we naturally consider whether the destructiveness of potential examples can be viewed through local responses, which reflect the combined effect of the local features and the model properties.
Note that under ideal circumstances, given a well-trained model, if the local responses of any examples at a certain layer are completely consistent, then the final predictions of these examples are exactly the same. To relax this condition in practice, we hypothesize as follows (referred to as Assumption 1): if macro-similar features passing through the same layer exhibit sufficiently small differences in local responses, with sufficiently small differences in the subsequent layers, then the final responses of the network are relatively close; otherwise, sufficiently large local differences make it difficult for the network to treat them as the same category. To illustrate this, we take the convolutional layer as an example.
4.1 LOCAL RESPONSES
Macroscopically, we denote a DNN model f for classification, the l-th layer of activation feature maps f^l, and f^0 for the input features. A macro-similar image and its potential example pass through the l-th layer to obtain f^l and f̂^l respectively. We also denote the (l+1)-th weight component M^{l+1} ∈ R^{H×W×K}, where H, W, and K represent the height, width, and number of kernels respectively. From the locally intermediate response perspective, we first take one of the convolution kernels m^{l+1} ∈ R^{H×W}, and then capture the l-th layer local features centered at position (i,j), x^l_{i,j} and x̂^l_{i,j}, corresponding to its local receptive field. Formally, the difference of local responses is

∆^{l+1}_{i,j} := x̂^l_{i,j} ⊗ m^{l+1} − x^l_{i,j} ⊗ m^{l+1} = x_d^l ⊗ m^{l+1}

where ∆^{l+1}_{i,j} denotes the (i,j)-th response difference of local features through the same kernel m^{l+1}, and x_d^l := x̂^l_{i,j} − x^l_{i,j} denotes the difference of local feature maps between the potential example and the natural example. The second equality follows from the linearity of the convolution operation.
Ideally, if the differences are equal to zero at every position in the (l+1)-th layer, that is, the intermediate layer has the same utility for both examples, then the potential example after the (l+1)-th layer shows exactly the same responses as the natural example. However, in most cases the local differences are rarely exactly zero everywhere; based on Assumption 1, our aim is therefore to make the absolute difference of local responses |∆^{l+1}_{i,j}| as small as possible, bringing the model's cognition of the potential example close to that of the natural example.
The difference of local responses ∆^{l+1}_{i,j}, composed of x_d^l and m^{l+1}, can be further expressed as:

x_d^l ⊗ m^{l+1} = Σ_{i=1}^{H} Σ_{j=1}^{W} c^l_{ij} m^{l+1}_{ij}

where c^l_{ij} and m^{l+1}_{ij} represent the (i,j)-th elements of x_d^l and m^{l+1} respectively; thus the absolute local difference |∆^{l+1}_{i,j}| is affected jointly by the local features and the kernel parameters. Note that what we care about is under what circumstances large enough response differences are more likely to be produced. Though the specific impact of c^l_{ij} and m^{l+1}_{ij} on ∆^{l+1}_{i,j} is quite complicated, we consider statistical circumstances, since numerous convolution kernels and various local features combine to affect the response differences in real networks. Statistically, a larger amplitude of m^{l+1} with fixed x_d^l, or a larger amplitude of x_d^l with fixed m^{l+1}, tends to produce a larger |∆^{l+1}_{i,j}|. Besides, if a kernel m^{l+1} is relatively non-smooth, then non-smooth local features x_d^l, rather than smooth ones, are more likely to produce large enough local response differences.
1The adversarially-trained model (the best checkpoint) reaches 51.9% robustness on the PGD-20 attack and 57.4% robustness on the FGSM attack.
4.2 FURTHER IMPACT OF LOCAL PERTURBATIONS ON MODELS
Recalling the local properties of adversarial examples in Section 3, we further explore how different local perturbations affect the models from the locally intermediate response perspective. To illustrate the effect of perturbations, we assume two convolution kernels with the same mean but different variances: one disordered and the other smoother.
Local responses vs. locally-disordered perturbations. Since adversarial perturbations generated by standard models tend to be locally-disordered, we discuss how these perturbations affect the local responses. Locally-disordered perturbations indicate relatively drastic local changes among the pixels of a potential example; in other words, the local input difference x_d^0 between the potential example and the natural example (also referred to as the local perturbation) has a large variance. These perturbations are convolved with numerous kernels. As shown in Figure 4, given convolution kernels with the same mean but greater variance, locally-disordered perturbations are more likely to yield large enough absolute response differences |∆^1_{i,j}| in the first layer, since non-smooth kernels tend to emphasize the relationship between different pixels. On the other hand, given smoother kernels with relatively small variance, locally-disordered perturbations tend to produce smaller absolute response differences, since smoother kernels tend to focus on the average situation within their local receptive fields. That is, locally-disordered perturbations passing through different types of convolution kernels tend to produce different response differences, which can be accumulated in the subsequent layers.
Local responses vs. locally-consistent perturbations. Since adversarial perturbations generated by adversarially robust models tend to be locally-consistent, we argue that such perturbations are more destructive to smoother kernels than locally-disordered perturbations are, as compared in Figure 4. Locally-consistent perturbations x_d^0 reaching the perturbation bound tend to produce larger absolute response differences in the first layer since they have a larger absolute average, whereas locally-disordered perturbations show smaller response differences due to a smaller absolute average.
The two kinds of local perturbations thus have different impacts on the local response differences between potential examples and natural examples, which hints at both the fragile tendencies of models and the transferability of adversarial examples.
4.3 EMPIRICAL UNDERSTANDING OF LOCAL RESPONSES
Motivated by the observation that adversarially-trained models tend to show smoother kernels (Wang et al., 2020a), we first provide a further analysis of local responses and then give empirical evidence.
Local responses vs. model smoothness. Given a non-smooth convolution kernel m^{l+1} with large variance, it is more likely to have a large impact on the absolute response difference |∆^{l+1}_{i,j}| within its local receptive field for a fixed x_d^l. Considering that numerous non-smooth kernels and various local features are combined in the same layer, it is quite easy to yield large enough response differences. More complicated still, the subsequent layers act on the current local response differences, producing intricate accumulation effects. That is, for a standard model with non-smooth kernels, the model itself tends to amplify the local response differences between potential examples and natural examples. On the other hand, since an adversarially-trained model shows smoother kernels, the model has a tendency to weaken the local response differences and bring the final responses of potential examples and natural examples closer, indicating a trade-off between model robustness and accuracy.
Setup. We use the standard and adversarially-trained models from Section 3. Considering the complex functional layers in DNNs, including linear and nonlinear ones, and based on Assumption 1, we use the maximum and the total absolute differences of local responses in several layers to show their effects. Taking ResNet-18 as an example, we investigate the first convolutional layer (Conv 1), the feature maps before layer 1 (Layer 0), and Layer 1 to Layer 4, respectively.1
To verify the above analysis, we first compare the local response differences between potential examples and natural examples for the standard and robust models. As Table 1 shows, the standard model exhibits significantly larger differences in local responses layer by layer, especially from the perspective of the total differences per pair. These large response differences accumulate, pushing the model's cognition of potential examples far away from that of their natural counterparts and ultimately making the model highly vulnerable. On the other hand, the adversarially-trained model significantly shortens the local response differences in the corresponding layers, yet some differences still exist. In other words, further reducing the local response differences is one way of bringing model robustness closer to model performance. We also note that, due to the complex effect of nonlinear layers (e.g., BatchNorm, ReLU), the maximum absolute difference of local responses does not strictly increase monotonically but increases in trend.
1Feature map size of Conv 1, Layer 0 and Layer 1: 64 × 32 × 32, Layer 2: 128 × 16 × 16, Layer 3: 256 × 8 × 8, and Layer 4: 512 × 4 × 4.
Local responses vs. destructive effect. Naturally, we wonder whether there is a clear difference between potential examples of successful and failed attacks on a given model. We first select the natural examples that are classified correctly, and collect their potential examples from successful and failed attacks. As Table 2 shows1, for the standard model, the successful ones show differences similar to the failed ones in the early layers, but greater differences, especially in Layer 4 close to the outputs, finally leading to destructive effects. However, for the adversarially-trained model in Table 4 (Appendix C), the similar differences even in Layer 4 may be due to the smoother kernels and locally smoother perturbations, so the final responses of both stay close to those of the natural examples.
Local responses vs. transferability. The fact that local perturbations differ across models suggests that adversarial examples may transfer poorly, so we further explore whether transferability can be understood from the locally intermediate response perspective. To verify the analysis of Section 4.2, we exchange the adversaries obtained from the standard and robust models. As Table 3 shows, locally-disordered perturbations, combined with a smoother adversarially-trained model, exhibit fairly small local response differences layer by layer, keeping the model's cognition close to the natural examples and leading to 83.6% robustness, close to the 84.9% model performance. A similar situation occurs when locally-consistent perturbations are combined with a non-smooth standard model in Table 5 (Appendix C). These results indicate that the perturbations searched on a model tend to amplify response differences as much as possible so as to enlarge the gap in the model's cognition between potential examples and natural examples, and the weak transferability of adversarial examples may be because transferred perturbations can hardly enlarge this gap.
4.4 SMOOTHER KERNELS: ALLEVIATE LOCAL RESPONSE DIFFERENCES
To further exhibit the effect of shortening local response differences, we show that smoother adversarially robust models can alleviate local response differences and thereby improve their robustness. As shown in Figure 5, adversarially-trained models with different smoothness (i.e., larger weight decay yields kernels with smaller magnitude (Loshchilov & Hutter, 2019)) show different local response differences. Figures 5(b)-5(g) for the maximum differences and Figure 9 for the total differences (Appendix C) indicate that smoother kernels slightly weaken the local response differences between the potential examples and natural examples layer by layer, and finally tend to narrow the gap between robustness and performance in Figure 5(h). This to some extent explains why adversarially-trained models are more sensitive to weight decay (Pang et al., 2021). On the other hand, we find that further increasing weight decay can hardly reduce the magnitude of the parameters and the local response differences in each layer any further, which may be one of the reasons for the current bottleneck in the robustness of adversarial training. Besides, increasing weight decay can effectively weaken robust overfitting (Rice et al., 2020) in Figure 5(h) and Figure 9(g).
1Note that under the PGD-20 attack, the standard model hardly yields examples of failed attacks, so we use the FGSM attack to illustrate the problem.
4.5 DISCUSSION: LOCAL RESPONSES AND MODEL ROBUSTNESS
The above suggests that model robustness is related both to the model itself and to the properties of potential examples, and that the two are combined through the local responses. Due to large enough differences in local responses, some potential examples are not regarded as similar to their natural examples (or their predictions differ from the correctly predicted natural ones), so they eventually show their destructive effects as adversarial examples. That is, if a model tends to weaken the local response differences between potential and natural examples, then the model exhibits strong robustness, as the final responses of the potential examples are more likely to be close to those of the natural ones.
On the other hand, although small differences in local responses make the network more inclined to treat both examples as the same category, the remaining differences, especially the intricate differences after multi-layer accumulation, emphasize the difficulty of bringing model robustness up to model performance and demonstrate that DNN models are naturally fragile. Essentially, these response differences come down to whether a non-zero convolution kernel, acting on all legal perturbations in the boundary B, yields differences that are all zero or sufficiently small without further accumulation. In other words, model robustness requires that, given legal potential examples from any attack, the accumulated response differences can be alleviated or removed. For instance, feature denoising (Xie et al., 2019) and activation suppressing (Bai et al., 2021) can be viewed as shortening the local response differences of both examples to improve model robustness.
Besides, shortening local response differences does not directly imply an improvement in the model's robustness; in fact, it expresses the closeness of the model's cognition of both examples. That is, a poorly-trained model may also treat both as relatively close, but give poor model performance. To further improve model robustness, a more realistic idea is to find proper parameters (whether theoretical parameters that minimize the local response differences exist) or new methods (nonlinear layers, loss functions, model structures) to shorten the response differences as much as possible, while not overly weakening the classification performance of the model.
5 CONCLUSION
In this paper, we investigate the local properties of adversarial examples generated by different models and rethink the vulnerability of DNNs from a novel locally intermediate response perspective. We find that the high-frequency components of adversarial examples tend to mislead standard DNNs, but have little impact on adversarially-trained models. Furthermore, locally-disordered perturbations appear on standard models, but locally-consistent perturbations on adversarially-trained models. Both explorations emphasize the local perspective and the potential relationship between models and adversarial examples, and we then explore how different local perturbations affect the models. We demonstrate that DNN models are naturally fragile, at least for large enough local response differences between potentially adversarial examples and natural examples, and empirically show that smoother adversarially-trained models can alleviate local response differences to improve robustness.
A THE DESTRUCTIVENESS OF ADVERSARIAL EXAMPLES ON FREQUENCY DOMAIN
Here, we further investigate the contribution of only the high-frequency components to the destructiveness of both examples. We use the standard and adversarially-trained models described above. Figure 6 illustrates the trend of model performance and robustness on the test set as the low-frequency components are decreased (a larger filtering scale denotes that fewer low-frequency components are added to the filtered images). As the filtering scale increases, only natural examples on standard models keep a certain classification performance and then decrease to 10% (the accuracy of random classification). That is, the high-frequency components of natural examples can to some extent promote classification. On the other hand, the high-frequency components of adversarial examples on standard models show almost no promotion but a destructive effect, and finally reach 10% as well. For robust models, the high-frequency components of both examples show similar and limited performance.
B THE LOCAL PROPERTIES OF ADVERSARIAL EXAMPLES ON SPATIAL DOMAIN
We further explore the local properties of adversarial examples generated by the FGSM attack. For adversarially-trained models, compared with the PGD attack in Figure 3, the adversarial examples generated by the FGSM attack in Figure 7 show similar perturbations and similar model robustness (51.9% robustness for PGD-20 and 57.4% robustness for FGSM), yet the latter produces less detailed perturbations. For standard models, though both attacks show locally-disordered perturbations, the latter exhibits less disordered perturbations since fewer perturbation values can be achieved, i.e., +8/255, 0, and -8/255.
Similar to the adversarial examples from successful attacks in Figure 3, the failed attacks in Figure 8 also show locally-consistent perturbations related to image shapes on the adversarially-trained models. The difference is that these examples show less misleading local perturbations, since the shape of their perturbations is closer to the shape of the original image.
C LOCAL RESPONSES
As mentioned in Section 4.3, we report the destructive effects on the adversarially-trained model in Table 4. We first select the natural examples that are classified correctly, and count their potentially adversarial counterparts from successful and failed attacks. Similar to the standard model, the successful ones show differences similar to the failed ones in the front layers, but greater differences in Layer 4, close to the outputs. The difference is that, due to the smoother kernels and locally smoother perturbations, the robust model exhibits smaller local response differences in Layer 4 between the successful and the failed ones. Besides, compared with the PGD attack, the adversarial examples generated by the FGSM attack show similar differences in the front layers, but smaller local response differences in Layer 4, leading to higher model robustness.
We further report the transferability of adversarial examples on the standard model, as shown in Table 5. We take the potentially adversarial examples obtained from the robust model and use them to attack the standard model. Locally-consistent perturbations, combined with a non-smooth standard model, exhibit small local response differences layer by layer, keeping the model’s cognition close enough to that of the natural examples and leading to 78.4% robustness, close to the 94.4% model performance.
As mentioned in Section 4.4, Figure 9 is a supplement to Figure 5 from the perspective of total absolute local response differences, and it indicates that smaller local response differences tend to bring model robustness closer to model performance as well. | 1. What are the main contributions of the paper regarding adversarial examples?
2. Are there any concerns about the novelty of the paper's findings compared to prior works?
3. Do you have any questions or suggestions for improving the research questions and experimental setup?
4. How can the paper provide more valuable insights into the properties of adversarial examples? | Summary Of The Paper
Review | Summary Of The Paper
This paper investigates the properties of adversarial examples from frequency and spatial perspectives and claims that standard models are vulnerable to high-frequency perturbations and adversarially robust models are vulnerable to locally consistent perturbations. In addition, this paper investigates the relation between local intermediate response and adversarial robustness.
Review
The empirical findings of the paper seem to have already been known broadly. In Section 3.1, the paper shows the difference between vulnerabilities of standard training and adversarial training in the frequency domain. However, similar analyses have already been shown in (Yin et al., 2019, Wang et al., 2020a, Tsuzuku and Sato 2019). Similarly, the difference of vulnerabilities in the spatial domain has been shown in (Tsipras et al. 2019). Thus, the contributions in Section 3 are weak.
Section 4 discusses the relation between local responses and locally-disordered perturbations, locally-consistent perturbations, and smoothness. However, since several terms are not well defined mathematically, the claims in this section might be ambiguous and it is difficult to obtain useful insights. For example, what is the rigorous difference between locally-disordered perturbations and locally-consistent perturbations? In addition, I think the experimental setup is not sophisticated enough to answer the research question.
To obtain more useful insights, it might be helpful to narrow down the research questions and experimental setup and to develop new analyses based on the previous studies.
(Tsuzuku and Sato 2019) "On the structural sensitivity of deep convolutional networks to the directions of Fourier basis functions." CVPR 2019. |
ICLR | Title
An Intrinsic Dimension Perspective of Transformers for Sequential Modeling
Abstract
Transformers have gained great popularity for sequential modeling, especially in fields such as natural language processing (NLP). Recently, numerous architectures based on the Transformer framework have been proposed, leading to great achievements in applications. However, the working principles behind them still remain mysterious. In this work, we numerically investigate the geometrical properties of data representation learned by Transformers, via a mathematical concept called intrinsic dimension (ID), which can be viewed as the minimal number of parameters required for modeling. A series of experiments, mainly focusing on text classification tasks, backs up the following empirical claims on the relationships among embedding dimension, depth, the respective ID per layer, and task performance. First, we surprisingly observe that a higher ID (of terminal features extracted by Transformers) typically implies a lower classification error rate. This is contrary to what is observed for CNNs (or other models) on image classification tasks. In addition, it is shown that the ID per layer tends to decrease as the depth increases, and this reduction usually appears more significant for deeper architectures. Moreover, we give numerical evidence on the geometrical structures of data representation learned by Transformers, where only nonlinear dimension reduction can be achieved. Finally, we explore the effect of sequential lengths on the ID and tasks performance, which guarantees the validity of data reduction in training. We hope that these findings can play a guiding role in hyper-parameter selection and dimension/data reduction for Transformers on text classification and other mainstream NLP tasks.
1 INTRODUCTION
Transformers (Vaswani et al., 2017) have made a great difference in many machine learning fields, particularly leading to significant advances in natural language processing (NLP) and computer vision (CV). It has been shown that the Transformer architecture is capable of handling large-scale datasets, usually with the help of sufficiently many parameters, with BERT (Devlin et al., 2018), GPT-3 (Brown et al., 2020) and BART (Lewis et al., 2019) as typical examples, and achieves impressive performance: when the Transformer is trained on enough samples, it often outperforms other competing models such as CNNs (Dosovitskiy et al., 2020). As the potential of Transformers is further tapped, a large number of variants of Transformers have emerged. For example, Reformer (Kitaev et al., 2020) reduces the original computation complexity from O(L²) to O(L log L) by using locality-sensitive hashing, where L denotes the sequential length. Sparse Transformer (Child et al., 2019) introduces sparse factorizations to reduce the memory cost from O(L²) to O(L√L). Linformer (Wang et al., 2020) uses low-rank matrices to approximate the self-attention mechanism to further reduce both the computational cost and memory cost from O(L²) to O(L).
Despite the vigorous development of architectures, the working principles behind Transformers are still a mystery. The Transformer is often hard to train, and we still know little about how it works and how the performance changes when the embedding dimension and depth increase. However, clarifying these problems is quite important because people are developing larger and deeper Transformers with additional training techniques to get better performance. Recently, there have been some preliminary works. (Xiong et al., 2020) and (Popel & Bojar, 2018) numerically illustrated the effect of tuning hyper-parameters on training Transformers. (Huang et al., 2020) explores the difficulty of optimizing the Transformer model, and proposes a new initialization method to benefit the training of deeper Transformers. (Wang et al., 2019) also investigated how to train huge Transformer models. (Wang et al., 2022) successfully trained a Transformer with a depth of 1000 layers. In this work, we make an initial attempt to clarify the working mechanism of the Transformer from the perspective of the ID of the Transformer representation.
Generally, people realize that real-world data such as sounds, texts, images, etc., tends to possess some kind of low-dimensional structure. That is, only a small fraction of dimensions is required to characterize sampled data and underlying target relationships. Consider, for example, the dataset formed by all the 224 × 224 × 3 RGB pictures labeled as dogs. There are in principle 256^(224×224×3) possible such images, but the “intrinsic” number of pictures of dogs recognized by people is usually far smaller, with considerable similarities among them. Many algorithms and techniques in deep learning exploit the ubiquity of these low-dimensional data structures, such as (Hinton & Salakhutdinov, 2006) and (Gonzalez & Balajewicz, 2018). Intrinsic dimension (ID) (Amsaleg et al., 2015), (Houle et al., 2012), (Cutler, 1993) is an important mathematical tool to characterize the geometrical structure of data. It represents the minimal number of parameters required for modeling certain ground truths, hopefully capturing the low-dimensional data structures. In this work, we mainly investigate the data representation learned by Transformers to uncover different and interesting phenomena via the concept of ID. Our contributions can be summarized in four aspects:
• We analyze the variation of ID for data representation learned by successive Transformer blocks. A dimension reduction phenomenon appears across layers, which can be strengthened by deepening the architecture.
• We show the geometrical structures of Transformers for sequential modeling: it seems that Transformers can only achieve nonlinear dimension reduction when applied to text classification tasks.
• We explore the relationship among embedding dimension (ED), intrinsic dimension (ID) of learned representation and tasks performance, which motivates a straightforward interpretation on the benefit of increasing ED from the perspective of ID.
• We investigate the effect of training dataset reduction via sequential lengths, which shows negligible influence on the IDs and task performance. This potentially provides guidance for efficiency in practical applications.
2 RELATED WORK
TwoNN method. There are lots of works on how to estimate ID of the given datasets. For example, (Fukunaga & Olsen, 1971), (Bruske & Sommer, 1998) are methods based on PCA. (Levina & Bickel, 2004) is a method based on maximum likelihood estimation (MLE). (Costa & Hero, 2004) estimated the ID by using the so-called geodesic-minimal-spanning-tree. (Kégl, 2002) utilized capacity dimension to estimate intrinsic dimension . (Facco et al., 2017) presented an estimation approach named as TwoNN, which takes advantage of the closest and second closest samples to form modeling probability distributions. Considering its computational efficiency, the TwoNN method is adopted in this work to estimate intrinsic dimensions of the representations learned by Transformers.1
Mathematically, the TwoNN method has the following procedure. Given the dataset D = {x_i}_{i=1}^N, for each data point x_i, denote its distances to the closest and second closest samples as s_{i,1} and s_{i,2}, respectively. The TwoNN method estimates the ID by modeling the statistics
s_i := s_{i,2} / s_{i,1}.
Actually, we can model the statistics s_i := s_{i,2} / s_{i,1} by a Pareto distribution (Hussain et al., 2018), (Rootzén & Tajvidi, 2006) and hence get
1We refer https://github.com/ansuini/IntrinsicDimDeep for some codes.
P((s_1, s_2, ..., s_N) | d) = d^N \prod_{i=1}^{N} s_i^{-(d+1)}
Here, the intrinsic dimension of the dataset D, namely d, can be easily estimated by a standard maximum likelihood method. To make this clearer, we give the pipeline of the TwoNN method below.
• Randomly select N data points to form D = {x_i}_{i=1}^N.
• For each x_i in D, calculate the distances to its closest and second closest neighbors, denoted as s_{i,1} and s_{i,2}.
• For each x_i in D, compute the statistic s_i := s_{i,2} / s_{i,1}.
• Estimate d in P((s_1, s_2, ..., s_N) | d) = d^N \prod_{i=1}^{N} s_i^{-(d+1)} by maximum likelihood; this d is the estimate of the ID.
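As an illustration, the pipeline above can be condensed into a short estimator. The sketch below is an assumed implementation, not the authors' released code; the helper name twonn_id and the use of scikit-learn's NearestNeighbors are illustrative choices:

```python
# Minimal TwoNN sketch: estimate the intrinsic dimension of the rows of X.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(X: np.ndarray) -> float:
    # Distances to the two nearest neighbors (index 0 is the point itself).
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    s1, s2 = dist[:, 1], dist[:, 2]
    mask = s1 > 0                      # drop duplicate points
    mu = s2[mask] / s1[mask]           # the statistic s_i = s_{i,2} / s_{i,1}
    # MLE of the Pareto parameter d: d_hat = N / sum_i log(mu_i).
    return mu.size / np.sum(np.log(mu))
```

Applying such an estimator to the activations collected at each layer (for example, mean-pooled token features) would yield the kind of layer-wise IDs analyzed in the experiments below.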
Intrinsic dimension in deep learning. The behavior of intrinsic dimension in deep learning architectures has also been studied recently. For example, (Pope et al., 2021) generated fake images by GANs to control the upper bound of the ID in order to verify the accuracy of the ID estimation method. (Ansuini et al., 2019) analyzed the variation of the ID of representations across layers for some classical neural networks, such as ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2014), on image classification tasks. (Aghajanyan et al., 2020) studied the effect of pre-training on the intrinsic dimension for NLP tasks. However, compared to CNNs and other models applied to computer vision tasks, the related research on sequential models for NLP tasks, such as Transformers, is still limited despite their great popularity. The current work aims to fill this gap.
3 RESULTS
We first introduce the experiment setup, then report our results in five aspects.
3.1 EXPERIMENT SETTING
Tasks. To explore the ID of Transformers in a convenient manner, we conduct numerical experiments on the text classification task. The reasons are as follows. First, text classification is a representative and important task in natural language processing, and has wide applications such as spam detection (Crawford et al., 2015), (Asghar et al., 2020), text style classification (Wu et al., 2019), (Sudhakar et al., 2019), sentiment analysis (Medhat et al., 2014), (Xu et al., 2019) and so on. In addition, a great number of works, such as (Devlin et al., 2018), (Wang et al., 2020), (Shaheen et al., 2020) and so on, have shown the great success of the Transformer architecture applied to classification tasks. Moreover, people usually use only a series of encoder blocks (with an additional MLP (Rosenblatt, 1961) block for classification) in the Transformer framework when conducting text classification, which makes ID analysis convenient. As a comparison, for other tasks where decoders are necessary, e.g. text generation, one may encounter difficulties in the layer-wise ID computation and analysis.
Datasets. The experiments are performed on three datasets: IMDB (Maas et al., 2011), AG (Zhang et al., 2015) and SST2 (Socher et al., 2013). Among them, IMDB (Maas et al., 2011) is a binary-classification movie review dataset with 25,000 training samples and 25,000 test samples; AG (Zhang et al., 2015) is a four-class news article dataset with 120,000 training samples and 7,600 test samples; SST2 is a binary-classification movie review dataset with around 7,000 training samples and 2,000 test samples.
Models. We use the classic Transformer model (Vaswani et al., 2017) with successive encoder blocks to extract features and learn data representation of the full input texts. As a common practice and also for simplicity, pure MLPs (Rosenblatt, 1961) are applied for the terminal classification
layer, so that the Transformer encoders retain the dominant effect on the final performance.2
Goals and related hyper-parameters. The present work aims to study and analyze the relationship between intrinsic dimensions of the representations learned by Transformers and the corresponding classification performance. To achieve this and conduct more comprehensive and rigorous experiments, we set to vary the following hyper-parameters:
• D: depth, i.e. the total number of layers of the Transformer.
• ED: embedding (Mikolov et al., 2013) dimension. For simplicity and by convention (following the classic work (Vaswani et al., 2017)), we set the dimension of each hidden layer to be equal.
That is, the goal is to find out their influences on the ID and classification accuracy. We unfold the analysis from the following aspects.
3.2 THE LAYER-WISE VARIATION OF INTRINSIC DIMENSIONS
We first study the variation of ID across different layers. For a Transformer model with depth D and hidden dimensions {n_l}_{l=1}^D, every input sample is successively mapped into an n_l-dimensional vector space, l = 1, 2, ..., D. However, the hidden dimension is incapable of characterizing the inherent geometrical structures of the data. Here, we use the TwoNN (Facco et al., 2017) method to compute the respective ID per layer.
Following the setting presented in Section 3.1, we perform experiments on three datasets (AG, IMDB and SST2), using the Transformer model with varied depths and embedding dimensions: D ∈ {4, 6, 8}, and ED ∈ {128, 256, 512}. The intrinsic dimension and its variation with respect to different layers are shown in Figure 1. The horizontal axis of Figure 1 represents each layer, where emb and layer i denote the embedding layer and the i-th encoder layer, respectively. The vertical axis shows the corresponding ID. Every single line in Figure 1 represents the layer-wise variation of ID under a certain hyper-parameters configuration.
From Figure 1, one can straightforwardly obtain the following observations:
• With the increase of depth, the overall intrinsic dimension shows a downward trend. Generally, the ID may increase only at the first encoder layer (see Figure 1 (c)), then decreases through the following layers, and reaches its minimum at the last layer. This decreasing process of the ID across layers can be regarded as a type of dimension reduction performed by Transformers along with the extraction of effective information for the final classification.
2We also refer to https://github.com/lyeoni/nlp-tutorial/tree/master/text-classification-transformer for some codes.
• When fixing the model depth on a certain dataset, we find that the intrinsic dimension basically increases with the embedding dimension. In fact, according to Figure 1, the above conclusion always holds for all depths and datasets. This result is natural at the first glance, since when increasing the embedding dimension, the (initial) intrinsic dimension generally increases as well, which leads to an increment of the terminal ID.
To examine the dimension reduction effect in more detail, we can further check the variation of the ratio of intrinsic dimension to embedding dimension and hidden dimensions, which is convenient and straightforward since the last two have been set to be equal. It is shown that the ratio is about O(10^-1) at the embedding layer (approximately 0.1-0.3), while it is notably reduced to O(10^-2) at the final layer.
3.3 CORRELATIONS BETWEEN INTRINSIC DIMENSIONS AND CLASSIFICATION ACCURACY
Since ID characterizes the intrinsic (geometrical) structures of data distribution, and the classification performance directly depends on the final representation extracted by the last hidden layer of Transformers, it is reasonable to believe that the ID of the last hidden layer (terminal ID) is correlated with the predicted classification accuracy.
Therefore, we numerically investigate the relationship between terminal ID and the corresponding classification error rate for various configurations of hyper-parameters and datasets, see Figure 2. For robustness, the training involves several independent runs with randomized initialization to ensure convergence. The horizontal axis in Figure 2 represents the classification error rate and the vertical axis shows the ID of the representation learned by the last hidden layer. The dashed lines connect the results for different depths under a fixed embedding dimension, with the stars as mean values for both terminal ID and error rates.
From Figure 2, one can observe that although all experiments are performed under the same type of hypothesis space (i.e. Transformers), there are remarkable differences in terms of classification performance and terminal ID along with the model size (the ED herein). As is shown by the solid lines in Figure 2, the terminal ID basically increases with the embedding dimension, while the classification error changes adversely. Interestingly, the uncovered phenomenon for Transformers applied to NLP tasks is opposite to that in (Ansuini et al., 2019), where the terminal ID of CNNs is positively correlated with the classification error rate in image modeling.
The underlying interpretation may be as follows. When the embedding dimension increases, according to the discussion in Section 3.2, we get a larger ID for the representation of input data, resulting in an increment of the terminal ID. Meanwhile, a higher embedding dimension, as well as hidden
dimensions, definitely extends the corresponding hypothesis space for modeling. Hence, it is reasonable to expect better classification accuracy. Based on this phenomenon, increasing the embedding dimension helps to enhance classification performance via enlarging the intrinsic dimension.
This point may motivate a further extension towards practical guidance in applications: one can imagine a principled method that utilizes the terminal ID as a posterior “indicator” of generalization. That is, by monitoring the variation of the terminal ID during training, we may be able to attain better classification performance without using a completely separate dataset for both validation and test. This would be of great benefit when only limited data is available in practical applications, and hence deserves exploration in future work.
Remark 1 Our investigation shows the inherent nature of Transformer models on textual tasks. For ViT models, the conclusion no longer holds, as shown in Table 1. We notice that neither the ViT embedding dimension nor the ViT depth has much effect on the ID.
3.4 A PRINCIPAL COMPONENT ANALYSIS VIEWPOINT OF DATA REPRESENTATION
There are many tools to estimate the intrinsic dimension of data representation, such as a series of methods based on principal component analysis (PCA) ((Fukunaga & Olsen, 1971), (Bruske & Sommer, 1998), TwoNN (Facco et al., 2017) and so on). Due to the computational efficiency, the TwoNN algorithm is selected. One can refer to Section 2 for further details.
According to the results shown in Figure 1, we conclude that the intrinsic dimensions of Transformers are much smaller than the embedding and hidden dimensions. In this section, we further illustrate that the data representation learned by Transformers lies on low-dimensional but curved manifolds instead of flat subspaces, and hence is not amenable to model reduction via linear methods.
To achieve this, we perform the classic PCA (Pearson, 1901) method on the normalized covariance matrix of each layer in Transformers for varied hyper-parameters and all three datasets. Figure 3 shows the results obtained on the last hidden layer. The horizontal axis represents the order of eigenvalues of data representation in a descending sort. The vertical axis shows the value of
corresponding eigenvalues. The green and red vertical dotted lines denote the number of components required to capture 50% and 85% of the variance in data representation. Here, we call the abscissa indicated by the red line as PCA-ID, meaning the “pseudo” intrinsic dimension computed by a direct PCA method.
It is shown that the ID derived from the TwoNN method is much smaller than the PCA-ID. For example, the ID shown in Figure 1 (a) is about 34 for ED = 256 on the AG dataset, while the PCA-ID shown in Figure 3 (a) is about 180 under the same setting, which is 5-6 times larger. Furthermore, the ratio of PCA-ID to embedding dimension is about 0.7-0.9, which is much larger than that of the ID (0.05-0.15). The great difference between the ID inferred by TwoNN and by the PCA method shows the strong nonlinearity in the correlations among data samples. Based on the above results, we conclude that the space where the data representation is located is not a linear subspace but a curved manifold, which rules out basic linear model reduction.
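For reference, the PCA-ID used in this comparison can be computed as in the sketch below. This is an assumed implementation for illustration (the helper name pca_id is ours): it simply counts the number of principal components needed to explain a given fraction (85% here) of the variance of the layer's representation:

```python
# Minimal sketch: "PCA-ID" = number of components explaining 85% of variance.
import numpy as np
from sklearn.decomposition import PCA

def pca_id(X: np.ndarray, variance_threshold: float = 0.85) -> int:
    cumulative = np.cumsum(PCA().fit(X).explained_variance_ratio_)
    # Smallest (1-indexed) number of components reaching the threshold.
    return int(np.searchsorted(cumulative, variance_threshold) + 1)
```

Comparing pca_id(X) with the TwoNN estimate on the same activations gives the linear-versus-nonlinear gap discussed above.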
3.5 THE EFFECT OF INTRINSIC DIMENSION REDUCTION WITH RESPECT TO DEPTH
In Transformers as well as other neural network architectures, the model depth always plays an important role. A shallow model may have weak representation ability and poor training performance, while a quite deep model may lead to generalization issues such as overfitting (poor test performance) and requires unacceptable computation and memory cost. In this section, we further investigate the dependence of ID variation on the depth.
According to Figure 1, an ID reduction phenomenon across layers appears. To further track its effect with respect to the model depth, one can naturally focus on the gaps between IDs of embedding layers and last hidden layers. Since the relevant ID results have been already shown in Figure 1, we just summarize them in Table 2, 3 and 4.3 It is straightforward to have the following observations:
• Fixing the dataset and embedding dimension, the IDs of the embedding layers remain almost the same despite the change of depth.
• Fixing the dataset and embedding dimension, the IDs of the last hidden layers often decrease significantly with the increase of depth.
3Here, the depth 6 case is not included due to space constraints.
• Combining the above two points, the Transformer model shows a larger dimension reduction across layers for deeper architectures.
A direct explanation for the above phenomena is as follows. When fixing the embedding dimension, the learned representation of the input data remains essentially consistent, which implies similar IDs at the embedding layers. Meanwhile, according to Figure 1 and the discussion in Section 3.2, the ID reduction effect may strengthen through more layers, i.e. deeper models.
3.6 INSTANCES OF DATA REDUCTION: INFLUENCE OF SEQUENTIAL LENGTHS
In practice, the Transformer model usually possesses massive parameters and hence requires a large dataset to guarantee reasonable training performance, resulting in enormous space and time costs. Therefore, it would be meaningful if one could reduce the size of the training dataset (“data reduction”) without significant damage to test performance. Motivated by this, we aim to investigate the feasibility and procedure of data reduction in the training of Transformers from the viewpoint of the intrinsic dimension (of data representation). It is shown that, unlike other hyper-parameters such as the embedding dimension, the intrinsic dimension hardly changes with the sequential length of the data samples. This opens up the possibility of data reduction by appropriately discarding training samples.
As an initial attempt, we first focus on the sequential length of the samples in the training dataset. Specifically, given a dataset, we first sort all the sentences used for training by their lengths (i.e. word counts, see the length distributions shown in Figure 4), and then form a “long-set” by selecting the top 80% longest sentences in the training dataset. Similarly, one can form a corresponding “short-set” by selecting the top 80% shortest sentences, while the original training dataset without any reduction is called the “full-set”. In principle, we only remove extreme cases such as too short/long samples, hopefully avoiding any major impact on the training data and performance. Naturally, the test datasets remain unchanged.
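The subset construction described above amounts to a few lines of code. The following sketch is an assumed helper (the name split_by_length and the whitespace word-count tokenization are illustrative, not taken from the paper):

```python
# Minimal sketch: build the "short-set", "long-set" and "full-set" splits.
def split_by_length(sentences, keep_ratio=0.8):
    ranked = sorted(sentences, key=lambda s: len(s.split()))  # sort by word count
    k = int(keep_ratio * len(ranked))
    short_set = ranked[:k]        # top 80% shortest sentences
    long_set = ranked[-k:]        # top 80% longest sentences
    full_set = list(sentences)    # no reduction
    return short_set, long_set, full_set
```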
We conduct experiments on these three training sub-datasets: long-set, short-set and full-set, for various configurations of hyper-parameters (mainly embedding dimensions) and datasets. For each
Transformer model trained under the above settings, we record the IDs of the embedding layers as well as the final classification error rates (on the full test datasets) in Table 5.
Table 5 shows one key phenomenon. Checking the results in Table 5 row by row, both the IDs of the embedding layers and the classification errors do not change significantly, even though each Transformer model is trained on a different subset of the original training dataset. This helps to verify the validity of data reduction in training, at least with respect to sequential lengths. It would be valuable in applications where sufficient data is unavailable, and hence is worthy of exploration in future work.
4 CONCLUSION
In this work, we propose a new perspective to understand the mechanism of Transformers, which is related to the intrinsic dimensions of data representation. Many interesting phenomena are numerically uncovered to reveal the intricate relationships between intrinsic dimensions and task performance, with respect to hyper-parameters such as the depths and embedding dimensions of models and the sequential lengths of data. We form a series of empirical conclusions. On one hand, for the influence of hyper-parameters on intrinsic dimensions and classification task performance, it is shown that there are positive correlations among embedding dimensions, intrinsic dimensions and classification accuracy. In addition, intrinsic dimension reduction across layers exists, which can be strengthened by deepening architectures. On the other hand, for the interaction between model and data (i.e. the modeling effect), we give numerical evidence that the data representation learned by Transformers lies on curved manifolds. Furthermore, data reduction in training can be valid, which may motivate efficient ways to utilize data and provide applicable guidance for practical learning. This deserves further exploration in the future. Certainly, the outlook is not limited to these directions. We intend to extend the current research on classification to more general settings, particularly generative tasks. Moreover, the present work focuses on the basic Transformer network, and it is also necessary to further investigate the most commonly-used architectures such as typical pre-trained language models. Apart from that, it is our goal to perform quantitative analysis to fill the theoretical gaps between Transformers and intrinsic dimensions herein. | 1. What is the focus of the paper regarding the Transformer encoder?
2. What are the strengths of the paper's contributions to understanding the Transformer encoder?
3. What are the weaknesses of the paper, particularly in terms of comprehensiveness and hypothesis testing?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigates the depth and width scaling of the Transformer encoder from the perspective of intrinsic dimension and makes several interesting observations. The authors suggest that ID can be used as a metric for measuring generalization performance.
Strengths And Weaknesses
Strengths: Given the popularity of Transformer-based models, research on how these model families work is rather limited. This paper attempts to provide some understanding of the Transformer encoder from an intrinsic dimensional perspective. The observations made by the authors are indeed intriguing.
The authors found that the intrinsic dimension (ID) increases with the embedding dimension and the hidden dimension for Transformers on textual data but not on visual data.
The authors pointed out that Transformer encoders are performing non-linear dimension reduction over the layers.
Weaknesses: While this paper provides some interesting research on the Transformer encoder, I believe that these studies are not conducted comprehensively, and that some hypotheses are proposed but not tested.
There is no clear answer as to why there are differences in Transformer models for visual and textual data.
It remains unclear how to select samples for training by exploiting ID.
Clarity, Quality, Novelty And Reproducibility
This paper is well written and easy to understand. The novelty of the paper is moderate. Although the paper exploits the existing methods for estimating ID, it uncovers some new findings in Transformer-based models. The quality of the work can be improved by answering some of the questions raised by the paper more clearly. |
ICLR | Title
An Intrinsic Dimension Perspective of Transformers for Sequential Modeling
Abstract
Transformers have gained great popularity for sequential modeling, especially in fields such as natural language processing (NLP). Recently, numerous architectures based on the Transformer framework have been proposed, leading to great achievements in applications. However, the working principles behind them still remain mysterious. In this work, we numerically investigate the geometrical properties of data representation learned by Transformers, via a mathematical concept called intrinsic dimension (ID), which can be viewed as the minimal number of parameters required for modeling. A series of experiments, mainly focusing on text classification tasks, backs up the following empirical claims on the relationships among embedding dimension, depth, the respective ID per layer, and task performance. First, we surprisingly observe that a higher ID (of terminal features extracted by Transformers) typically implies a lower classification error rate. This is contrary to what is observed for CNNs (or other models) on image classification tasks. In addition, it is shown that the ID per layer tends to decrease as the depth increases, and this reduction usually appears more significant for deeper architectures. Moreover, we give numerical evidence on the geometrical structures of data representation learned by Transformers, where only nonlinear dimension reduction can be achieved. Finally, we explore the effect of sequential lengths on the ID and tasks performance, which guarantees the validity of data reduction in training. We hope that these findings can play a guiding role in hyper-parameter selection and dimension/data reduction for Transformers on text classification and other mainstream NLP tasks.
1 INTRODUCTION
Transformers (Vaswani et al., 2017) have made a great difference in many machine learning fields, particularly leading to significant advances in natural language processing (NLP) and computer vision (CV). It has been shown that the Transformer architecture is capable of handling large-scale datasets, usually with the help of sufficiently many parameters, with BERT (Devlin et al., 2018), GPT-3 (Brown et al., 2020) and BART (Lewis et al., 2019) as typical examples, and achieves impressive performance: when the Transformer is trained on enough samples, it often outperforms other competing models such as CNNs (Dosovitskiy et al., 2020). As the potential of Transformers is further tapped, a large number of variants of Transformers have emerged. For example, Reformer (Kitaev et al., 2020) reduces the original computation complexity from O(L²) to O(L log L) by using locality-sensitive hashing, where L denotes the sequential length. Sparse Transformer (Child et al., 2019) introduces sparse factorizations to reduce the memory cost from O(L²) to O(L√L). Linformer (Wang et al., 2020) uses low-rank matrices to approximate the self-attention mechanism to further reduce both the computational cost and memory cost from O(L²) to O(L).
Despite the vigorous development of architectures, the working principles behind Transformers are still a mystery. The Transformer is often hard to train, and we still know little about how it works and how the performance changes when the embedding dimension and depth increase. However, clarifying these problems is quite important because people are developing larger and deeper Transformers with additional training techniques to get better performance. Recently, there have been some preliminary works. (Xiong et al., 2020) and (Popel & Bojar, 2018) numerically illustrated the effect of tuning hyper-parameters on training Transformers. (Huang et al., 2020) explores the difficulty of optimizing the Transformer model, and proposes a new initialization method to benefit the training of deeper Transformers. (Wang et al., 2019) also investigated how to train huge Transformer models. (Wang et al., 2022) successfully trained a Transformer with a depth of 1000 layers. In this work, we make an initial attempt to clarify the working mechanism of the Transformer from the perspective of the ID of the Transformer representation.
Generally, people realize that real-world data such as sounds, texts, images, etc., tends to possess some kind of low-dimensional structure. That is, only a small fraction of dimensions is required to characterize sampled data and underlying target relationships. Consider, for example, the dataset formed by all the 224 × 224 × 3 RGB pictures labeled as dogs. There are in principle 256^(224×224×3) possible such images, but the “intrinsic” number of pictures of dogs recognized by people is usually far smaller, with considerable similarities among them. Many algorithms and techniques in deep learning exploit the ubiquity of these low-dimensional data structures, such as (Hinton & Salakhutdinov, 2006) and (Gonzalez & Balajewicz, 2018). Intrinsic dimension (ID) (Amsaleg et al., 2015), (Houle et al., 2012), (Cutler, 1993) is an important mathematical tool to characterize the geometrical structure of data. It represents the minimal number of parameters required for modeling certain ground truths, hopefully capturing the low-dimensional data structures. In this work, we mainly investigate the data representation learned by Transformers to uncover different and interesting phenomena via the concept of ID. Our contributions can be summarized in four aspects:
• We analyze the variation of ID for data representation learned by successive Transformer blocks. A dimension reduction phenomenon appears across layers, which can be strengthened by deepening the architecture.
• We show the geometrical structures of Transformers for sequential modeling: it seems that Transformers can only achieve nonlinear dimension reduction when applied to text classification tasks.
• We explore the relationship among embedding dimension (ED), intrinsic dimension (ID) of learned representation and tasks performance, which motivates a straightforward interpretation on the benefit of increasing ED from the perspective of ID.
• We investigate the effect of training dataset reduction via sequential lengths, which shows negligible influence on the IDs and task performance. This potentially provides guidance for efficiency in practical applications.
2 RELATED WORK
TwoNN method. There are lots of works on how to estimate ID of the given datasets. For example, (Fukunaga & Olsen, 1971), (Bruske & Sommer, 1998) are methods based on PCA. (Levina & Bickel, 2004) is a method based on maximum likelihood estimation (MLE). (Costa & Hero, 2004) estimated the ID by using the so-called geodesic-minimal-spanning-tree. (Kégl, 2002) utilized capacity dimension to estimate intrinsic dimension . (Facco et al., 2017) presented an estimation approach named as TwoNN, which takes advantage of the closest and second closest samples to form modeling probability distributions. Considering its computational efficiency, the TwoNN method is adopted in this work to estimate intrinsic dimensions of the representations learned by Transformers.1
Mathematically, the TwoNN method has the following procedure. Given the dataset D = {x_i}_{i=1}^N, for each data point x_i, denote its distances to the closest and second closest samples as s_{i,1} and s_{i,2}, respectively. The TwoNN method estimates the ID by modeling the statistics
s_i := s_{i,2} / s_{i,1}.
Actually, we can model the statistics s_i := s_{i,2} / s_{i,1} by a Pareto distribution (Hussain et al., 2018), (Rootzén & Tajvidi, 2006) and hence get
1We refer https://github.com/ansuini/IntrinsicDimDeep for some codes.
P((s_1, s_2, ..., s_N) | d) = d^N \prod_{i=1}^{N} s_i^{-(d+1)}
Here, the intrinsic dimension of the dataset D, namely d, can be easily estimated by a standard maximum likelihood method. To make this clearer, we give the pipeline of the TwoNN method below.
• Randomly select N data points to form D = {x_i}_{i=1}^N.
• For each x_i in D, calculate the distances to its closest and second closest neighbors, denoted as s_{i,1} and s_{i,2}.
• For each x_i in D, compute the statistic s_i := s_{i,2} / s_{i,1}.
• Estimate d in P((s_1, s_2, ..., s_N) | d) = d^N \prod_{i=1}^{N} s_i^{-(d+1)} by maximum likelihood; this d is the estimate of the ID.
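As an illustration, the pipeline above can be condensed into a short estimator. The sketch below is an assumed implementation, not the authors' released code; the helper name twonn_id and the use of scikit-learn's NearestNeighbors are illustrative choices:

```python
# Minimal TwoNN sketch: estimate the intrinsic dimension of the rows of X.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(X: np.ndarray) -> float:
    # Distances to the two nearest neighbors (index 0 is the point itself).
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    s1, s2 = dist[:, 1], dist[:, 2]
    mask = s1 > 0                      # drop duplicate points
    mu = s2[mask] / s1[mask]           # the statistic s_i = s_{i,2} / s_{i,1}
    # MLE of the Pareto parameter d: d_hat = N / sum_i log(mu_i).
    return mu.size / np.sum(np.log(mu))
```

Applying such an estimator to the activations collected at each layer (for example, mean-pooled token features) would yield the kind of layer-wise IDs analyzed in the experiments below.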
Intrinsic dimension in deep learning. The behavior of intrinsic dimension in deep learning architectures has also been studied recently. For example, (Pope et al., 2021) generated fake images by GANs to control the upper bound of the ID in order to verify the accuracy of the ID estimation method. (Ansuini et al., 2019) analyzed the variation of the ID of representations across layers for some classical neural networks, such as ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2014), on image classification tasks. (Aghajanyan et al., 2020) studied the effect of pre-training on the intrinsic dimension for NLP tasks. However, compared to CNNs and other models applied to computer vision tasks, the related research on sequential models for NLP tasks, such as Transformers, is still limited despite their great popularity. The current work aims to fill this gap.
3 RESULTS
We first introduce the experiment setup, then report our results in five aspects.
3.1 EXPERIMENT SETTING
Tasks. To explore the ID of Transformers in a convenient manner, we conduct numerical experiments on the text classification task. The reasons are as follows. First, text classification is a representative and important task in natural language processing, and has wide applications such as spam detection (Crawford et al., 2015), (Asghar et al., 2020), text style classification (Wu et al., 2019), (Sudhakar et al., 2019), sentiment analysis (Medhat et al., 2014), (Xu et al., 2019) and so on. In addition, a great number of works, such as (Devlin et al., 2018), (Wang et al., 2020), (Shaheen et al., 2020) and so on, have shown the great success of the Transformer architecture applied to classification tasks. Moreover, people usually use only a series of encoder blocks (with an additional MLP (Rosenblatt, 1961) block for classification) in the Transformer framework when conducting text classification, which makes ID analysis convenient. As a comparison, for other tasks where decoders are necessary, e.g. text generation, one may encounter difficulties in the layer-wise ID computation and analysis.
Datasets. The experiments are performed on three datasets: IMDB (Maas et al., 2011), AG (Zhang et al., 2015) and SST2 (Socher et al., 2013). Among them, IMDB (Maas et al., 2011) is a binary-classification movie review dataset with 25,000 training samples and 25,000 test samples; AG (Zhang et al., 2015) is a four-class news article dataset with 120,000 training samples and 7,600 test samples; SST2 is a binary-classification movie review dataset with around 7,000 training samples and 2,000 test samples.
Models. We use the classic Transformer model (Vaswani et al., 2017) with successive encoder blocks to extract features and learn data representation of the full input texts. As a common practice and also for simplicity, pure MLPs (Rosenblatt, 1961) are applied for the terminal classification
layer, so that the Transformer encoders retain the dominant effect on the final performance.2
Goals and related hyper-parameters. The present work aims to study and analyze the relationship between intrinsic dimensions of the representations learned by Transformers and the corresponding classification performance. To achieve this and conduct more comprehensive and rigorous experiments, we set to vary the following hyper-parameters:
• D: depth, i.e. the total number of layers of the Transformer.
• ED: embedding (Mikolov et al., 2013) dimension. For simplicity and by convention (following the classic work (Vaswani et al., 2017)), we set the dimension of each hidden layer to be equal.
That is, the goal is to find out their influences on the ID and classification accuracy. We unfold the analysis from the following aspects.
3.2 THE LAYER-WISE VARIATION OF INTRINSIC DIMENSIONS
We first study the variation of ID across different layers. For a Transformer model with depth D and hidden dimensions {n_l}_{l=1}^D, every input sample is successively mapped into an n_l-dimensional vector space, l = 1, 2, ..., D. However, the hidden dimension is incapable of characterizing the inherent geometrical structures of the data. Here, we use the TwoNN (Facco et al., 2017) method to compute the respective ID per layer.
Following the setting presented in Section 3.1, we perform experiments on three datasets (AG, IMDB and SST2), using the Transformer model with varied depths and embedding dimensions: D ∈ {4, 6, 8}, and ED ∈ {128, 256, 512}. The intrinsic dimension and its variation with respect to different layers are shown in Figure 1. The horizontal axis of Figure 1 represents each layer, where emb and layer i denote the embedding layer and the i-th encoder layer, respectively. The vertical axis shows the corresponding ID. Every single line in Figure 1 represents the layer-wise variation of ID under a certain hyper-parameters configuration.
From Figure 1, one can straightforwardly obtain the following observations:
• With the increase of depth, the overall intrinsic dimension shows a downward trend. Generally, the ID may increase only at the first encoder layer (see Figure 1 (c)), then decreases through the following layers, and reaches its minimum at the last layer. This decreasing process of the ID across layers can be regarded as a type of dimension reduction performed by Transformers along with the extraction of effective information for the final classification.
2We also refer to https://github.com/lyeoni/nlp-tutorial/tree/master/text-classification-transformer for some codes.
• When fixing the model depth on a certain dataset, we find that the intrinsic dimension basically increases with the embedding dimension. In fact, according to Figure 1, the above conclusion always holds for all depths and datasets. This result is natural at the first glance, since when increasing the embedding dimension, the (initial) intrinsic dimension generally increases as well, which leads to an increment of the terminal ID.
To examine the dimension reduction effect in more detail, we can further check the variation of the ratio of intrinsic dimension to embedding dimension and hidden dimensions, which is convenient and straightforward since the last two have been set to be equal. It is shown that the ratio is about O(10^-1) at the embedding layer (approximately 0.1-0.3), while it is notably reduced to O(10^-2) at the final layer.
3.3 CORRELATIONS BETWEEN INTRINSIC DIMENSIONS AND CLASSIFICATION ACCURACY
Since ID characterizes the intrinsic (geometrical) structures of data distribution, and the classification performance directly depends on the final representation extracted by the last hidden layer of Transformers, it is reasonable to believe that the ID of the last hidden layer (terminal ID) is correlated with the predicted classification accuracy.
Therefore, we numerically investigate the relationship between terminal ID and the corresponding classification error rate for various configurations of hyper-parameters and datasets, see Figure 2. For robustness, the training involves several independent runs with randomized initialization to ensure convergence. The horizontal axis in Figure 2 represents the classification error rate and the vertical axis shows the ID of the representation learned by the last hidden layer. The dashed lines connect the results for different depths under a fixed embedding dimension, with the stars as mean values for both terminal ID and error rates.
From Figure 2, one can observe that although all experiments are performed under the same type of hypothesis space (i.e. Transformers), there are remarkable differences in terms of classification performance and terminal ID along with the model size (the ED herein). As is shown by the solid lines in Figure 2, the terminal ID basically increases with the embedding dimension, while the classification error changes adversely. Interestingly, the uncovered phenomenon for Transformers applied to NLP tasks is opposite to that in (Ansuini et al., 2019), where the terminal ID of CNNs is positively correlated with the classification error rate in image modeling.
The underlying interpretation may be as follows. When the embedding dimension increases, according to the discussion in Section 3.2, we get a larger ID for the representation of input data, resulting in an increment of the terminal ID. Meanwhile, a higher embedding dimension, as well as hidden
dimensions, definitely extends the corresponding hypothesis space for modeling. Hence, it is reasonable to expect better classification accuracy. Based on this phenomenon, increasing the embedding dimension helps to enhance classification performance via enlarging the intrinsic dimension.
This point may motivate a further extension towards practical guidance in applications: one can imagine a principled method that utilizes the terminal ID as a posterior “indicator” of generalization. That is, by monitoring the variation of the terminal ID during training, we may be able to attain better classification performance without using a completely separate dataset for both validation and test. This would be of great benefit when only limited data is available in practical applications, and hence deserves exploration in future work.
Remark 1 Our investigation shows the inherent nature of Transformer models on textual tasks. For ViT models, the conclusion no longer holds, as shown in Table 1. We notice that neither the ViT embedding dimension nor the ViT depth has much effect on the ID.
3.4 A PRINCIPAL COMPONENT ANALYSIS VIEWPOINT OF DATA REPRESENTATION
There are many tools to estimate the intrinsic dimension of data representation, such as a series of methods based on principal component analysis (PCA) ((Fukunaga & Olsen, 1971), (Bruske & Sommer, 1998), TwoNN (Facco et al., 2017) and so on). Due to the computational efficiency, the TwoNN algorithm is selected. One can refer to Section 2 for further details.
According to the results shown in Figure 1, we conclude that the intrinsic dimensions of Transformers are much smaller than the embedding and hidden dimensions. In this section, we further illustrate that the data representation learned by Transformers lies on low-dimensional but curved manifolds instead of flat subspaces, and hence is not amenable to model reduction via linear methods.
To achieve this, we perform the classic PCA (Pearson, 1901) method on the normalized covariance matrix of each layer in Transformers for varied hyper-parameters and all three datasets. Figure 3 shows the results obtained on the last hidden layer. The horizontal axis represents the order of eigenvalues of data representation in a descending sort. The vertical axis shows the value of
corresponding eigenvalues. The green and red vertical dotted lines denote the number of components required to capture 50% and 85% of the variance in data representation. Here, we call the abscissa indicated by the red line as PCA-ID, meaning the “pseudo” intrinsic dimension computed by a direct PCA method.
It is shown that the ID derived from the TwoNN method is much smaller than the PCA-ID. For example, the ID shown in Figure 1 (a) is about 34 for ED = 256 on the AG dataset, while the PCA-ID shown in Figure 3 (a) is about 180 under the same setting, which is 5-6 times larger. Furthermore, the ratio of PCA-ID to embedding dimension is about 0.7-0.9, which is much larger than that of the ID (0.05-0.15). The great difference between the ID inferred by TwoNN and by the PCA method shows the strong nonlinearity in the correlations among data samples. Based on the above results, we conclude that the space where the data representation is located is not a linear subspace but a curved manifold, which rules out basic linear model reduction.
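For reference, the PCA-ID used in this comparison can be computed as in the sketch below. This is an assumed implementation for illustration (the helper name pca_id is ours): it simply counts the number of principal components needed to explain a given fraction (85% here) of the variance of the layer's representation:

```python
# Minimal sketch: "PCA-ID" = number of components explaining 85% of variance.
import numpy as np
from sklearn.decomposition import PCA

def pca_id(X: np.ndarray, variance_threshold: float = 0.85) -> int:
    cumulative = np.cumsum(PCA().fit(X).explained_variance_ratio_)
    # Smallest (1-indexed) number of components reaching the threshold.
    return int(np.searchsorted(cumulative, variance_threshold) + 1)
```

Comparing pca_id(X) with the TwoNN estimate on the same activations gives the linear-versus-nonlinear gap discussed above.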
3.5 THE EFFECT OF INTRINSIC DIMENSION REDUCTION WITH RESPECT TO DEPTH
In Transformers as well as other neural network architectures, the model depth always plays an important role. A shallow model may have weak representation ability and poor training performance, while a quite deep model may lead to generalization issues such as overfitting (poor test performance) and requires unacceptable computation and memory cost. In this section, we further investigate the dependence of ID variation on the depth.
According to Figure 1, an ID reduction phenomenon across layers appears. To further track its effect with respect to the model depth, one can naturally focus on the gaps between IDs of embedding layers and last hidden layers. Since the relevant ID results have been already shown in Figure 1, we just summarize them in Table 2, 3 and 4.3 It is straightforward to have the following observations:
• Fixing the dataset and embedding dimension, the IDs of the embedding layers remain almost the same despite the change of depth.
• Fixing the dataset and embedding dimension, the IDs of the last hidden layers often decrease significantly with the increase of depth.
3Here, the depth 6 case is not included due to space constraints.
• Combining the above two points, the Transformer model shows a larger dimension reduction across layers for deeper architectures.
A direct explanation for the above phenomena is as follows. When fixing the embedding dimension, the learned representation of the input data remains essentially consistent, which implies similar IDs at the embedding layers. Meanwhile, according to Figure 1 and the discussion in Section 3.2, the ID reduction effect may strengthen through more layers, i.e. deeper models.
3.6 INSTANCES OF DATA REDUCTION: INFLUENCE OF SEQUENTIAL LENGTHS
In practice, the Transformer model usually possesses massive parameters and hence requires a large dataset to guarantee reasonable training performance, resulting in enormous space and time costs. Therefore, it would be meaningful if one could reduce the size of the training dataset (“data reduction”) without significant damage to test performance. Motivated by this, we aim to investigate the feasibility and procedure of data reduction in the training of Transformers from the viewpoint of the intrinsic dimension (of data representation). It is shown that, unlike other hyper-parameters such as the embedding dimension, the intrinsic dimension hardly changes with the sequential length of the data samples. This opens up the possibility of data reduction by appropriately discarding training samples.
As an initial attempt, we first focus on the sequential length of the samples in the training dataset. Specifically, given a dataset, we first sort all the sentences used for training by their lengths (i.e. word counts, see the length distributions shown in Figure 4), and then form a “long-set” by selecting the top 80% longest sentences in the training dataset. Similarly, one can form a corresponding “short-set” by selecting the top 80% shortest sentences, while the original training dataset without any reduction is called the “full-set”. In principle, we only remove extreme cases such as too short/long samples, hopefully avoiding any major impact on the training data and performance. Naturally, the test datasets remain unchanged.
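The subset construction described above amounts to a few lines of code. The following sketch is an assumed helper (the name split_by_length and the whitespace word-count tokenization are illustrative, not taken from the paper):

```python
# Minimal sketch: build the "short-set", "long-set" and "full-set" splits.
def split_by_length(sentences, keep_ratio=0.8):
    ranked = sorted(sentences, key=lambda s: len(s.split()))  # sort by word count
    k = int(keep_ratio * len(ranked))
    short_set = ranked[:k]        # top 80% shortest sentences
    long_set = ranked[-k:]        # top 80% longest sentences
    full_set = list(sentences)    # no reduction
    return short_set, long_set, full_set
```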
We conduct experiments on these three training sub-datasets: long-set, short-set and full-set, for various configurations of hyper-parameters (mainly embedding dimensions) and datasets. For each
Transformer model trained under the above settings, we record the IDs of the embedding layers as well as the final classification error rates (on the full test datasets) in Table 5.
Table 5 shows one key phenomenon. Checking the results in Table 5 row by row, both the IDs of the embedding layers and the classification errors do not change significantly, even though each Transformer model is trained on a different subset of the original training dataset. This helps to verify the validity of data reduction in training, at least with respect to sequential lengths. It would be valuable in applications where sufficient data is unavailable, and hence is worthy of exploration in future work.
4 CONCLUSION
In this work, we propose a new perspective to understand the mechanism of Transformers, which is related to the intrinsic dimensions of data representation. Many interesting phenomena are numerically uncovered to reveal the intricate relationships between intrinsic dimensions and task performance, with respect to hyper-parameters such as the depths and embedding dimensions of models and the sequential lengths of data. We form a series of empirical conclusions. On one hand, for the influence of hyper-parameters on intrinsic dimensions and classification task performance, it is shown that there are positive correlations among embedding dimensions, intrinsic dimensions and classification accuracy. In addition, intrinsic dimension reduction across layers exists, which can be strengthened by deepening architectures. On the other hand, for the interaction between model and data (i.e. the modeling effect), we give numerical evidence that the data representation learned by Transformers lies on curved manifolds. Furthermore, data reduction in training can be valid, which may motivate efficient ways to utilize data and provide applicable guidance for practical learning. This deserves further exploration in the future. Certainly, the outlook is not limited to these directions. We intend to extend the current research on classification to more general settings, particularly generative tasks. Moreover, the present work focuses on the basic Transformer network, and it is also necessary to further investigate the most commonly-used architectures such as typical pre-trained language models. Apart from that, it is our goal to perform quantitative analysis to fill the theoretical gaps between Transformers and intrinsic dimensions herein. | 1. What is the focus of the paper, and what are the research questions or hypotheses being investigated?
2. What are the strengths and weaknesses of the paper, particularly regarding the methodology and results?
3. Do you have any concerns or questions about the study's findings, especially regarding their contradictions with previous studies?
4. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work carries out an empirical study of intrinsic dimensionality (ID) and transformers. Specifically a study of relationships between ID, embedding dimension, depth, and task performance is carried out. In contrast to similar studies carried out on vision tasks with CNNs, it is observed that higher ID at the penultimate layer correlates with generalization. Other empirical observations such as layer-wise ID decreasing with depth are made. Lastly an empirical study of sequence length, ID, and task performance is carried out.
Strengths And Weaknesses
[+] Investigation of ID and transformers is new.
[-] Not clear what the following claim about data reduction guarantee means: "we explore the effect of sequential lengths on the ID and tasks performance, which guarantees the validity of data reduction in training" (Abstract)
[-] Positive correlations not evident in Figure 2. They appear to be negative correlations.
[-] Need greater confirmation of the finding that a higher ID in the penultimate layer correlates with better generalization, since this contradicts findings in the literature (Ansuini 2019).
Clarity, Quality, Novelty And Reproducibility
Unfortunately the presentation of this paper appears rushed. Plots are difficult to read. This work is not clearly written.
This work appears to be on a novel topic. I cannot find prior work which replicates the claims in this paper.
I did not find any code submitted to judge the reproducibility of the work. |
ICLR | Title
An Intrinsic Dimension Perspective of Transformers for Sequential Modeling
Abstract
Transformers have gained great popularity for sequential modeling, especially in fields such as natural language processing (NLP). Recently, numerous architectures based on the Transformer framework are proposed, leading to great achievements in applications. However, the working principles behind still remain mysterious. In this work, we numerically investigate the geometrical properties of data representation learned by Transformers, via a mathematical concept called intrinsic dimension (ID), which can be viewed as the minimal number of parameters required for modeling. A series of experiments, mainly focusing on text classification tasks, backs up the following empirical claims on relationships among embedding dimension, depth, respective ID per layer and tasks performance. First, we surprisingly observe that a higher ID (of terminal features extracted by Transformers) typically implies a lower classification error rate. This is contrary to that of CNNs (or other models) performed on image classification tasks. In addition, it is shown that the ID per layer tends to decrease as the depth increases, and this reduction usually appears more significant for deeper architectures. Moreover, we give numerical evidence on geometrical structures of data representation learned by Transformers, where only the nonlinear dimension reduction can be achieved. Finally, we explore the effect of sequential lengths on the ID and tasks performance, which guarantees the validity of data reduction in training. We hope that these findings can play a guiding role in hyper-parameters selection and dimension/data reduction for Transformers on text classification and other mainstream NLP tasks.
1 INTRODUCTION
Transformers (Vaswani et al., 2017) have made a great difference in many machine learning fields, particularly leading to significant advances in natural language processing (NLP) and computer vision (CV). It has been shown that the Transformer architecture is capable of handling large-scale datasets, usually with the help of sufficiently many parameters, with BERT (Devlin et al., 2018), GPT-3 (Brown et al., 2020) and BART (Lewis et al., 2019) as typical examples, and achieves impressive performance: when the Transformer is trained on enough samples, it often outperforms other competing models such as CNNs (Dosovitskiy et al., 2020). As the potential of Transformers is further tapped, a large number of variants of Transformers have emerged. For example, Reformer (Kitaev et al., 2020) reduces the original computational complexity from O(L^2) to O(L log L) by using locality-sensitive hashing, where L denotes the sequential length. Sparse Transformer (Child et al., 2019) introduces sparse factorizations to reduce the memory cost from O(L^2) to O(L√L). Linformer (Wang et al., 2020) uses low-rank matrices to approximate the self-attention mechanism to further reduce both the computational and memory cost from O(L^2) to O(L).
Despite the vigorous development of architectures, the working principles behind Transformers are still mysteries. The Transformer is often hard to train and we still know little about how it works and how the performance changes when the embedding dimension and depth increase. However, clarifying these problems is quite important because people are developing larger and deeper Transformers with additional training techniques to get better performance. Recently, there have been some preliminary works. (Xiong et al., 2020) and (Popel & Bojar, 2018) numerically illustrated the effect of tuning hyper-parameters on training Transformer. (Huang et al., 2020) explores the difficulty of optimizing the Transformer model, and proposes a new initialization method to benefit the
training of deeper Transformers. (Wang et al., 2019) also investigated on how to train huge Transformer models. (Wang et al., 2022) successfully trained a Transformer with a depth of 1000 layers. In this work, we make an initial attempt to clarify the working mechanism of the Transformer from the perspective of the ID of the Transformer representation.
Generally, people realize that the real-world data such as sounds, texts, images, etc, tends to possess some kind of low-dimensional structures. That is, only a small fraction of dimensions is required to characterize sampled data and underlying target relationships. It is reasonable to consider, for example, the dataset formed by all the 224 × 224 × 3 RGB pictures labeled as dogs. There are in principle 224×224×256×3 = 38535168 possibilities, but the “intrinsic” number of pictures of dogs recognized by people is usually much less, where considerable similarities are common. Many algorithms and techniques in deep learning formally exploit the ubiquity of these low-dimensional data structures, such as (Hinton & Salakhutdinov, 2006) and (Gonzalez & Balajewicz, 2018). Intrinsic dimension (ID) (Amsaleg et al., 2015),(Houle et al., 2012),(Cutler, 1993) is an important mathematical tool to characterize the geometrical structure of data. It represents the minimal number of parameters required for modeling certain ground truths, hopefully capturing the low-dimensional data structures. In this work, we mainly investigate the data representation by Transformers to uncover different and interesting phenomena via the concept of ID. Our contributions can be summarized as four aspects:
• We analyze the variation of ID for data representation learned by successive Transformer blocks. It appears a dimension reduction phenomenon across layers, which can be strengthened by deepening the architecture.
• We show the geometrical structures of Transformers for sequential modeling: it seems that Transformers can only achieve nonlinear dimension reduction when applied to text classification tasks.
• We explore the relationship among embedding dimension (ED), intrinsic dimension (ID) of learned representation and tasks performance, which motivates a straightforward interpretation on the benefit of increasing ED from the perspective of ID.
• We investigate the effect of training dataset reduction via sequential lengths, which shows negligible influence on the IDs and tasks performance. This motivates potentially a guidance for efficiency in practical applications.
2 RELATED WORK
TwoNN method. There are lots of works on how to estimate ID of the given datasets. For example, (Fukunaga & Olsen, 1971), (Bruske & Sommer, 1998) are methods based on PCA. (Levina & Bickel, 2004) is a method based on maximum likelihood estimation (MLE). (Costa & Hero, 2004) estimated the ID by using the so-called geodesic-minimal-spanning-tree. (Kégl, 2002) utilized capacity dimension to estimate intrinsic dimension . (Facco et al., 2017) presented an estimation approach named as TwoNN, which takes advantage of the closest and second closest samples to form modeling probability distributions. Considering its computational efficiency, the TwoNN method is adopted in this work to estimate intrinsic dimensions of the representations learned by Transformers.1
Mathematically, the TwoNN has the following procedure. Given the dataset $D = \{x_i\}_{i=1}^{N}$, for each data point $x_i$, denote its distance to the closest and second closest sample as $s_{i,1}$ and $s_{i,2}$, respectively. The TwoNN method estimates the ID by modeling the statistic
$$s_i := \frac{s_{i,2}}{s_{i,1}}.$$
Actually we can model the statistic $s_i := s_{i,2}/s_{i,1}$ by a Pareto distribution (Hussain et al., 2018), (Rootzén & Tajvidi, 2006) and hence get
$$P((s_1, s_2, \ldots, s_N) \mid d) = d^N \prod_{i=1}^{N} s_i^{-(d+1)}.$$
1We refer to https://github.com/ansuini/IntrinsicDimDeep for some code.
Here, the intrinsic dimension of the dataset D, namely d, can be easily estimated by a common maximum likelihood method. To be more precise, we give a pipeline of the TwoNN method below.
• Randomly select N data points to get $D = \{x_i\}_{i=1}^{N}$.
• For each $x_i$ in D, calculate the distances to the closest and second closest data points, denoted as $s_{i,1}$ and $s_{i,2}$.
• For each $x_i$ in D, calculate the statistic $s_i := s_{i,2}/s_{i,1}$.
• Estimate d in $P((s_1, s_2, \ldots, s_N) \mid d) = d^N \prod_{i=1}^{N} s_i^{-(d+1)}$ by the maximum likelihood method; this d is the estimate of the ID.
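As an illustration of the pipeline above, here is a minimal NumPy sketch of the TwoNN estimator. It uses the closed-form maximum likelihood solution d = N / Σ_i log(s_{i,2}/s_{i,1}) implied by the Pareto likelihood; the Euclidean metric and the handling of duplicate points are our own assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def twonn_id(X):
    """TwoNN intrinsic dimension estimate for representations X of shape (N, D)."""
    dists = cdist(X, X)                      # pairwise Euclidean distances
    np.fill_diagonal(dists, np.inf)          # ignore self-distances
    nearest = np.sort(dists, axis=1)
    s1, s2 = nearest[:, 0], nearest[:, 1]    # closest and second closest neighbors
    mu = s2 / s1
    mu = mu[np.isfinite(mu) & (mu > 1.0)]    # drop degenerate (duplicate) points
    # MLE of d under P((s_1, ..., s_N) | d) = d^N * prod_i s_i^{-(d+1)}
    return len(mu) / np.sum(np.log(mu))
```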
Intrinsic dimension in deep learning. The behavior of the intrinsic dimension in deep learning architectures has also been studied recently. For example, (Pope et al., 2021) generated fake images with a GAN to control the upper bound of the ID in order to verify the accuracy of the ID estimation method. (Ansuini et al., 2019) analyzed the variation of the ID of the representations across layers for some classical neural networks, such as ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2014), on image classification tasks. (Aghajanyan et al., 2020) studied the effect of pre-training on the intrinsic dimension for NLP tasks. However, compared to CNNs and other models applied to computer vision tasks, the related research on sequential models for NLP tasks such as Transformers is still limited despite their great popularity. The current work aims to fill this gap.
3 RESULTS
We first introduce the experiment setup, then report our results in five aspects.
3.1 EXPERIMENT SETTING
Tasks. To explore the ID of Transformers in a convenient manner, we conduct numerical experiments on the text classification task. The reasons are as follows. First, the text classification is a representative but important task in natural language processing, and has wide applications such as spam detection (Crawford et al., 2015), (Asghar et al., 2020), text style classification (Wu et al., 2019), (Sudhakar et al., 2019), sentiment analysis (Medhat et al., 2014), (Xu et al., 2019) and so on. In addition, a great number of works, such as (Devlin et al., 2018), (Wang et al., 2020), (Shaheen et al., 2020) and so on, have shown a great success of the Transformer architecture applied to the classification task. Moreover, people usually use only a series of encoder blocks (with an additional MLP (Rosenblatt, 1961) block for classification) in the framework of Transformer when conducting the text classification, which implies the convenience for ID analysis. As a comparison, for other tasks where decoders are necessary, e.g. the text generation, one may encounter difficulties for the layer-wise ID computation and analysis.
Datasets. The experiments are performed on three datasets: IMDB (Maas et al., 2011), AG (Zhang et al., 2015) and SST2 (Socher et al., 2013). Among them, IMDB (Maas et al., 2011) is a two classification movie review dataset with 25,000 training data and 25,000 test data; AG (Zhang et al., 2015) is a news articles four classification dataset with 120,000 training data and 7,600 test data; SST2 is a two classification movie review dataset with around 7,000 training data and 2,000 test data.
Models. We use the classic Transformer model (Vaswani et al., 2017) with successive encoder blocks to extract features and learn data representation of the full input texts. As a common practice and also for simplicity, pure MLPs (Rosenblatt, 1961) are applied for the terminal classification
layer, which maintains to a large extent the dominating effect on the final performance of the Transformer (encoders).2
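For concreteness, the following PyTorch sketch shows the kind of encoder-plus-MLP classifier described above; the depth and embedding dimension correspond to the hyper-parameters introduced next, while the number of heads, the positional encoding and the mean pooling are our own assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class TransformerClassifier(nn.Module):
    """Encoder-only Transformer with an MLP head, as described above."""
    def __init__(self, vocab_size, num_classes, depth=4, emb_dim=128,
                 n_heads=4, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.pos = nn.Embedding(max_len, emb_dim)
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                  nn.Linear(emb_dim, num_classes))

    def forward(self, tokens):                       # tokens: (B, L) integer ids
        pos_ids = torch.arange(tokens.size(1), device=tokens.device)
        h = self.embed(tokens) + self.pos(pos_ids)   # (B, L, emb_dim)
        h = self.encoder(h)
        return self.head(h.mean(dim=1))              # mean-pool over tokens
```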
Goals and related hyper-parameters. The present work aims to study and analyze the relationship between intrinsic dimensions of the representations learned by Transformers and the corresponding classification performance. To achieve this and conduct more comprehensive and rigorous experiments, we set to vary the following hyper-parameters:
• D: depth, i.e. the total number of layers of the Transformer.
• ED: embedding (Mikolov et al., 2013) dimension. For simplicity and by convention (following the classic work (Vaswani et al., 2017)), we set the dimension of each hidden layer to be equal.
That is, the goal is to find out their influences on the ID and classification accuracy. We unfold the analysis from the following aspects.
3.2 THE LAYER-WISE VARIATION OF INTRINSIC DIMENSIONS
We first study the variation of the ID across different layers. For a Transformer model with depth D and hidden dimensions $\{n_l\}_{l=1}^{D}$, every input sample is successively mapped into an $n_l$-dimensional vector space, $l = 1, 2, \cdots, D$. However, the hidden dimension is incapable of characterizing the inherent geometrical structures of the data. Here, we use the TwoNN (Facco et al., 2017) method to compute the respective ID per layer.
Following the setting presented in Section 3.1, we perform experiments on three datasets (AG, IMDB and SST2), using the Transformer model with varied depths and embedding dimensions: D ∈ {4, 6, 8}, and ED ∈ {128, 256, 512}. The intrinsic dimension and its variation with respect to different layers are shown in Figure 1. The horizontal axis of Figure 1 represents each layer, where emb and layer i denote the embedding layer and the i-th encoder layer, respectively. The vertical axis shows the corresponding ID. Every single line in Figure 1 represents the layer-wise variation of ID under a certain hyper-parameters configuration.
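A hedged sketch of how the layer-wise IDs could be collected is shown below, building on the TransformerClassifier and twonn_id sketches above; it mean-pools token representations at the embedding layer and after every encoder block via forward hooks. The pooling choice is an assumption, not necessarily the authors' protocol.

```python
import torch

@torch.no_grad()
def layerwise_ids(model, token_batches, estimator=twonn_id):
    """Collect mean-pooled features after the embedding layer and after each
    encoder block, then estimate the intrinsic dimension of each with TwoNN."""
    feats = {}                                    # layer name -> list of tensors
    def store(name):
        def hook(_module, _inp, out):
            feats.setdefault(name, []).append(out.mean(dim=1).cpu())
        return hook
    handles = [model.embed.register_forward_hook(store("emb"))]
    for i, layer in enumerate(model.encoder.layers):
        handles.append(layer.register_forward_hook(store(f"layer {i+1}")))
    for tokens in token_batches:                  # forward passes trigger the hooks
        model(tokens)
    for h in handles:
        h.remove()
    return {name: estimator(torch.cat(t).numpy()) for name, t in feats.items()}
```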
From Figure 1, one can straightforwardly obtain the following observations:
• With the increase of depth, the overall intrinsic dimension appears a downward trend. Generally, the ID may only increase at the first encoder layer (see Figure 1 (c)), and then decreases through the following layers, and reaches the minimum at the last layer. This decrease process of ID across layers can be regarded as a type of dimension reduction for Transformers along with the extraction of effective information for the final classification.
2We also refer https://github.com/lyeoni/nlp-tutorial/tree/master/text-classification-transformer for some codes.
• When fixing the model depth on a certain dataset, we find that the intrinsic dimension basically increases with the embedding dimension. In fact, according to Figure 1, the above conclusion always holds for all depths and datasets. This result is natural at the first glance, since when increasing the embedding dimension, the (initial) intrinsic dimension generally increases as well, which leads to an increment of the terminal ID.
To examine the dimension reduction effect in more detail, we can further check the variation of the ratio of the intrinsic dimension over the embedding dimension and hidden dimensions, which is convenient and straightforward since the last two have been set to be equal. It is shown that the ratio is about O(10^-1) at the embedding layer (approximately 0.1-0.3), while it is notably reduced to O(10^-2) at the final layer.
3.3 CORRELATIONS BETWEEN INTRINSIC DIMENSIONS AND CLASSIFICATION ACCURACY
Since ID characterizes the intrinsic (geometrical) structures of data distribution, and the classification performance directly depends on the final representation extracted by the last hidden layer of Transformers, it is reasonable to believe that the ID of the last hidden layer (terminal ID) is correlated with the predicted classification accuracy.
Therefore, we numerically investigate the relationship between terminal ID and corresponding classification error rate for various configurations of hyper-parameters and datasets, see Figure 2. For robustness, the training costs several independent runs using randomized initialization to ensure the convergence. The horizontal axis in Figure 2 represents the error rate of classification and vertical axis shows the ID of representation learned by the last hidden layer. The dash lines connect the results for different depths under a fixed embedding dimension, with the stars as mean values for both terminal ID and error rates.
From Figure 2, one can observe that although all experiments are performed under the same type of hypothesis space (i.e. Transformers), there are remarkable differences in terms of classification performance and terminal ID along with the model size (the ED herein). As shown by the solid lines in Figure 2, the terminal ID basically increases with the embedding dimension, while the classification error changes in the opposite direction. Interestingly, the uncovered phenomenon for Transformers applied to NLP tasks is opposite to that in (Ansuini et al., 2019), where the terminal ID of CNNs is positively correlated with the classification error rate in image modeling.
The underlying interpretation may be as follows. When the embedding dimension increases, according to the discussion in Section 3.2, we get a larger ID for the representation of input data, resulting in an increment of the terminal ID. Meanwhile, a higher embedding dimension, as well as hidden
dimensions, definitely extend the corresponding hypothesis space for modeling. Hence, it is reasonable to gain a better classification accuracy. Based on this phenomenon, increasing the embedding dimension helps to enhance classification performance via enlarging the intrinsic dimension.
This point may motivate a further extension for practical guidance in applications: one can imagine a principled method to utilize the terminal ID as a posterior “indicator” for generalization. That is, by monitoring the variation of the terminal ID during training, we may be able to obtain better classification performance without using a completely new dataset for both validation and test. This would be particularly beneficial when data is limited in practical applications and hence deserves exploration in future work.
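As a purely illustrative sketch of this idea (not the authors' procedure), one could log the terminal ID once per epoch during ordinary supervised training, reusing the layerwise_ids helper above:

```python
import torch
import torch.nn as nn

def train_with_id_monitor(model, loader, probe_batches, epochs=10, lr=1e-4):
    """Train the classifier and record the terminal (last encoder layer) ID per epoch."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    history = []
    for epoch in range(epochs):
        model.train()
        for tokens, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(tokens), labels)
            loss.backward()
            opt.step()
        model.eval()
        ids = layerwise_ids(model, probe_batches)          # sketch defined above
        terminal_id = ids[f"layer {len(model.encoder.layers)}"]
        history.append(terminal_id)
        print(f"epoch {epoch}: terminal ID = {terminal_id:.2f}")
    return history
```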
Remark 1 Our investigation shows the inherent nature of Transformer models on textual tasks. For ViT models, the conclusion no longer holds, as shown in Table 1. We notice that both the ViT embedding dimension and the ViT depth have little effect on the ID.
3.4 A PRINCIPAL COMPONENT ANALYSIS VIEWPOINT OF DATA REPRESENTATION
There are many tools to estimate the intrinsic dimension of data representation, such as a series of methods based on principal component analysis (PCA) ((Fukunaga & Olsen, 1971), (Bruske & Sommer, 1998), TwoNN (Facco et al., 2017) and so on). Due to the computational efficiency, the TwoNN algorithm is selected. One can refer to Section 2 for further details.
According to the results shown in Figure 1, we conclude that the intrinsic dimensions of Transformers are much smaller than embedding and hidden dimensions. In this section, we will further illustrate that the data representation learned by Transformers exists on low-dimensional but curved manifolds instead of flat subspaces, hence are incapable of model reduction via linear methods.
To achieve this, we perform the classic PCA (Pearson, 1901) method on the normalized covariance matrix of each layer in Transformers for varied hyper-parameters and all three datasets. Figure 3 shows the results obtained on the last hidden layer. The horizontal axis represents the order of eigenvalues of data representation in a descending sort. The vertical axis shows the value of
corresponding eigenvalues. The green and red vertical dotted lines denote the number of components required to capture 50% and 85% of the variance in data representation. Here, we call the abscissa indicated by the red line as PCA-ID, meaning the “pseudo” intrinsic dimension computed by a direct PCA method.
It is shown that the ID derived from the TwoNN method is much smaller than the PCA-ID. For example, the ID shown in Figure 1 (a) is about 34 for ED = 256 on the AG dataset, while the PCA-ID shown in Figure 3 (a) is about 180 under the same setting, which is 5-6 times larger. Furthermore, the ratio of PCA-ID with respect to the embedding dimension is about 0.7-0.9, which is much larger than that of the ID (0.05-0.15). The great difference between the ID inferred by the TwoNN method and the PCA method shows the strong nonlinearity in the correlations among data samples. Based on the above results, we conclude that the space where the data representation is located is not a linear subspace but a curved manifold, which prevents one from performing basic linear model reduction.
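A small sketch of the PCA-based comparison is given below; the 85% variance threshold for the PCA-ID follows the text, and the function name is our own.

```python
import numpy as np

def pca_id(X, threshold=0.85):
    """Number of principal components needed to explain `threshold` of the
    variance of the representations X, shape (N, D)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    # Eigenvalues of the covariance matrix, sorted in descending order.
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
    ratio = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(ratio, threshold) + 1)
```

Comparing pca_id(X) with twonn_id(X) on the same last-layer representations reproduces the kind of gap discussed above.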
3.5 THE EFFECT OF INTRINSIC DIMENSION REDUCTION WITH RESPECT TO DEPTH
In Transformers as well as other neural network architectures, the model depth always plays an important role. A shallow model may have weak representation ability and poor training performance, while a quite deep model may lead to generalization issues such as overfitting (poor test performance) and requires unacceptable computation and memory cost. In this section, we further investigate the dependence of ID variation on the depth.
According to Figure 1, an ID reduction phenomenon across layers appears. To further track its effect with respect to the model depth, one can naturally focus on the gaps between the IDs of the embedding layers and the last hidden layers. Since the relevant ID results have already been shown in Figure 1, we just summarize them in Tables 2, 3 and 4.3 It is straightforward to make the following observations:
• Fixing the dataset and embedding dimension, the IDs of the embedding layers remain almost the same despite the change of depth.
• Fixing the dataset and embedding dimension, the IDs of the last hidden layers often decrease significantly with the increase of depth.
3Here, the depth 6 case is not included due to space constraints.
• Combining the above two points, the Transformer model shows a larger dimension reduction across layers for deeper architectures.
A direct explanation for the above phenomena is as follows. When fixing the embedding dimension, the learned representation of the input data basically remains consistent, which implies similar IDs at the embedding layers. Meanwhile, according to Figure 1 and the discussion in Section 3.2, the ID reduction effect may strengthen through more layers, i.e. deeper models.
3.6 INSTANCES OF DATA REDUCTION: INFLUENCE OF SEQUENTIAL LENGTHS
In practice, the Transformer model usually possesses massive parameters and hence requires a large dataset to guarantee reasonable training performance, resulting in enormous space and time costs. Therefore, it would be meaningful if one could reduce the size of the training dataset (“data reduction”) without significant damage to the test performance. Motivated by this, we aim to investigate the feasibility and procedure of data reduction in the training of Transformers from the viewpoint of the intrinsic dimension (of the data representation). It is shown that unlike other hyper-parameters such as the embedding dimension, the intrinsic dimension hardly changes with the sequential length of data samples. This suggests potential opportunities to achieve data reduction by appropriately discarding training samples.
As an initial attempt, we first focus on the sequential length of samples in the training dataset. Specifically, given a dataset, we first sort all the sentences used for training by their lengths (i.e. word counts, see the length distributions shown in Figure 4), and then form a “long-set” by selecting the top 80% longest sentences in the training dataset. Similarly, one can form a corresponding “short-set” by selecting the top 80% shortest sentences, while the original training dataset without any reduction is called the “full-set”. In principle, we only remove extreme cases such as very short or very long samples, hopefully without affecting the training performance. Naturally, the test datasets remain unchanged.
We conduct experiments on these three training sub-datasets: long-set, short-set and full-set, for various configurations of hyper-parameters (mainly embedding dimensions) and datasets. For each
Transformer model trained under the above settings, we train and record the IDs of embedding layers as well as the final classification error rates (on the full test datasets) in Table 5.
There is one key phenomenon shown in Table 5. That is, if we check the results in Table 5 row by row, both the IDs of the embedding layers and the classification errors do not change significantly, even though each Transformer model is trained on a different subset of the original training dataset. This helps to verify the validity of data reduction in training, at least with respect to sequential lengths. It would be valuable in applications where sufficient data is unavailable, and hence is worth exploring in future work.
4 CONCLUSION
In this work, we propose a new perspective to understand the mechanism of Transformers, which is related to intrinsic dimensions of data representation. Many interesting phenomena are numerically uncovered to reveal the intricate relationships between intrinsic dimensions and task performance, with respect to hyper-parameters such as depths, embedding dimensions of models and sequential lengths of data. We form a series of empirical conclusions. On one hand, for the influence of hyper-parameters on intrinsic dimensions and classification task performance, it is shown that there are positive correlations among embedding dimensions, intrinsic dimensions and classification accuracy. In addition, the intrinsic dimension reduction across layers exists, which can be strengthened by deepening architectures. On the other hand, for the interaction between model and data (i.e. the modeling effect), we give numerical evidence that the data representation learned by Transformers lies on curved manifolds. Furthermore, data reduction in training can be valid, which possibly motivates efficient methods to utilize data and applicable guidance for practical learning. This deserves further exploration in the future. Of course, the outlook is not limited to these directions. We intend to extend the current research on classification to more general settings, particularly generative tasks. Moreover, the present work focuses on the basic Transformer network, and it is also necessary to further investigate the most commonly used architectures such as typical pre-trained language models. Apart from that, it is our goal to perform quantitative analysis to fill the theoretical gaps between Transformers and intrinsic dimensions herein. | 1. What is the focus of the paper, and what are the key discoveries made by the authors regarding learned representations in transformer architectures?
2. What are the strengths and weaknesses of the paper, particularly regarding its contributions and novelty compared to prior works?
3. Do you have any concerns regarding the value and correctness of the four major discoveries summarized in the review, especially concerning their relation to previous research?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, including figures and tables? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the Intrinsic Dimension (ID) of learned representations in a transformer architecture. The key discoveries are on text tasks:
Representation at deeper layers have smaller ID.
ID increases if we increase the width of the architecture (i.e., extrinsic dimension of each transformer layer)
ID at deepest layer is positively correlated with classification accuracy.
ID is smaller than global dimension estimator such as eigenvalues of representation matrix, implying a curved manifold distribution
Strengths And Weaknesses
The major concern is on the value of the four major discoveries summarized above.
Using ID to study the representations in transformers is not new. The paper misses an important reference, https://openreview.net/pdf?id=xYGNO86OWDH, which is quite recent and reasonably cited. In fact, that paper also uses ID and suggests same discovery 4 above. However, that paper shows ID increases at deeper layers, which contradicts discovery 1 in this paper. Without citing and discussing this reference, it is hard for me to judge which one is correct or more convincing.
Discovery 2 is not that surprising. On the other hand, discovery 3 depends on whether discovery 1 is true. The community basically agrees that we need deeper model to achieve better accuracy on tasks. In that sense, discovery 3 is really about whether ID decays or increases w.r.t. depth. Since discovery 1 could be wrong, it's hard to judge the correctness and value of discovery 3.
Clarity, Quality, Novelty And Reproducibility
A few figures are not clearly annotated. In specific:
Fig 2: what are the small numbers along the lines? Layer index? What is the solid line? Is it connecting the average of the datapoints for each ED? Please add a description in the figure caption.
Fig 4: there is no label on plot axes. Hard to read
In addition, it's hard to read out a conclusion from table 5. |
ICLR | Title
An Intrinsic Dimension Perspective of Transformers for Sequential Modeling
Abstract
Transformers have gained great popularity for sequential modeling, especially in fields such as natural language processing (NLP). Recently, numerous architectures based on the Transformer framework are proposed, leading to great achievements in applications. However, the working principles behind still remain mysterious. In this work, we numerically investigate the geometrical properties of data representation learned by Transformers, via a mathematical concept called intrinsic dimension (ID), which can be viewed as the minimal number of parameters required for modeling. A series of experiments, mainly focusing on text classification tasks, backs up the following empirical claims on relationships among embedding dimension, depth, respective ID per layer and tasks performance. First, we surprisingly observe that a higher ID (of terminal features extracted by Transformers) typically implies a lower classification error rate. This is contrary to that of CNNs (or other models) performed on image classification tasks. In addition, it is shown that the ID per layer tends to decrease as the depth increases, and this reduction usually appears more significant for deeper architectures. Moreover, we give numerical evidence on geometrical structures of data representation learned by Transformers, where only the nonlinear dimension reduction can be achieved. Finally, we explore the effect of sequential lengths on the ID and tasks performance, which guarantees the validity of data reduction in training. We hope that these findings can play a guiding role in hyper-parameters selection and dimension/data reduction for Transformers on text classification and other mainstream NLP tasks.
1 INTRODUCTION
Transformers (Vaswani et al., 2017) have made a great difference in many machine learning fields, particularly leading to significant advances in natural language processing (NLP) and computer vision (CV). It has been shown that the Transformer architecture is capable of handling large-scale datasets, usually with the help of sufficiently many parameters, with BERT (Devlin et al., 2018), GPT-3 (Brown et al., 2020) and BART (Lewis et al., 2019) as typical examples, and achieves impressive performance: when the Transformer is trained on enough samples, it often outperforms other competing models such as CNNs (Dosovitskiy et al., 2020). As the potential of Transformers is further tapped, a large number of variants of Transformers have emerged. For example, Reformer (Kitaev et al., 2020) reduces the original computational complexity from O(L^2) to O(L log L) by using locality-sensitive hashing, where L denotes the sequential length. Sparse Transformer (Child et al., 2019) introduces sparse factorizations to reduce the memory cost from O(L^2) to O(L√L). Linformer (Wang et al., 2020) uses low-rank matrices to approximate the self-attention mechanism to further reduce both the computational and memory cost from O(L^2) to O(L).
Despite the vigorous development of architectures, the working principles behind Transformers are still mysteries. The Transformer is often hard to train and we still know little about how it works and how the performance changes when the embedding dimension and depth increase. However, clarifying these problems is quite important because people are developing larger and deeper Transformers with additional training techniques to get better performance. Recently, there have been some preliminary works. (Xiong et al., 2020) and (Popel & Bojar, 2018) numerically illustrated the effect of tuning hyper-parameters on training Transformer. (Huang et al., 2020) explores the difficulty of optimizing the Transformer model, and proposes a new initialization method to benefit the
training of deeper Transformers. (Wang et al., 2019) also investigated on how to train huge Transformer models. (Wang et al., 2022) successfully trained a Transformer with a depth of 1000 layers. In this work, we make an initial attempt to clarify the working mechanism of the Transformer from the perspective of the ID of the Transformer representation.
Generally, people realize that the real-world data such as sounds, texts, images, etc, tends to possess some kind of low-dimensional structures. That is, only a small fraction of dimensions is required to characterize sampled data and underlying target relationships. It is reasonable to consider, for example, the dataset formed by all the 224 × 224 × 3 RGB pictures labeled as dogs. There are in principle 224×224×256×3 = 38535168 possibilities, but the “intrinsic” number of pictures of dogs recognized by people is usually much less, where considerable similarities are common. Many algorithms and techniques in deep learning formally exploit the ubiquity of these low-dimensional data structures, such as (Hinton & Salakhutdinov, 2006) and (Gonzalez & Balajewicz, 2018). Intrinsic dimension (ID) (Amsaleg et al., 2015),(Houle et al., 2012),(Cutler, 1993) is an important mathematical tool to characterize the geometrical structure of data. It represents the minimal number of parameters required for modeling certain ground truths, hopefully capturing the low-dimensional data structures. In this work, we mainly investigate the data representation by Transformers to uncover different and interesting phenomena via the concept of ID. Our contributions can be summarized as four aspects:
• We analyze the variation of ID for data representation learned by successive Transformer blocks. It appears a dimension reduction phenomenon across layers, which can be strengthened by deepening the architecture.
• We show the geometrical structures of Transformers for sequential modeling: it seems that Transformers can only achieve nonlinear dimension reduction when applied to text classification tasks.
• We explore the relationship among embedding dimension (ED), intrinsic dimension (ID) of learned representation and tasks performance, which motivates a straightforward interpretation on the benefit of increasing ED from the perspective of ID.
• We investigate the effect of training dataset reduction via sequential lengths, which shows negligible influence on the IDs and tasks performance. This motivates potentially a guidance for efficiency in practical applications.
2 RELATED WORK
TwoNN method. There are lots of works on how to estimate ID of the given datasets. For example, (Fukunaga & Olsen, 1971), (Bruske & Sommer, 1998) are methods based on PCA. (Levina & Bickel, 2004) is a method based on maximum likelihood estimation (MLE). (Costa & Hero, 2004) estimated the ID by using the so-called geodesic-minimal-spanning-tree. (Kégl, 2002) utilized capacity dimension to estimate intrinsic dimension . (Facco et al., 2017) presented an estimation approach named as TwoNN, which takes advantage of the closest and second closest samples to form modeling probability distributions. Considering its computational efficiency, the TwoNN method is adopted in this work to estimate intrinsic dimensions of the representations learned by Transformers.1
Mathematically, the TwoNN has the following procedure. Given the dataset $D = \{x_i\}_{i=1}^{N}$, for each data point $x_i$, denote its distance to the closest and second closest sample as $s_{i,1}$ and $s_{i,2}$, respectively. The TwoNN method estimates the ID by modeling the statistic
$$s_i := \frac{s_{i,2}}{s_{i,1}}.$$
Actually we can model the statistic $s_i := s_{i,2}/s_{i,1}$ by a Pareto distribution (Hussain et al., 2018), (Rootzén & Tajvidi, 2006) and hence get
$$P((s_1, s_2, \ldots, s_N) \mid d) = d^N \prod_{i=1}^{N} s_i^{-(d+1)}.$$
1We refer to https://github.com/ansuini/IntrinsicDimDeep for some code.
Here, the intrinsic dimension of the dataset D, namely d, can be easily estimated by a common maximum likelihood method. To be more precise, we give a pipeline of the TwoNN method below.
• Randomly select N data points to get $D = \{x_i\}_{i=1}^{N}$.
• For each $x_i$ in D, calculate the distances to the closest and second closest data points, denoted as $s_{i,1}$ and $s_{i,2}$.
• For each $x_i$ in D, calculate the statistic $s_i := s_{i,2}/s_{i,1}$.
• Estimate d in $P((s_1, s_2, \ldots, s_N) \mid d) = d^N \prod_{i=1}^{N} s_i^{-(d+1)}$ by the maximum likelihood method; this d is the estimate of the ID.
Intrinsic dimension in deep learning. The behavior of the intrinsic dimension in deep learning architectures has also been studied recently. For example, (Pope et al., 2021) generated fake images with a GAN to control the upper bound of the ID in order to verify the accuracy of the ID estimation method. (Ansuini et al., 2019) analyzed the variation of the ID of the representations across layers for some classical neural networks, such as ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2014), on image classification tasks. (Aghajanyan et al., 2020) studied the effect of pre-training on the intrinsic dimension for NLP tasks. However, compared to CNNs and other models applied to computer vision tasks, the related research on sequential models for NLP tasks such as Transformers is still limited despite their great popularity. The current work aims to fill this gap.
3 RESULTS
We first introduce the experiment setup, then report our results in five aspects.
3.1 EXPERIMENT SETTING
Tasks. To explore the ID of Transformers in a convenient manner, we conduct numerical experiments on the text classification task. The reasons are as follows. First, the text classification is a representative but important task in natural language processing, and has wide applications such as spam detection (Crawford et al., 2015), (Asghar et al., 2020), text style classification (Wu et al., 2019), (Sudhakar et al., 2019), sentiment analysis (Medhat et al., 2014), (Xu et al., 2019) and so on. In addition, a great number of works, such as (Devlin et al., 2018), (Wang et al., 2020), (Shaheen et al., 2020) and so on, have shown a great success of the Transformer architecture applied to the classification task. Moreover, people usually use only a series of encoder blocks (with an additional MLP (Rosenblatt, 1961) block for classification) in the framework of Transformer when conducting the text classification, which implies the convenience for ID analysis. As a comparison, for other tasks where decoders are necessary, e.g. the text generation, one may encounter difficulties for the layer-wise ID computation and analysis.
Datasets. The experiments are performed on three datasets: IMDB (Maas et al., 2011), AG (Zhang et al., 2015) and SST2 (Socher et al., 2013). Among them, IMDB (Maas et al., 2011) is a two classification movie review dataset with 25,000 training data and 25,000 test data; AG (Zhang et al., 2015) is a news articles four classification dataset with 120,000 training data and 7,600 test data; SST2 is a two classification movie review dataset with around 7,000 training data and 2,000 test data.
Models. We use the classic Transformer model (Vaswani et al., 2017) with successive encoder blocks to extract features and learn data representation of the full input texts. As a common practice and also for simplicity, pure MLPs (Rosenblatt, 1961) are applied for the terminal classification
layer, which maintains to a large extent the dominating effect on the final performance of the Transformer (encoders).2
Goals and related hyper-parameters. The present work aims to study and analyze the relationship between intrinsic dimensions of the representations learned by Transformers and the corresponding classification performance. To achieve this and conduct more comprehensive and rigorous experiments, we set to vary the following hyper-parameters:
• D: depth, i.e. the total number of layers of the Transformer.
• ED: embedding (Mikolov et al., 2013) dimension. For simplicity and by convention (following the classic work (Vaswani et al., 2017)), we set the dimension of each hidden layer to be equal.
That is, the goal is to find out their influences on the ID and classification accuracy. We unfold the analysis from the following aspects.
3.2 THE LAYER-WISE VARIATION OF INTRINSIC DIMENSIONS
We first study the variation of the ID across different layers. For a Transformer model with depth D and hidden dimensions $\{n_l\}_{l=1}^{D}$, every input sample is successively mapped into an $n_l$-dimensional vector space, $l = 1, 2, \cdots, D$. However, the hidden dimension is incapable of characterizing the inherent geometrical structures of the data. Here, we use the TwoNN (Facco et al., 2017) method to compute the respective ID per layer.
Following the setting presented in Section 3.1, we perform experiments on three datasets (AG, IMDB and SST2), using the Transformer model with varied depths and embedding dimensions: D ∈ {4, 6, 8}, and ED ∈ {128, 256, 512}. The intrinsic dimension and its variation with respect to different layers are shown in Figure 1. The horizontal axis of Figure 1 represents each layer, where emb and layer i denote the embedding layer and the i-th encoder layer, respectively. The vertical axis shows the corresponding ID. Every single line in Figure 1 represents the layer-wise variation of ID under a certain hyper-parameters configuration.
From Figure 1, one can straightforwardly obtain the following observations:
• With the increase of depth, the overall intrinsic dimension appears a downward trend. Generally, the ID may only increase at the first encoder layer (see Figure 1 (c)), and then decreases through the following layers, and reaches the minimum at the last layer. This decrease process of ID across layers can be regarded as a type of dimension reduction for Transformers along with the extraction of effective information for the final classification.
2We also refer https://github.com/lyeoni/nlp-tutorial/tree/master/text-classification-transformer for some codes.
• When fixing the model depth on a certain dataset, we find that the intrinsic dimension basically increases with the embedding dimension. In fact, according to Figure 1, the above conclusion always holds for all depths and datasets. This result is natural at the first glance, since when increasing the embedding dimension, the (initial) intrinsic dimension generally increases as well, which leads to an increment of the terminal ID.
To examine the dimension reduction effect in more detail, we can further check the variation of the ratio of the intrinsic dimension over the embedding dimension and hidden dimensions, which is convenient and straightforward since the last two have been set to be equal. It is shown that the ratio is about O(10^-1) at the embedding layer (approximately 0.1-0.3), while it is notably reduced to O(10^-2) at the final layer.
3.3 CORRELATIONS BETWEEN INTRINSIC DIMENSIONS AND CLASSIFICATION ACCURACY
Since ID characterizes the intrinsic (geometrical) structures of data distribution, and the classification performance directly depends on the final representation extracted by the last hidden layer of Transformers, it is reasonable to believe that the ID of the last hidden layer (terminal ID) is correlated with the predicted classification accuracy.
Therefore, we numerically investigate the relationship between terminal ID and corresponding classification error rate for various configurations of hyper-parameters and datasets, see Figure 2. For robustness, the training costs several independent runs using randomized initialization to ensure the convergence. The horizontal axis in Figure 2 represents the error rate of classification and vertical axis shows the ID of representation learned by the last hidden layer. The dash lines connect the results for different depths under a fixed embedding dimension, with the stars as mean values for both terminal ID and error rates.
From Figure 2, one can observe that although all experiments are performed under the same type of hypothesis space (i.e. Transformers), there are remarkable differences in terms of classification performance and terminal ID along with the model size (the ED herein). As shown by the solid lines in Figure 2, the terminal ID basically increases with the embedding dimension, while the classification error changes in the opposite direction. Interestingly, the uncovered phenomenon for Transformers applied to NLP tasks is opposite to that in (Ansuini et al., 2019), where the terminal ID of CNNs is positively correlated with the classification error rate in image modeling.
The underlying interpretation may be as follows. When the embedding dimension increases, according to the discussion in Section 3.2, we get a larger ID for the representation of input data, resulting in an increment of the terminal ID. Meanwhile, a higher embedding dimension, as well as hidden
dimensions, definitely extend the corresponding hypothesis space for modeling. Hence, it is reasonable to gain a better classification accuracy. Based on this phenomenon, increasing the embedding dimension helps to enhance classification performance via enlarging the intrinsic dimension.
This point may motivate a further extension for practical guidance in applications: one can imagine a principled method to utilize the terminal ID as a posterior “indicator” for generalization. That is, by monitoring the variation of the terminal ID during training, we may be able to obtain better classification performance without using a completely new dataset for both validation and test. This would be particularly beneficial when data is limited in practical applications and hence deserves exploration in future work.
Remark 1 Our investigation shows the inherent nature of Transformer models on textual tasks. For ViT models, the conclusion no longer holds, as shown in Table 1. We notice that both the ViT embedding dimension and the ViT depth have little effect on the ID.
3.4 A PRINCIPAL COMPONENT ANALYSIS VIEWPOINT OF DATA REPRESENTATION
There are many tools to estimate the intrinsic dimension of data representation, such as a series of methods based on principal component analysis (PCA) ((Fukunaga & Olsen, 1971), (Bruske & Sommer, 1998), TwoNN (Facco et al., 2017) and so on). Due to the computational efficiency, the TwoNN algorithm is selected. One can refer to Section 2 for further details.
According to the results shown in Figure 1, we conclude that the intrinsic dimensions of Transformers are much smaller than embedding and hidden dimensions. In this section, we will further illustrate that the data representation learned by Transformers exists on low-dimensional but curved manifolds instead of flat subspaces, hence are incapable of model reduction via linear methods.
To achieve this, we perform the classic PCA (Pearson, 1901) method on the normalized covariance matrix of each layer in Transformers for varied hyper-parameters and all three datasets. Figure 3 shows the results obtained on the last hidden layer. The horizontal axis represents the order of eigenvalues of data representation in a descending sort. The vertical axis shows the value of
corresponding eigenvalues. The green and red vertical dotted lines denote the number of components required to capture 50% and 85% of the variance in data representation. Here, we call the abscissa indicated by the red line as PCA-ID, meaning the “pseudo” intrinsic dimension computed by a direct PCA method.
It is shown that the ID derived from the TwoNN method is much smaller than the PCA-ID. For example, the ID shown in Figure 1 (a) is about 34 for ED = 256 on the AG dataset, while the PCA-ID shown in Figure 3 (a) is about 180 under the same setting, which is 5-6 times larger. Furthermore, the ratio of PCA-ID with respect to the embedding dimension is about 0.7-0.9, which is much larger than that of the ID (0.05-0.15). The great difference between the ID inferred by the TwoNN method and the PCA method shows the strong nonlinearity in the correlations among data samples. Based on the above results, we conclude that the space where the data representation is located is not a linear subspace but a curved manifold, which prevents one from performing basic linear model reduction.
3.5 THE EFFECT OF INTRINSIC DIMENSION REDUCTION WITH RESPECT TO DEPTH
In Transformers as well as other neural network architectures, the model depth always plays an important role. A shallow model may have weak representation ability and poor training performance, while a quite deep model may lead to generalization issues such as overfitting (poor test performance) and requires unacceptable computation and memory cost. In this section, we further investigate the dependence of ID variation on the depth.
According to Figure 1, an ID reduction phenomenon across layers appears. To further track its effect with respect to the model depth, one can naturally focus on the gaps between the IDs of the embedding layers and the last hidden layers. Since the relevant ID results have already been shown in Figure 1, we just summarize them in Tables 2, 3 and 4.3 It is straightforward to make the following observations:
• Fixing the dataset and embedding dimension, the IDs of the embedding layers remain almost the same despite the change of depth.
• Fixing the dataset and embedding dimension, the IDs of the last hidden layers often decrease significantly with the increase of depth.
3Here, the depth 6 case is not included due to space constraints.
• Combining the above two points, the Transformer model shows a larger dimension reduction across layers for deeper architectures.
A direct explanation for the above phenomena is as follows. When fixing the embedding dimension, the learned representation of the input data basically remains consistent, which implies similar IDs at the embedding layers. Meanwhile, according to Figure 1 and the discussion in Section 3.2, the ID reduction effect may strengthen through more layers, i.e. deeper models.
3.6 INSTANCES OF DATA REDUCTION: INFLUENCE OF SEQUENTIAL LENGTHS
In practice, the Transformer model usually possesses massive parameters and hence requires a large dataset to guarantee reasonable training performance, resulting in enormous space and time costs. Therefore, it would be meaningful if one could reduce the size of the training dataset (“data reduction”) without significant damage to the test performance. Motivated by this, we aim to investigate the feasibility and procedure of data reduction in the training of Transformers from the viewpoint of the intrinsic dimension (of the data representation). It is shown that unlike other hyper-parameters such as the embedding dimension, the intrinsic dimension hardly changes with the sequential length of data samples. This suggests potential opportunities to achieve data reduction by appropriately discarding training samples.
As an initial attempt, we first focus on the sequential length of samples in the training dataset. Specifically, given a dataset, we first sort all the sentences used for training by their lengths (i.e. word counts, see the length distributions shown in Figure 4), and then form a “long-set” by selecting the top 80% longest sentences in the training dataset. Similarly, one can form a corresponding “short-set” by selecting the top 80% shortest sentences, while the original training dataset without any reduction is called the “full-set”. In principle, we only remove extreme cases such as very short or very long samples, hopefully without affecting the training performance. Naturally, the test datasets remain unchanged.
We conduct experiments on these three training sub-datasets: long-set, short-set and full-set, for various configurations of hyper-parameters (mainly embedding dimensions) and datasets. For each
Transformer model trained under the above settings, we train and record the IDs of embedding layers as well as the final classification error rates (on the full test datasets) in Table 5.
There is one key phenomenon shown in Table 5. That is, if we check the results in Table 5 row by row, both the IDs of the embedding layers and the classification errors do not change significantly, even though each Transformer model is trained on a different subset of the original training dataset. This helps to verify the validity of data reduction in training, at least with respect to sequential lengths. It would be valuable in applications where sufficient data is unavailable, and hence is worth exploring in future work.
4 CONCLUSION
In this work, we propose a new perspective to understand the mechanism of Transformers, which is related to intrinsic dimensions of data representation. Many interesting phenomena are numerically uncovered to reveal the intricate relationships between intrinsic dimensions and task performance, with respect to hyper-parameters such as depths, embedding dimensions of models and sequential lengths of data. We form a series of empirical conclusions. On one hand, for the influence of hyper-parameters on intrinsic dimensions and classification task performance, it is shown that there are positive correlations among embedding dimensions, intrinsic dimensions and classification accuracy. In addition, the intrinsic dimension reduction across layers exists, which can be strengthened by deepening architectures. On the other hand, for the interaction between model and data (i.e. the modeling effect), we give numerical evidence that the data representation learned by Transformers lies on curved manifolds. Furthermore, data reduction in training can be valid, which possibly motivates efficient methods to utilize data and applicable guidance for practical learning. This deserves further exploration in the future. Of course, the outlook is not limited to these directions. We intend to extend the current research on classification to more general settings, particularly generative tasks. Moreover, the present work focuses on the basic Transformer network, and it is also necessary to further investigate the most commonly used architectures such as typical pre-trained language models. Apart from that, it is our goal to perform quantitative analysis to fill the theoretical gaps between Transformers and intrinsic dimensions herein. | 1. What is the focus of the paper regarding transformers and their performance?
2. What are the strengths and weaknesses of the proposed approach in terms of its empirical results and theoretical support?
3. Do you have any concerns or questions about the measurement of distance and dimensionality in the context of language tasks?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper numerically investigates the intrinsic dimension (ID) of transformers. They perform a series of experiments and conclude that a higher ID implies a lower classification error rate and that the ID of a layer decreases as the depth increases. They also look at the geometrical structures learned by transformers for representing the data and at the effect of sequence lengths on the ID and on a task's performance.
Strengths And Weaknesses
Strengths: It is nice to have empirical results that support theoretically obvious mathematical conclusions, I guess.
Weakness:
For language tasks, it is not clear how to measure the closest and second-closest sample if the samples are text. Is the embedding used with an L2 norm, etc.? Additionally, if the dimensionality of the samples is very large (even 128 creates a potentially high-dimensional space), is measuring distance useful when "everything is far away from everything"? The method description amounts to "we applied some other paper, here is the link to that paper"; more detail is needed. Are there any nuances to applying the TwoNN method to transformers vs. CNNs?
Isn't this obvious? "intrinsic dimension basically increases with the embedding dimension" - If the original space is larger, it is harder to "summarize" it in a lower dimensional space and would require more dimensions
"increase of depth, the overall intrinsic dimension appears a downward trend" - from the self attention mechanism, later layers are weighted combinations of earlier layers, so this is also relatively obvious
For Figure 2, is the error being measured from predictions made by a model using the ID features? Wouldn't error naturally increase if dimension reduction is applied too aggressively and information is lost?
Clarity, Quality, Novelty And Reproducibility
Clarity: Some of the grammar is a bit strange and confusing, but overall it is fine.
Originality / Quality: Almost zero novelty; this paper is literally an application of a pre-existing method.
ICLR | Title
HyperTransformer: Attention-Based CNN Model Generation from Few Samples
Abstract
In this work we propose a HyperTransformer, a transformer-based model that generates all weights of a CNN directly from support samples. This approach uses a high-capacity model for encoding task-dependent variations in the weights of a smaller model. We show on multiple few-shot image classification benchmarks with different model sizes and datasets that our method beats or matches the performance of many traditional learning methods. Specifically, we show that for very small target architectures, our method can generate significantly better-performing models than traditional few-shot learning approaches. For larger models we discover that generating the last layer alone allows us to produce competitive or better results while remaining end-to-end differentiable. Finally, we extend our approach to a semi-supervised regime utilizing unlabeled samples in the support set, further improving few-shot performance in the presence of unlabeled data.
1 INTRODUCTION
In few-shot learning, a conventional machine learning paradigm of fitting a parametric model to training data is taken to a limit of extreme data scarcity where entire categories are introduced with just one or few examples. A generic approach to solving this problem uses training data to identify parameters φ of a learner aφ that given a small batch of examples for a particular task (called a support set) can solve this task on unseen data (called a query set).
One broad family of few-shot image classification methods frequently referred to as metric-based learning, relies on pretraining an embedding eφ(·) and then using some distance in the embedding space to label query samples based on their closeness to known labeled support samples. These methods proved effective on numerous benchmarks (see Tian et al. (2020) for review and references), however the capabilities of the learner are limited by the capacity of the architecture itself, as these methods try to build a universal embedding function.
On the other hand, optimization-based methods such as seminal MAML algorithm (Finn et al., 2017) can fine-tune the embedding eφ by performing additional SGD updates on all parameters φ of the model producing it. This partially addresses the constraints of metric-based methods by learning a new embedding for each new task. However, in many of these methods, all the knowledge extracted during training on different tasks and describing the learner aφ still has to “fit” into the same number of parameters as the model itself. Such limitation becomes more severe as the target models get smaller, while the richness of the task set increases.
In this paper we propose a new few-shot learning approach that allows us to decouple the complexity of the task space from the complexity of individual tasks. The main idea is to use the transformer model (Vaswani et al., 2017) that given a few-shot task episode generates an entire inference model by producing all model weights in a single pass. This allows us to encode the intricacies of the available training data inside the transformer model, while still producing specialized tiny models that can solve individual tasks. Reducing the size of the generated model and moving the computational overhead to the transformer-based weight generator, we can lower the cost of the inference on new images. This can dramatically reduce the overall computation cost in cases, where the tasks change infrequently and hence the weight generator is only used sporadically.
We start by observing that the self-attention mechanism is well suited to be an underlying mechanism for a few-shot CNN weight generator. In contrast with earlier CNN- (Zhao et al., 2020) or
BiLSTM-based approaches (Ravi & Larochelle, 2017), the vanilla1 transformer model is invariant to sample permutations and can handle unbalanced datasets with a varying number of samples per category. Furthermore, we demonstrate that a single-layer self-attention model can replicate a simplified gradient-descent-based learning algorithm. Using a transformer-based model to generate the logits layer on top of a conventionally learned embedding, we achieve competitive results on several common few-shot learning benchmarks. Varying transformer parameters, we demonstrate that this high performance can be attributed to the additional capacity of the transformer model, which decouples its complexity from that of the generated CNN.
We then extend our method to support unlabeled samples by using a special input token concatenated to unlabeled samples to encode unknown labels. In our experiments, we observe that using transformers with two or more layers, we achieve better performance by adding unlabeled data into the support set. We explain our results in Section 4.3 where we show that such a transformer with at least two layers can encode the nearest-neighbor style algorithm that associates unlabeled samples with similarly labeled examples. In essence, by training the weight generator to produce CNN models with best possible performance on a query set, we teach the transformer to utilize unlabeled samples without having to manually introduce additional optimization objectives.
We also show that by generating all layers of the CNN model we can improve both the training and the test accuracies of CNN models below a certain size. The training accuracy can be viewed as a capability of the generated CNN model to adapt to tasks seen at training time, whereas the test accuracy computed on unseen categories characterizes the generalization capability of this model adaptation mechanism. We empirically demonstrate the expected increase of the model training and test accuracies with the increase of the layer size and the number of generated layers (see Figure 3). Interestingly, generation of the logits layer alone appears to be “sufficient” above a certain model size threshold. This threshold is expected to depend on the variability and the complexity of the training tasks. We conjecture that this might reflect the fact that models are sufficiently expressive for the benchmarks we considered, or that additional regularization is needed to prevent overfitting of the meta-learning models.
Finally, in addition to decoupling the complexity of the task distribution from the complexity of individual tasks, another important advantage of our method is that learning is done end-to-end, without relying on complex nested gradient optimization or other meta-learning approaches that require a large number of unroll steps.
The paper is structured as follows. In Section 2, we discuss the few-shot learning problem setup and highlight related work. Section 3 introduces our approach, discusses the motivation for choosing an attention-based model and shows how our approach can be used to meta-learn semi-supervised learning algorithms. In Section 4, we discuss our experimental results. Finally, in Section 5, we provide concluding remarks.
2 PROBLEM SETUP AND RELATED WORK
2.1 FEW-SHOT LEARNING
The main goal of a few-shot learning algorithm is to use a set of training tasks Ttrain for finding a learner aφ parameterized by φ that given new task domains can train to recognize novel classes using just a few samples per each class. The learner aφ can be thought of as a function that maps task description T = {(xi, ci)}ti=1 containing k labeled input samples {xi, ci} from n classes, to the weights θ = aφ(T ) of a trained model f(x;θ). The parameters φ are meta-optimized to maximize the performance of the model f(x; aφ(TS)) generated using a support set TS with x drawn from a query set TQ. Each task T = (TS , TQ) is randomly drawn from a space of training tasks Ttrain. Typically, TS and TQ are generated by first randomly choosing several distinct classes from the training set and then sampling examples without replacement from these classes to generate TS and TQ. In a classical “n-way-k-shot” setting, n is the number of classes randomly sampled in each episode, and k is the number of samples for each class in the support set TS .
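As a concrete illustration of the episode construction described above, here is a small sketch of n-way-k-shot sampling; the dataset layout (a dict mapping class labels to lists of samples) and the query-set size are illustrative assumptions.

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, q_queries=15):
    """Sample one n-way-k-shot episode: a support set T_S and a query set T_Q.

    data_by_class: dict mapping class label -> list of samples (assumed layout).
    """
    classes = random.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        # Draw k + q distinct samples from this class without replacement.
        samples = random.sample(data_by_class[cls], k_shot + q_queries)
        support += [(x, episode_label) for x in samples[:k_shot]]
        query += [(x, episode_label) for x in samples[k_shot:]]
    return support, query
```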
The quality of a particular few-shot learning algorithm is typically evaluated using a separate test space of tasks Ttest. By forming Ttest from novel classes unseen at training time, we can evaluate
1 Without attention masking or positional encodings.
generalization of different learners aφ. Best algorithms are expected to capture the structure present in the training set and to perform well on novel concepts. This structure may, for example, include certain properties of the distributions pc(x) with c being the class label, or the presence of particular discriminative (or alternatively invariant) features in the tasks from Ttrain.
2.2 RELATED WORK
Few-shot learning received a lot of attention from the deep learning community and while there are hundreds of few-shot learning methods, several common themes emerged in the past years. Here we outline several existing approaches, show how they relate to our method and discuss the prior work related to it.
Metric-Based Learning. One family of approaches involves mapping input samples into an embedding space and then using some nearest neighbor algorithm that relies on the computation of distances from a query sample embedding to the embedding computed using support samples with known labels. The metric used to compute the distance can either be the same for all tasks, or can be task-dependent. This family of methods includes, for example, such methods as Siamese networks (Koch et al., 2015), Matching Networks (Vinyals et al., 2016), Prototypical Networks (Snell et al., 2017), Relation Networks (Sung et al., 2018) and TADAM (Oreshkin et al., 2018). It has recently been argued (Tian et al., 2020) that methods based on building a powerful sample representation can frequently outperform numerous other approaches including many optimization-based methods. However, such approaches essentially amount to the “one-model solves all” approach and thus require larger models than needed to solve individual tasks.
Optimization-Based Learning. An alternative approach that can adapt the embedding to a new task is to incorporate optimization within the learning process. A variety of such methods are based on the approach called Model-Agnostic Meta-Learning, or MAML (Finn et al., 2017). In MAML, θ = aφ is obtained by initializing a DNN at θ0 = φ and then performing one or more gradient descent updates on a classification loss function L, i.e., computing θk+1 = θk − γ · (∂L/∂θ)(T ;θk). This approach was later refined (Antoniou et al., 2019) and built upon giving rise to Reptile (Nichol et al., 2018), LEO (Rusu et al., 2019) and others. One limitation of various MAML-inspired methods is that the knowledge about the set of training tasks Ttrain is distilled into parameters φ that have the same dimensionality as the model parameters θ. Therefore, for a very lightweight model f(x;θ) the capacity of the task-adaptation learner aφ is still limited by the size of θ. Methods that use parameterized preconditioners that otherwise do not impact the model f(x;θ) can alleviate this issue, but as with MAML, such methods can be difficult to train (Antoniou et al., 2019).
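For reference, a minimal sketch of the MAML-style inner update θk+1 = θk − γ · ∂L/∂θ for a single linear classifier is shown below; the model, loss, and single inner step are simplifying assumptions for illustration, not a full MAML implementation.

```python
import torch
import torch.nn.functional as F

def maml_inner_step(theta, support_x, support_y, gamma=0.1):
    """One gradient-descent update of task-specific weights theta (a linear
    classifier here) on the support-set cross-entropy loss; the graph is kept
    so the meta-initialization can later be optimized through the update."""
    logits = support_x @ theta.t()
    loss = F.cross_entropy(logits, support_y)
    (grad,) = torch.autograd.grad(loss, theta, create_graph=True)
    return theta - gamma * grad

phi = torch.randn(5, 64, requires_grad=True)        # meta-learned initialization
x_s, y_s = torch.randn(25, 64), torch.randint(0, 5, (25,))
theta_adapted = maml_inner_step(phi, x_s, y_s)       # task-adapted weights
```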
Weight Modulation and Generation. The idea of using a task specification to directly generate or modulate model weights has been previously explored in the generalized supervised learning context (Ratzlaff & Li, 2019) and in specific language models (Mahabadi et al., 2021; Tay et al., 2021; Ye & Ren, 2021). Some few-shot learning methods described above also employ this approach and use task-specific generation or modulation of the weights of the final classification model. For example, in LGM-Net (Li et al., 2019b) the matching network approach is used to generate a few layers on top of a task-agnostic embedding. Another approach abbreviated as LEO (Rusu et al., 2019) utilized a similar weight generation method to generate initial model weights from the training dataset in a few-shot learning setting, much like what is proposed in this article. However, in (Rusu et al., 2019), the generated weights were also refined using several SGD steps similar to how it is done in MAML. Here we explore a similar idea, but largely inspired by the HYPERNETWORK approach (Ha et al., 2017), we instead propose to directly generate an entire task-specific CNN model. Unlike LEO, we do not rely on pre-computed embeddings for images and generate the model in a single step without additional SGD steps, which simplifies and stabilizes training.
Transformers in Computer Vision and Few-Shot Learning. Transformer models (Vaswani et al., 2017) originally proposed for natural language understanding applications had since become a useful tool in practically every subfield of deep learning. In computer vision, transformers have recently seen an explosion of applications ranging from state-of-the-art image classification results (Dosovitskiy et al., 2021; Touvron et al., 2021) to object detection (Carion et al., 2020; Zhu et al., 2021), segmentation (Ye et al., 2019), image super-resolution (Yang et al., 2020), image generation
(Chen et al., 2021) and many others. There are also several notable applications in few-shot image classification. For example, in Liu et al. (2021), the transformer model was used for generating universal representations in the multi-domain few-shot learning scenario. And closely related to our approach, in Ye et al. (2020), the authors proposed to accomplish embedding adaptation with the help of transformer models. Unlike our method that generates an entire end-to-end image classification model, this approach applied a task-dependent perturbation to an embedding generated by an independent task-agnostic feature extractor. In (Gidaris & Komodakis, 2018), a simplified attention-based model was used for the final layer generation.
3 OUR APPROACH
In this section, we describe our approach to few-shot learning that we call a HYPERTRANSFORMER (HT) and justify the choice of the self-attention mechanism as its basis.
3.1 FEW-SHOT LEARNING MODEL
A learner $a_\phi$ (as introduced in Section 2.1) is the core of a few-shot learning algorithm, and in this paper we choose $a_\phi$ to be a transformer-based model that takes a task description $T = \{(x_i, c_i)\}_{i=1}^{t}$ as input and produces weights for some or all layers $\{\theta_\ell \mid \ell \in [1, L]\}$ of the generated CNN model. Layers whose weights are not generated are learned end-to-end as ordinary task-agnostic variables. In our experiments, generated CNN models contain a set of convolutional layers and a final fully-connected logits layer. Here $\theta_\ell$ are the parameters of the $\ell$-th layer and $L$ is the total number of layers, including the final logits layer (with index $L$). The weights are generated layer by layer, starting from the first layer: $\theta_1(T) \to \theta_2(\theta_1; T) \to \cdots \to \theta_L(\theta_{1,\dots,L-1}; T)$. Here we use $\theta_{a,\dots,b}$ as shorthand for $(\theta_a, \theta_{a+1}, \dots, \theta_b)$.
Shared and local features. The local features at layer $\ell$ are produced by a convolutional feature extractor $h^{\ell}_{\phi_l}(z^{\ell}_i)$ applied to the activations of the previous layer, $z^{\ell}_i := f_{\ell-1}(x_i; \theta_{1,\dots,\ell-1})$ for $\ell > 1$ and $z^{1}_i := x_i$. In other words, the transformer for layer $\ell$ receives

$$I_\ell := \left\{\left(s_{\phi_s}(x_i),\; h^{\ell}_{\phi_l}(f_{\ell-1}(x_i; \theta_{1,\dots,\ell-1})),\; c_i\right)\right\}_{i=1,\dots,n}.$$
The intuition behind the local feature extractor is that the choice of the layer weights should primarily depend on the inputs received by this layer. The shared features, on the other hand, are the same
for all layers and are produced by a separate trainable convolutional neural network sφs(xi). Their purpose is to modulate each layer’s weight generator with a global high-level sample embedding that, unlike the local embedding, is independent of the generated weights and is also fully shared between all layer generators.
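For illustration, the layer-by-layer generation described above could be organized roughly as follows; the generator, feature-extractor and layer-application callables are hypothetical stand-ins for the components defined in this section, not the actual implementation.

```python
def generate_cnn_weights(support_images, support_labels, layer_generators,
                         shared_extractor, local_extractors, apply_layer):
    """Sketch of sequential weight generation theta_1(T) -> theta_2(theta_1; T) -> ...

    All callables are assumed stand-ins: layer_generators[l] is the transformer
    generating the weights of layer l, shared_extractor/local_extractors produce
    the shared and local features, apply_layer(x, theta) runs one generated layer.
    """
    shared = shared_extractor(support_images)      # same features for every layer
    activations = support_images                   # z^1_i := x_i
    thetas = []
    for layer_idx, generator in enumerate(layer_generators):
        local = local_extractors[layer_idx](activations)
        # The layer-l transformer sees shared features, local features and the
        # support labels, and emits the weights theta_l of the current layer.
        theta_l = generator(shared, local, support_labels)
        thetas.append(theta_l)
        # Activations produced with theta_l feed the next layer's local features.
        activations = apply_layer(activations, theta_l)
    return thetas
```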
Encoding and decoding transformer inputs and outputs. In our experiments we considered several different architectures of the transformer model, different ways of feeding I` into it and different ways of converting transformer outputs into the CNN weights.
Input samples were encoded by concatenating local and shared features from I` to trainable label embeddings ξ(c) with ξ : [1, n]→ Rd. Here n is the number of classes per episode and d is a chosen size of the label encoding. Note that the label embeddings do not contain semantic information, but rather act as placeholders to differentiate between distinct classes.
Along with the input samples, the sequence passed to the transformer was also populated with special learnable placeholder tokens2, each associated with a particular slice of the to-be-generated weight tensor. After the entire input sequence was processed by the transformer, we read out the model outputs associated with the weight-slice placeholder tokens and assembled the output weight slices into the final weight tensors (see Fig. 2). In our experiments we also considered two different ways of encoding $k \times k \times n_{\text{input}} \times n_{\text{output}}$ convolutional kernels: (a) generating $n_{\text{output}}$ weight slices, each output token having a dimension of $k^2 \times n_{\text{input}}$ (we call it “output allocation”), or (b) $k^2$ weight slices of size $n_{\text{input}} \times n_{\text{output}}$ (“spatial allocation”). We show these results in the Supplementary Materials.
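A sketch of how the transformer input sequence could be assembled from sample tokens and weight-slice placeholders is given below; the dimensions, zero-padding and the use of random stand-ins for learnable placeholder vectors are illustrative assumptions.

```python
import numpy as np

def build_transformer_input(shared_feats, local_feats, label_embs,
                            num_slices, slice_token_dim):
    """Sketch: one token per support sample, built by concatenating
    [shared features | local features | label embedding], followed by
    placeholder tokens (random stand-ins for learnable vectors), each
    zero-padded to the width of the sample tokens."""
    sample_tokens = np.concatenate([shared_feats, local_feats, label_embs], axis=1)
    token_dim = sample_tokens.shape[1]
    placeholders = np.zeros((num_slices, token_dim))
    placeholders[:, :slice_token_dim] = np.random.randn(num_slices, slice_token_dim)
    return np.concatenate([sample_tokens, placeholders], axis=0)

seq = build_transformer_input(np.random.randn(5, 32), np.random.randn(5, 16),
                              np.random.randn(5, 8), num_slices=4, slice_token_dim=8)
```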
Transformer model. The input discussed above was passed through a sequence of transformer encoder layers and the output weight tokens were then concatenated into a full weight tensor (see Fig. 2). Experiments with alternative architectures employing transformer decoders are discussed in Supplementary Materials.
Training the model. The weight generation model uses the support set to produce the weights of some or all CNN model layers. This generated CNN model then processes the query set samples and the cross-entropy classification loss is computed. The weight generation parameters φ are then learned by optimizing this loss function using stochastic gradient descent.
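A rough sketch of this training step is given below; `hyper_transformer` and `run_generated_cnn` are hypothetical placeholders for the weight generator and the generated CNN, and the optimizer choice is an assumption.

```python
import torch.nn.functional as F

def meta_training_step(hyper_transformer, run_generated_cnn, optimizer,
                       support_x, support_y, query_x, query_y):
    """One meta-training step: generate CNN weights from the support set,
    score the query set with the generated CNN, and update the generator
    parameters phi by backpropagating the query-set cross-entropy loss."""
    generated_weights = hyper_transformer(support_x, support_y)    # a_phi(T_S)
    query_logits = run_generated_cnn(query_x, generated_weights)   # f(x; theta)
    loss = F.cross_entropy(query_logits, query_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```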
3.2 REASONING BEHIND THE SELF-ATTENTION MECHANISM
The choice of self-attention mechanism for the weight generator is not arbitrary. One motivating reason behind this choice is that the output produced by generator with the basic self-attention is by design invariant to input permutations, i.e., permutations of samples in the training dataset. This also makes it suitable for processing unbalanced batches and batches with a variable number of samples (see Sec. 4.3). Now we show that self-attention can express several intuitive algorithms, thus further motivating its utility.
2 Each token is a learnable d-dimensional vector padded with zeros to the size of the input sample token.
Supervised learning. Self-attention in its rudimentary form can implement a method similar to cosine-similarity-based sample weighting encoded in the logits layer3 with weights $W$:

$$W_{ij} \sim \sum_{m=1}^{n} y^{(m)}_i e^{(m)}_j, \qquad (1)$$
which can also be viewed as a result of applying a single gradient descent step on the cross-entropy loss (see Appendix A). Here $n$ is the total number of support-set samples $\{x^{(m)} \mid m \in [1, n]\}$ and $e^{(m)}, y^{(m)}$ are the embedding vector and the one-hot label corresponding to $x^{(m)}$.
The approach can be outlined as follows (see Appendix A for details). The self-attention operation receives encoded input samples $I_k = (\xi(c_k), e_k)$ and weight placeholders $(\mu^{(i)}, 0)$ as its input. If each weight slice $W_{i,\cdot}$, represented by a particular token $(\mu^{(i)}, 0)$, produces a query $Q_i$ that only attends to keys $K_k$ corresponding to samples $I_k$ whose labels $c_k$ match $i$, and the values of these samples are set to their embeddings $e_k$, then the self-attention operation will essentially average the embeddings of all samples assigned label $i$, thus matching the first term of $W$ in equation 1.
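For concreteness, a small numerical sketch of this construction is shown below: summing support embeddings class by class produces logits weights of the form in equation 1. The embedding dimension, sample counts and random data are illustrative assumptions.

```python
import numpy as np

n_classes, emb_dim = 5, 16
embeddings = np.random.randn(25, emb_dim)        # e^(m): support-sample embeddings
labels = np.repeat(np.arange(n_classes), 5)      # class index of each support sample
one_hot = np.eye(n_classes)[labels]              # y^(m): one-hot labels

# W_ij ~ sum_m y_i^(m) e_j^(m): row i of W is the summed embedding of all support
# samples labeled i -- exactly what a weight-slice token obtains by attending only
# to samples of its own class and averaging their values.
W = one_hot.T @ embeddings                       # shape (n_classes, emb_dim)
query_embedding = np.random.randn(emb_dim)
logits = W @ query_embedding                     # similarity-based class scores
```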
Semi-supervised learning. A similar self-attention mechanism can also be designed to produce logits layer weights when the support set contains some unlabeled samples. The mechanism first propagates classes of labeled samples to similar unlabeled samples. This can be achieved by choosing the queries and the keys of the samples to be proportional to their embeddings. The attention map for sample i would then be defined by a softmax of ei · ej , or in other words would be proportional to exp(ei · ej). Choosing sample values to be proportional to the class tokens, we can then propagate a class of a labeled sample ej to a nearby unlabeled sample with embedding ei, for which ei · ej is sufficiently large. If the self-attention module is “residual”, i.e., the output of the self-attention operation is added to the original input, like it is done in the transformer model, then this additive update would essentially “mark” an unlabeled sample by the propagated class (albeit this term might have a small norm). The second self-attention layer can then be designed similarly to the supervised case. If label embeddings are orthogonal, then even a small component of a class label can be sufficient for a weight slice to attend to it thus adding its embedding to the final weight.
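The two-step mechanism described above can be mimicked numerically as follows: a softmax over embedding similarities propagates label information from labeled to unlabeled support samples, after which class-wise aggregation produces the logits weights. The dimensions, temperature and random data are illustrative assumptions.

```python
import numpy as np

def propagate_and_aggregate(embeddings, labels, n_classes, temperature=1.0):
    """Sketch of the two-step mechanism: labels[i] is the class index of a
    labeled support sample, or -1 for an unlabeled one."""
    y = np.zeros((len(labels), n_classes))
    labeled = labels >= 0
    y[labeled, labels[labeled]] = 1.0

    # Step 1 (first attention layer): propagate label information between
    # similar samples via a softmax over embedding similarities; the residual
    # update "marks" unlabeled samples with the classes of their neighbours.
    sims = embeddings @ embeddings.T / temperature
    attn = np.exp(sims - sims.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    y_marked = y + attn @ y

    # Step 2 (second attention layer): class-wise aggregation of embeddings,
    # weighted by the (propagated) label components, into logits-layer weights.
    return y_marked.T @ embeddings               # shape (n_classes, emb_dim)

emb = np.random.randn(8, 16)
lbl = np.array([0, 1, 2, -1, -1, 0, 1, -1])
W = propagate_and_aggregate(emb, lbl, n_classes=3)
```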
4 EXPERIMENTS
In this section, we present HYPERTRANSFORMER (HT) experimental results and discuss the implications of our empirical findings.
4.1 DATASETS AND SETUP
Datasets. For our experiments, we chose several most widely used few-shot datasets including OMNIGLOT, MINIIMAGENET and TIEREDIMAGENET. MINIIMAGENET contains a relatively small set of labels and is arguably the simplest to overfit to. Because of this and since in many recent publications MINIIMAGENET was replaced with a more realistic TIEREDIMAGENET datasets, we conduct many of our experiments and ablation studies using OMNIGLOT and TIEREDIMAGENET datasets.
Models. While HT can support generation of arbitrarily large weight tensors by flattening and slicing the entire weight tensor, in this work, we limit our experiments to HT models that generate slices encoding individual output channels directly. For the target models we focus on 4-layer architectures identical to those used in MAML++ and numerous other papers. Generating larger architectures such as RESNET and WIDERESNET will be the subject of our future work. More specifically, we used a sequence of four 3× 3 convolutional layers with the same number of output channels followed by batch normalization layers, nonlinearities and max-pooling stride-2 layers. All BN variables were learned and not generated4.
4.2 SUPERVISED RESULTS WITH LOGITS LAYER GENERATION
Our first experiments compared the proposed HT approach with MAML++ on OMNIGLOT, MINIIMAGENET and TIEREDIMAGENET datasets (see Table 1). Interestingly, for models that had more than 8 channels per layer, the results obtained with HT generating the final logits layer proved to be nearly identical to those where HT was used to generate all CNN layers (see Section 4.4). In our experiments the number of local features was chosen to be the same as the number of model channels and the shared feature had a dimension of 32 regardless of the model size. The shared feature extractor was a simple 4-layer convolutional model with batch normalization and stride-2 3× 3 convolutional kernels. Local feature extractors were two-layer convolutional models with outputs of both layers averaged over the spatial dimensions and concatenated to produce the final local feature. For all tasks except 5-shot MINIIMAGENET our transformer had 3 layers, used a simple sequence of encoder layers (Figure 2b-i) and used the “output allocation” of weight slices (Section 3.1). Experiments with the encoder-decoder transformer architecture can be found in Appendix D. The 5-shot MINIIMAGENET results presented in Table 1 were obtained with a simplified transformer model that had 1 layer, and did not have the final fully-connected layer and nonlinearity. This proved necessary for reducing model overfitting of this smaller dataset. Other model parameters are described in detail in Appendix B.
Results obtained with our method in a few-shot setting (see Table 1) are frequently better than MAML++ results, especially on smaller models, which can be attributed to parameter disentanglement between the weight generator and the CNN model. While the improvement over MAML++ gets smaller with the growing size of the generated CNN, our results on MINIIMAGENET and TIEREDIMAGENET appear to be comparable to those obtained with numerous other advanced methods (see Table 2). Discussion of additional comparisons to LGM-Net (Li et al., 2019b) and LEO (Rusu et al., 2019) using a different setup (which is why they could not be included in Table 2) and showing an almost identical performance can be found in Appendix C. While the learned HT model could perform a relatively simple calculation on high-dimensional sample features, perhaps not too different from that in equation 1, our brief analysis of the parameters space (see Appendix D) shows that using simpler 1-layer transformers leads to a modest decrease of the test accuracy and a greater drop in the training accuracy for smaller models. We observed that the results in Table 1 could be improved even further by increasing the feature sizes (see Appendix D), but we did not pursue an exhaustive optimization in the parameter space.
It is worth noting that overfitting leading to a good performance on tasks composed of seen categories, but poor generalization to unseen categories, may still have practical applications. Specifically, if the actual task relies on classes seen at the training time, we can generate a model customized to a particular task in a single pass without having to perform any SGD steps to fine-tune the model. This is useful if, for example, the client model needs to be adjusted to a particular set of known classes most widely used by this client. We also anticipate that with more complex data augmentations and additional synthetic tasks, more complex transformer-based models can further improve their performance on the test set and a deeper analysis of such techniques will be the subject of our future work.
3 Here we assume that the embeddings e are unbiased, i.e., $\langle e_i \rangle = 0$.
4 Experiments with generated BN variables did not show much difference with this simpler approach.
4.3 SEMI-SUPERVISED RESULTS WITH LOGITS LAYER GENERATION
In our approach, the weight generation model is trained by optimizing the loss calculated on the query set and therefore any additional information about the task, including unlabeled samples, can be provided as a part of the support set to the weight generator without having to alter the optimization objective. This allows us to tackle a semi-supervised few-shot learning problem without making any substantial changes to the model or the training approach. In our implementation, we simply added unlabeled samples into the support set and marked them with an auxiliary learned “unlabeled” token ξ̂ in place of the label encoding ξ(c).
Since OMNIGLOT is typically characterized by very high accuracies in the 97% − 99% range, we conducted all our experiments with TIEREDIMAGENET. Results of our experiments presented in Table 3 show that adding unlabeled samples leads to a substantial increase of the final test accuracy. Furthermore, notice that the model achieves its best performance when the number of transformer layers is greater than one. This is consistent with the basic mechanism discussed in Section 3.2 and requiring two self-attention layers to function.
It is worth noticing that adding more unlabeled samples into the support set makes our model more difficult to train and it gets stuck producing CNNs with essentially random outputs. Our solution was to introduce unlabeled samples incrementally during training. This was implemented by masking out some unlabeled samples in the beginning of the training and then gradually reducing the masking probability over time.
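A minimal sketch of such an incremental unmasking schedule is shown below: the probability of masking out an unlabeled support sample starts high and decays over training. The linear schedule and the specific probabilities are illustrative assumptions, not the schedule used in the experiments.

```python
import random

def unlabeled_keep_mask(num_unlabeled, step, warmup_steps=10000, max_mask_prob=0.9):
    """Return a boolean keep-mask over unlabeled support samples; early in
    training most of them are dropped, later all of them are kept."""
    progress = min(step / warmup_steps, 1.0)
    mask_prob = max_mask_prob * (1.0 - progress)   # decays linearly to zero
    return [random.random() >= mask_prob for _ in range(num_unlabeled)]
```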
4.4 GENERATING MORE MODEL LAYERS
We demonstrated that HT model can outperform MAML++ on common few-shot learning datasets by generating just the last logits layer of the CNN model. But under what conditions can it be advantageous to generate additional CNN layers? As we show here, generating all CNN layers with a multi-layer transformer can lead to a significant performance improvement on CNN models below a particular size.
We demonstrated this by conducting experiments in which some layers were generated and some were learned (usually the first few layers of the CNN). For the OMNIGLOT dataset, we saw that both training and test accuracies for 4-channel and 6-channel CNNs increased with the number of generated layers (see Fig. 3 and Table 4 in the Appendix), and that using more complex transformer models with 2 or more encoder layers improved both training and test accuracies of fully generated CNN models of this size (see Appendix D). However, as the size of the model increased and reached 8 channels, generating the last logits layer alone proved sufficient for getting the best results on OMNIGLOT and TIEREDIMAGENET.
The positive effect of generating convolutional layers can also be observed in shallow models with large convolutional kernels with large strides where the model performance can be much more sensitive to a proper choice of model weights. For example, in a 16-channel model with two convolutional kernels of size 9 and the stride of 4, the overall test accuracy for a model generating only the final convolutional layer was about 1% lower than the accuracies of the models generating at least one additional convolutional filter. We also speculate that as the complexity of the task increases, generating some or all intermediate network layers should become more important for achieving optimal performance. Verifying this hypothesis and understanding the “boundary” in the model space between two regimes where a static backbone is sufficient, or no longer sufficient will be the subject of our future work.
5 CONCLUSIONS
In this work, we proposed the HyperTransformer (HT), a novel transformer-based model that generates all weights of a CNN model directly from a few-shot support set. This approach allows us to use a high-capacity model for encoding task-dependent variations in the weights of a smaller model. We demonstrated that, generating the last logits layer alone, the transformer-based weight generator beats or matches the performance of multiple traditional learning methods on several few-shot benchmarks. More importantly, we showed that HT can be straightforwardly extended to handle unlabeled samples that might be present in the support set, and our experiments demonstrate a considerable few-shot performance improvement in the presence of unlabeled data. Finally, we explored the impact of the transformer-encoded model diversity in CNN models of different sizes. We used HT to generate some or all convolutional kernels and biases and showed that, for sufficiently small models, adjusting all model parameters further improves their few-shot learning performance. | 1. What is the focus and contribution of the paper on few-shot image classification?
2. What are the strengths of the proposed approach, particularly in terms of increasing the learning capacity?
3. What are the weaknesses of the paper, especially regarding the novelty and evaluation of the proposed method?
4. Do you have any concerns about the presentation and clarity of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This work presents the use of a hybrid CNN-Transformer model for few-shot image classification. Specifically, the paper applies encoding and encoding-decoding transformers between convolution layers to increase the learning capacity. The paper also includes a Conv-based hybrid model and uses Omniglot, miniImageNet, and tieredImageNet for evaluations.
Review
Strengths:
– The overall idea of the paper to increase the learning capacity looks interesting.
Weaknesses:
– I think the novelty of the current version of the paper is incremental, and adding transformer layers between Conv4 would likely increase the Conv4’s capacity due to the over-parameterization.
– While respecting the effort of the author(s) in evaluating their idea on tieredImageNet, I think additional large datasets are needed instead of Omniglot or miniImageNet when measuring the effectiveness of a model with higher capacity. Here, deep backbones (such as ResNet) are required in the evaluations. Of course, an appropriate regularization technique should also be considered when increasing the model capacity with deeper backbones.
– The presentation of the paper is not clear in some parts of the paper. For example, there are some grammatical errors in the paper (e.g. opening sentence of section 3). I also had problems with understanding some of the statements. For example, the first sentence of the second paragraph in section 1 “, relies on pretraining to a low-dimensional embedding...“. We know that some metric-based methods (like ProtoNet) can be applied to the flatten output of ResNet12 (resulting in high dimensionality) and still work well. |
ICLR | Title
HyperTransformer: Attention-Based CNN Model Generation from Few Samples
Abstract
In this work we propose a HyperTransformer, a transformer based model that generates all weights of a CNN network directly from support samples. This approach uses a high-capacity model for encoding task-dependent variations in the weights of a smaller model. We show on multiple few-shot image classification benchmarks with different model sizes and datasets that our method beats or matches the performance of many traditional learning methods. Specifically, we show that for very small target architectures, our method can generate significantly better performing models than traditional few-shot learning approaches. For larger models we discover that generating the last layer alone allows us to produce competitive or better results while being end-to-end differentiable. Finally, we extend our approach to a semi-supervised regime utilizing unlabeled samples in the support set and further improving few-shot performance in the presence of unlabeled data.
1 INTRODUCTION
In few-shot learning, a conventional machine learning paradigm of fitting a parametric model to training data is taken to a limit of extreme data scarcity where entire categories are introduced with just one or few examples. A generic approach to solving this problem uses training data to identify parameters φ of a learner aφ that given a small batch of examples for a particular task (called a support set) can solve this task on unseen data (called a query set).
One broad family of few-shot image classification methods frequently referred to as metric-based learning, relies on pretraining an embedding eφ(·) and then using some distance in the embedding space to label query samples based on their closeness to known labeled support samples. These methods proved effective on numerous benchmarks (see Tian et al. (2020) for review and references), however the capabilities of the learner are limited by the capacity of the architecture itself, as these methods try to build a universal embedding function.
On the other hand, optimization-based methods such as seminal MAML algorithm (Finn et al., 2017) can fine-tune the embedding eφ by performing additional SGD updates on all parameters φ of the model producing it. This partially addresses the constraints of metric-based methods by learning a new embedding for each new task. However, in many of these methods, all the knowledge extracted during training on different tasks and describing the learner aφ still has to “fit” into the same number of parameters as the model itself. Such limitation becomes more severe as the target models get smaller, while the richness of the task set increases.
In this paper we propose a new few-shot learning approach that allows us to decouple the complexity of the task space from the complexity of individual tasks. The main idea is to use the transformer model (Vaswani et al., 2017) that given a few-shot task episode generates an entire inference model by producing all model weights in a single pass. This allows us to encode the intricacies of the available training data inside the transformer model, while still producing specialized tiny models that can solve individual tasks. Reducing the size of the generated model and moving the computational overhead to the transformer-based weight generator, we can lower the cost of the inference on new images. This can dramatically reduce the overall computation cost in cases, where the tasks change infrequently and hence the weight generator is only used sporadically.
We start by observing that the self-attention mechanism is well suited to be an underlying mechanism for a few-shot CNN weight generator. In contrast with earlier CNN- (Zhao et al., 2020) or
BiLSTM-based approaches (Ravi & Larochelle, 2017), the vanilla1 transformer model is invariant to sample permutations and can handle unbalanced datasets with a varying number of samples per category. Furthermore, we demonstrate that a single-layer self-attention model can replicate a simplified gradient-descent-based learning algorithm. Using a transformer-based model to generate the logits layer on top of a conventionally learned embedding, we achieve competitive results on several common few-shot learning benchmarks. Varying transformer parameters we demonstrate that this high performance can be attributed to additional capacity of the transformer model that decouples its complexity from that of the generated CNN.
We then extend our method to support unlabeled samples by using a special input token concatenated to unlabeled samples to encode unknown labels. In our experiments, we observe that using transformers with two or more layers, we achieve better performance by adding unlabeled data into the support set. We explain our results in Section 4.3 where we show that such a transformer with at least two layers can encode the nearest-neighbor style algorithm that associates unlabeled samples with similarly labeled examples. In essence, by training the weight generator to produce CNN models with best possible performance on a query set, we teach the transformer to utilize unlabeled samples without having to manually introduce additional optimization objectives.
We also show that by generating all layers of the CNN model we can improve both the training and the test accuracies of CNN models below a certain size. The training accuracy can be viewed as a capability of the generated CNN model to adapt to tasks seen at training time, whereas the test accuracy computed on unseen categories characterizes the generalization capability of this model adaptation mechanism. We empirically demonstrate the expected increase of the model training and test accuracies with the increase of the layer size and the number of generated layers (see Figure 3). Interestingly, generation of the logits layer alone appears to be “sufficient” above a certain model size threshold. This threshold is expected to depend on the variability and the complexity of the training tasks. We conjecture that this might reflect the fact that models are sufficiently expressive for the benchmarks we considered, or that additional regularization is needed to prevent overfitting of the meta-learning models.
Finally, in addition to being able to decouple the complexity of the task distribution from the complexity of individual tasks, another important advantage of our method is that it allows to do learning end-to-end without relying on complex nested gradients optimization and other meta-learning approaches, where the number of unrolls steps is large.
The paper is structured as follows. In Section 2, we discuss the few-shot learning problem setup and highlight related work. Section 3 introduces our approach, discusses the motivation for choosing an attention-based model and shows how our approach can be used to meta-learn semi-supervised learning algorithms. In Section 4, we discuss our experimental results. Finally, in Section 5, we provide concluding remarks.
2 PROBLEM SETUP AND RELATED WORK
2.1 FEW-SHOT LEARNING
The main goal of a few-shot learning algorithm is to use a set of training tasks Ttrain for finding a learner aφ parameterized by φ that given new task domains can train to recognize novel classes using just a few samples per each class. The learner aφ can be thought of as a function that maps task description T = {(xi, ci)}ti=1 containing k labeled input samples {xi, ci} from n classes, to the weights θ = aφ(T ) of a trained model f(x;θ). The parameters φ are meta-optimized to maximize the performance of the model f(x; aφ(TS)) generated using a support set TS with x drawn from a query set TQ. Each task T = (TS , TQ) is randomly drawn from a space of training tasks Ttrain. Typically, TS and TQ are generated by first randomly choosing several distinct classes from the training set and then sampling examples without replacement from these classes to generate TS and TQ. In a classical “n-way-k-shot” setting, n is the number of classes randomly sampled in each episode, and k is the number of samples for each class in the support set TS .
The quality of a particular few-shot learning algorithm is typically evaluated using a separate test space of tasks Ttest. By forming Ttest from novel classes unseen at training time, we can evaluate
1without attention masking or positional encodings
generalization of different learners aφ. Best algorithms are expected to capture the structure present in the training set and to perform well on novel concepts. This structure may, for example, include certain properties of the distributions pc(x) with c being the class label, or the presence of particular discriminative (or alternatively invariant) features in the tasks from Ttrain.
2.2 RELATED WORK
Few-shot learning received a lot of attention from the deep learning community and while there are hundreds of few-shot learning methods, several common themes emerged in the past years. Here we outline several existing approaches, show how they relate to our method and discuss the prior work related to it.
Metric-Based Learning. One family of approaches involves mapping input samples into an embedding space and then using some nearest neighbor algorithm that relies on the computation of distances from a query sample embedding to the embedding computed using support samples with known labels. The metric used to compute the distance can either be the same for all tasks, or can be task-dependent. This family of methods includes, for example, such methods as Siamese networks (Koch et al., 2015), Matching Networks (Vinyals et al., 2016), Prototypical Networks (Snell et al., 2017), Relation Networks (Sung et al., 2018) and TADAM (Oreshkin et al., 2018). It has recently been argued (Tian et al., 2020) that methods based on building a powerful sample representation can frequently outperform numerous other approaches including many optimization-based methods. However, such approaches essentially amount to the “one-model solves all” approach and thus require larger models than needed to solve individual tasks.
Optimization-Based Learning. An alternative approach that can adapt the embedding to a new task is to incorporate optimization within the learning process. A variety of such methods are based on the approach called Model-Agnostic Meta-Learning, or MAML (Finn et al., 2017). In MAML, θ = aφ is obtained by initializing a DNN at θ0 = φ and then performing one or more gradient descent updates on a classification loss function L, i.e., computing θk+1 = θk − γ · (∂L/∂θ)(T ;θk). This approach was later refined (Antoniou et al., 2019) and built upon giving rise to Reptile (Nichol et al., 2018), LEO (Rusu et al., 2019) and others. One limitation of various MAML-inspired methods is that the knowledge about the set of training tasks Ttrain is distilled into parameters φ that have the same dimensionality as the model parameters θ. Therefore, for a very lightweight model f(x;θ) the capacity of the task-adaptation learner aφ is still limited by the size of θ. Methods that use parameterized preconditioners that otherwise do not impact the model f(x;θ) can alleviate this issue, but as with MAML, such methods can be difficult to train (Antoniou et al., 2019).
Weight Modulation and Generation. The idea of using a task specification to directly generate or modulate model weights has been previously explored in the generalized supervised learning context (Ratzlaff & Li, 2019) and in specific language models (Mahabadi et al., 2021; Tay et al., 2021; Ye & Ren, 2021). Some few-shot learning methods described above also employ this approach and use task-specific generation or modulation of the weights of the final classification model. For example, in LGM-Net (Li et al., 2019b) the matching network approach is used to generate a few layers on top of a task-agnostic embedding. Another approach abbreviated as LEO (Rusu et al., 2019) utilized a similar weight generation method to generate initial model weights from the training dataset in a few-shot learning setting, much like what is proposed in this article. However, in (Rusu et al., 2019), the generated weights were also refined using several SGD steps similar to how it is done in MAML. Here we explore a similar idea, but largely inspired by the HYPERNETWORK approach (Ha et al., 2017), we instead propose to directly generate an entire task-specific CNN model. Unlike LEO, we do not rely on pre-computed embeddings for images and generate the model in a single step without additional SGD steps, which simplifies and stabilizes training.
Transformers in Computer Vision and Few-Shot Learning. Transformer models (Vaswani et al., 2017) originally proposed for natural language understanding applications had since become a useful tool in practically every subfield of deep learning. In computer vision, transformers have recently seen an explosion of applications ranging from state-of-the-art image classification results (Dosovitskiy et al., 2021; Touvron et al., 2021) to object detection (Carion et al., 2020; Zhu et al., 2021), segmentation (Ye et al., 2019), image super-resolution (Yang et al., 2020), image generation
(Chen et al., 2021) and many others. There are also several notable applications in few-shot image classification. For example, in Liu et al. (2021), the transformer model was used for generating universal representations in the multi-domain few-shot learning scenario. And closely related to our approach, in Ye et al. (2020), the authors proposed to accomplish embedding adaptation with the help of transformer models. Unlike our method that generates an entire end-to-end image classification model, this approach applied a task-dependent perturbation to an embedding generated by an independent task-agnostic feature extractor. In (Gidaris & Komodakis, 2018), a simplified attention-based model was used for the final layer generation.
3 OUR APPROACH
In this section, we describe our approach to few-shot learning that we call a HYPERTRANSFORMER (HT) and justify the choice of the self-attention mechanism as its basis.
3.1 FEW-SHOT LEARNING MODEL
A learner aφ (as introduced in Section 2.1) is the core of a few-shot learning algorithm and in this paper, we choose aφ to be a transformer-based model that takes a task description T = {(xi, ci)}ti=1 as input and produces weights for some or all layers {θ`|` ∈ [1, L]} of the generated CNN model. For layers with non-generated weights, they are learned in the end-to-end fashion as ordinary taskagnostic variables. In our experiments generated CNN models contain a set of convolutional layers and a final fully-connected logits layer. Here θ` are the parameters of the `-th layer and L is the total number of layers including the final logits layer (with index L). The weights are generated layer by layer starting from the first layer: θ1(T ) → θ2(θ1;T ) → · · · → θL(θ1,...,L−1;T ). Here we use θa,...,b as a short notation for (θa, θa+1, . . . , θb).
Shared and local features. The local features at layer ` are produced by a convolutional feature extractor h`φl(z ` i ) applied to the activations of the previous layer z ` i := f`−1(xi; θ1,...,`−1) for ` > 1 and z1i := xi. In other words, the transformer for layer ` receives
I` := {(
sφs(xi), h ` φl (f`−1(xi, θ1,...,`−1)), ci )} i=1,...,n .
The intuition behind the local feature extractor is that the choice of the layer weights should primarily depend on the inputs received by this layer. The shared features, on the other hand, are the same
for all layers and are produced by a separate trainable convolutional neural network sφs(xi). Their purpose is to modulate each layer’s weight generator with a global high-level sample embedding that, unlike the local embedding, is independent of the generated weights and is also fully shared between all layer generators.
Encoding and decoding transformer inputs and outputs. In our experiments we considered several different architectures of the transformer model, different ways of feeding I` into it and different ways of converting transformer outputs into the CNN weights.
Input samples were encoded by concatenating local and shared features from I` to trainable label embeddings ξ(c) with ξ : [1, n]→ Rd. Here n is the number of classes per episode and d is a chosen size of the label encoding. Note that the label embeddings do not contain semantic information, but rather act as placeholders to differentiate between distinct classes.
Along with the input samples, the sequence passed to the transformer was also populated with special learnable placeholder tokens2, each associated with a particular slice of the to-be-generated weight tensor. After the entire input sequence was processed by the transformer, we read out model outputs associated with the weight slice placeholder tokens and assembled output weight slices into the final weight tensors (see Fig. 2). In our experiments we also considered two different ways of encoding k × k × ninput × noutput convolutional kernels: (a) generating noutput weight slices with each output token having a dimension of k2 × ninput (we call it “output allocation”), (b) k2 weight slices of size ninput×noutput (“spatial allocation”). We show these results in Supplementary Materials.
Transformer model. The input discussed above was passed through a sequence of transformer encoder layers and the output weight tokens were then concatenated into a full weight tensor (see Fig. 2). Experiments with alternative architectures employing transformer decoders are discussed in Supplementary Materials.
Training the model. The weight generation model uses the support set to produce the weights of some or all CNN model layers. This generated CNN model then processes the query set samples and the cross-entropy classification loss is computed. The weight generation parameters φ are then learned by optimizing this loss function using stochastic gradient descent.
3.2 REASONING BEHIND THE SELF-ATTENTION MECHANISM
The choice of self-attention mechanism for the weight generator is not arbitrary. One motivating reason behind this choice is that the output produced by generator with the basic self-attention is by design invariant to input permutations, i.e., permutations of samples in the training dataset. This also makes it suitable for processing unbalanced batches and batches with a variable number of samples (see Sec. 4.3). Now we show that self-attention can express several intuitive algorithms, thus further motivating its utility.
2each token is a learnable d-dimensional vector padded with zeros to the size of the input sample token
Supervised learning. Self-attention in its rudimentary form can implement a method similar to cosine-similarity-based sample weighting encoded in the logits layer3 with weightsW :
Wij ∼ n∑
m=1
y (m) i e (m) j , (1)
which can also be viewed as a result of applying a single gradient descent step on the cross-entropy loss (see Appendix A). Here n is the total number of support-set samples {x(m)|m ∈ [1, n]} and e(m), y(m) are the embedding vector and the one-hot label corresponding to x(m).
The approach can be outlined (see more details in Appendix A) as follows. The self-attention operation receives encoded input samples Ik = (ξ(ck), ek) and weight placeholders (µ(i), 0) as its input. If each weight slice Wi,· represented by a particular token (µ(i), 0) produces a query Qi that only attends to keys Kk corresponding to samples Ik with labels ck matching i and the values of these samples are set to their embeddings ek, then the self-attention operation will essentially average the embeddings of all samples assigned label i thus matching the first term inW in equation 1.
Semi-supervised learning. A similar self-attention mechanism can also be designed to produce logits layer weights when the support set contains some unlabeled samples. The mechanism first propagates classes of labeled samples to similar unlabeled samples. This can be achieved by choosing the queries and the keys of the samples to be proportional to their embeddings. The attention map for sample i would then be defined by a softmax of ei · ej , or in other words would be proportional to exp(ei · ej). Choosing sample values to be proportional to the class tokens, we can then propagate a class of a labeled sample ej to a nearby unlabeled sample with embedding ei, for which ei · ej is sufficiently large. If the self-attention module is “residual”, i.e., the output of the self-attention operation is added to the original input, like it is done in the transformer model, then this additive update would essentially “mark” an unlabeled sample by the propagated class (albeit this term might have a small norm). The second self-attention layer can then be designed similarly to the supervised case. If label embeddings are orthogonal, then even a small component of a class label can be sufficient for a weight slice to attend to it thus adding its embedding to the final weight.
4 EXPERIMENTS
In this section, we present HYPERTRANSFORMER (HT) experimental results and discuss the implications of our empirical findings.
4.1 DATASETS AND SETUP
Datasets. For our experiments, we chose several most widely used few-shot datasets including OMNIGLOT, MINIIMAGENET and TIEREDIMAGENET. MINIIMAGENET contains a relatively small set of labels and is arguably the simplest to overfit to. Because of this and since in many recent publications MINIIMAGENET was replaced with a more realistic TIEREDIMAGENET datasets, we conduct many of our experiments and ablation studies using OMNIGLOT and TIEREDIMAGENET datasets.
Models. While HT can support generation of arbitrarily large weight tensors by flattening and slicing the entire weight tensor, in this work, we limit our experiments to HT models that generate slices encoding individual output channels directly. For the target models we focus on 4-layer architectures identical to those used in MAML++ and numerous other papers. Generating larger architectures such as RESNET and WIDERESNET will be the subject of our future work. More specifically, we used a sequence of four 3× 3 convolutional layers with the same number of output channels followed by batch normalization layers, nonlinearities and max-pooling stride-2 layers. All BN variables were learned and not generated4.
4.2 SUPERVISED RESULTS WITH LOGITS LAYER GENERATION
Our first experiments compared the proposed HT approach with MAML++ on OMNIGLOT, MINIIMAGENET and TIEREDIMAGENET datasets (see Table 1). Interestingly, for models that had more than 8 channels per layer, the results obtained with HT generating the final logits layer proved to be nearly identical to those where HT was used to generate all CNN layers (see Section 4.4). In our experiments the number of local features was chosen to be the same as the number of model channels and the shared feature had a dimension of 32 regardless of the model size. The shared feature extractor was a simple 4-layer convolutional model with batch normalization and stride-2 3× 3 convolutional kernels. Local feature extractors were two-layer convolutional models with outputs of both layers averaged over the spatial dimensions and concatenated to produce the final local feature. For all tasks except 5-shot MINIIMAGENET our transformer had 3 layers, used a simple sequence of encoder layers (Figure 2b-i) and used the “output allocation” of weight slices (Section 3.1). Experiments with the encoder-decoder transformer architecture can be found in Appendix D. The 5-shot MINIIMAGENET results presented in Table 1 were obtained with a simplified transformer model that had 1 layer, and did not have the final fully-connected layer and nonlinearity. This proved necessary for reducing model overfitting of this smaller dataset. Other model parameters are described in detail in Appendix B.
Results obtained with our method in a few-shot setting (see Table 1) are frequently better than MAML++ results, especially on smaller models, which can be attributed to parameter disentanglement between the weight generator and the CNN model. While the improvement over MAML++ gets smaller with the growing size of the generated CNN, our results on MINIIMAGENET and TIEREDIMAGENET appear to be comparable to those obtained with numerous other advanced methods (see Table 2). A discussion of additional comparisons to LGM-Net (Li et al., 2019b) and LEO (Rusu et al., 2019) using a different setup (which is why they could not be included in Table 2) and showing almost identical performance can be found in Appendix C. While the learned HT model could perform a relatively simple calculation on high-dimensional sample features, perhaps not too different from that in equation 1, our brief analysis of the parameter space (see Appendix D) shows that using simpler 1-layer transformers leads to a modest decrease of the test accuracy and a greater drop in the training accuracy for smaller models. We observed that the results in Table 1 could be improved even further by increasing the feature sizes (see Appendix D), but we did not pursue an exhaustive optimization in the parameter space.
It is worth noting that overfitting leading to a good performance on tasks composed of seen categories, but poor generalization to unseen categories, may still have practical applications. Specifically, if the actual task relies on classes seen at the training time, we can generate a model customized to a particular task in a single pass without having to perform any SGD steps to fine-tune the model. This is useful if, for example, the client model needs to be adjusted to a particular set of known classes most widely used by this client. We also anticipate that with more complex data augmentations and additional synthetic tasks, more complex transformer-based models can further improve their performance on the test set and a deeper analysis of such techniques will be the subject of our future work.
³ Here we assume that the embeddings e are unbiased, i.e., 〈e_i〉 = 0.
⁴ Experiments with generated BN variables did not show much difference with this simpler approach.
4.3 SEMI-SUPERVISED RESULTS WITH LOGITS LAYER GENERATION
In our approach, the weight generation model is trained by optimizing the loss calculated on the query set, and therefore any additional information about the task, including unlabeled samples, can be provided as part of the support set to the weight generator without having to alter the optimization objective. This allows us to tackle the semi-supervised few-shot learning problem without making any substantial changes to the model or the training approach. In our implementation, we simply added unlabeled samples to the support set and marked them with an auxiliary learned “unlabeled” token ξ̂ in place of the label encoding ξ(c).
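A minimal sketch of this support-set encoding (all names, and the convention that labels below zero mark unlabeled samples, are ours, not the paper's):

import torch
import torch.nn as nn

class SupportTokenEncoder(nn.Module):
    # Concatenates each support sample's feature vector with either its class
    # embedding xi(c) or a single learned "unlabeled" token.
    def __init__(self, num_classes: int, label_dim: int):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, label_dim)
        self.unlabeled = nn.Parameter(torch.zeros(label_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        codes = torch.where(labels.unsqueeze(-1) >= 0,
                            self.label_emb(labels.clamp(min=0)),
                            self.unlabeled.expand(len(labels), -1))
        return torch.cat([codes, features], dim=-1)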
Since OMNIGLOT is typically characterized by very high accuracies in the 97%–99% range, we conducted all our experiments with TIEREDIMAGENET. Results of our experiments, presented in Table 3, show that adding unlabeled samples leads to a substantial increase of the final test accuracy. Furthermore, notice that the model achieves its best performance when the number of transformer layers is greater than one. This is consistent with the basic mechanism discussed in Section 3.2, which requires two self-attention layers to function.
It is worth noting that adding more unlabeled samples to the support set makes our model more difficult to train: it can get stuck producing CNNs with essentially random outputs. Our solution was to introduce unlabeled samples incrementally during training. This was implemented by masking out some unlabeled samples at the beginning of training and then gradually reducing the masking probability over time.
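A possible implementation of this curriculum is sketched below (the linear schedule is our assumption; the text only states that the masking probability is gradually reduced):

import numpy as np

def unlabeled_keep_mask(num_unlabeled: int, step: int, warmup_steps: int, rng=None):
    # Mask most unlabeled samples out early in training and linearly anneal the
    # masking probability to zero over `warmup_steps`.
    rng = rng or np.random.default_rng()
    p_mask = max(0.0, 1.0 - step / warmup_steps)
    return rng.random(num_unlabeled) >= p_mask   # True = keep this unlabeled sample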
4.4 GENERATING MORE MODEL LAYERS
We demonstrated that the HT model can outperform MAML++ on common few-shot learning datasets by generating just the last logits layer of the CNN model. But under what conditions can it be advantageous to generate additional CNN layers? As we show here, generating all CNN layers with a multi-layer transformer can lead to a significant performance improvement for CNN models below a particular size.
We demonstrated this by conducting experiments in which some layers were generated and some layers were learned (usually the first few layers of the CNN). For the OMNIGLOT dataset, we saw that both training and test accuracies for 4-channel and 6-channel CNNs increased with the number of generated layers (see Fig. 3 and Table 4 in the Appendix), and using more complex transformer models with 2 or more encoder layers improved both training and test accuracies of fully-generated CNN models of this size (see Appendix D). However, as the size of the model increased and reached 8 channels, generating the last logits layer alone proved to be sufficient for getting the best results on OMNIGLOT and TIEREDIMAGENET.
The positive effect of generating convolutional layers can also be observed in shallow models with large convolutional kernels and large strides, where model performance can be much more sensitive to a proper choice of weights. For example, in a 16-channel model with two convolutional kernels of size 9 and a stride of 4, the overall test accuracy for a model generating only the final convolutional layer was about 1% lower than the accuracies of models generating at least one additional convolutional filter. We also speculate that as the complexity of the task increases, generating some or all intermediate network layers should become more important for achieving optimal performance. Verifying this hypothesis and understanding the “boundary” in the model space between the two regimes where a static backbone is sufficient, or no longer sufficient, will be the subject of our future work.
5 CONCLUSIONS
In this work, we proposed the HyperTransformer (HT), a novel transformer-based model that generates all weights of a CNN model directly from a few-shot support set. This approach allows us to use a high-capacity model for encoding task-dependent variations in the weights of a smaller model. We demonstrate that, by generating the last logits layer alone, the transformer-based weight generator beats or matches the performance of multiple traditional learning methods on several few-shot benchmarks. More importantly, we showed that HT can be straightforwardly extended to handle unlabeled samples that might be present in the support set, and our experiments demonstrate a considerable few-shot performance improvement in the presence of unlabeled data. Finally, we explore the impact of the transformer-encoded model diversity in CNN models of different sizes. We use HT to generate some or all convolutional kernels and biases and show that for sufficiently small models, adjusting all model parameters further improves their few-shot learning performance. | 1. What is the focus of the paper regarding few-shot image classification?
2. What are the strengths and advantages of the proposed method compared to existing approaches?
3. What are the weaknesses of the paper, particularly regarding performance and model complexity?
4. How does the reviewer assess the novelty and insightfulness of the paper's contributions?
5. Are there any concerns or suggestions regarding the clarity and focus of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a method for solving few-shot image classification that generates all the weights of a very small CNN model. This has the advantage that the generated model can be very small and compact, compared, for example, against some embedding methods that might require large image classification networks to run as a pre-process. The method is also able to handle unlabeled samples in a fairly natural way.
Review
Overall I think the method is fairly interesting and a useful contribution to the "small model" few-shot image classification problem. I think the main weakness of the paper is that performance seems fairly comparable to existing methods if one is willing to use a "test time" model of any complexity. While there are several "meta network" approaches that directly generate the weights of another, simpler network, I think there are a few useful/generalizable insights that the paper presents that will be of use to other methods; in particular, the two-stage transformer visualized in Fig 1. The separation into a high-capacity model for parsing the support set and a low-capacity model for executing is a natural one for this problem. However, in most practical cases I have worked with, the cost of a larger backbone network is fairly minimal on modern hardware; it's still nice to have smaller models where possible, and the paper demonstrates that this can be achieved with comparable accuracy.
When the paper initially claims "the transformer model is invariant to sample permutations" I was slightly confused as this depends on how exactly the transformers are used (in NLP the token order is certainly relevant.) Some earlier clarity about how the paper intends to use transformers would help here.
Reasserting early on that this is focused entirely on few-shot image classification will also help here; there are plenty of other relevant few-shot problems (value regression, image regression/translation, document classification, and so on). The paper doesn't clarify this at all in the abstract or early part of the intro.
I am familiar with this general topic but as it is not my primary area of research, I may have missed some relevant prior work here. |
ICLR | Title
HyperTransformer: Attention-Based CNN Model Generation from Few Samples
Abstract
In this work we propose a HyperTransformer, a transformer based model that generates all weights of a CNN network directly from support samples. This approach uses a high-capacity model for encoding task-dependent variations in the weights of a smaller model. We show on multiple few-shot image classification benchmarks with different model sizes and datasets that our method beats or matches the performance of many traditional learning methods. Specifically, we show that for very small target architectures, our method can generate significantly better performing models than traditional few-shot learning approaches. For larger models we discover that generating the last layer alone allows us to produce competitive or better results while being end-to-end differentiable. Finally, we extend our approach to a semi-supervised regime utilizing unlabeled samples in the support set and further improving few-shot performance in the presence of unlabeled data.
1 INTRODUCTION
In few-shot learning, a conventional machine learning paradigm of fitting a parametric model to training data is taken to a limit of extreme data scarcity where entire categories are introduced with just one or few examples. A generic approach to solving this problem uses training data to identify parameters φ of a learner aφ that given a small batch of examples for a particular task (called a support set) can solve this task on unseen data (called a query set).
One broad family of few-shot image classification methods, frequently referred to as metric-based learning, relies on pretraining an embedding eφ(·) and then using some distance in the embedding space to label query samples based on their closeness to known labeled support samples. These methods proved effective on numerous benchmarks (see Tian et al. (2020) for a review and references); however, the capabilities of the learner are limited by the capacity of the architecture itself, as these methods try to build a universal embedding function.
On the other hand, optimization-based methods such as seminal MAML algorithm (Finn et al., 2017) can fine-tune the embedding eφ by performing additional SGD updates on all parameters φ of the model producing it. This partially addresses the constraints of metric-based methods by learning a new embedding for each new task. However, in many of these methods, all the knowledge extracted during training on different tasks and describing the learner aφ still has to “fit” into the same number of parameters as the model itself. Such limitation becomes more severe as the target models get smaller, while the richness of the task set increases.
In this paper we propose a new few-shot learning approach that allows us to decouple the complexity of the task space from the complexity of individual tasks. The main idea is to use the transformer model (Vaswani et al., 2017) that given a few-shot task episode generates an entire inference model by producing all model weights in a single pass. This allows us to encode the intricacies of the available training data inside the transformer model, while still producing specialized tiny models that can solve individual tasks. Reducing the size of the generated model and moving the computational overhead to the transformer-based weight generator, we can lower the cost of the inference on new images. This can dramatically reduce the overall computation cost in cases, where the tasks change infrequently and hence the weight generator is only used sporadically.
We start by observing that the self-attention mechanism is well suited to be an underlying mechanism for a few-shot CNN weight generator. In contrast with earlier CNN- (Zhao et al., 2020) or
BiLSTM-based approaches (Ravi & Larochelle, 2017), the vanilla¹ transformer model is invariant to sample permutations and can handle unbalanced datasets with a varying number of samples per category. Furthermore, we demonstrate that a single-layer self-attention model can replicate a simplified gradient-descent-based learning algorithm. Using a transformer-based model to generate the logits layer on top of a conventionally learned embedding, we achieve competitive results on several common few-shot learning benchmarks. Varying transformer parameters, we demonstrate that this high performance can be attributed to the additional capacity of the transformer model, which decouples its complexity from that of the generated CNN.
We then extend our method to support unlabeled samples by using a special input token concatenated to unlabeled samples to encode unknown labels. In our experiments, we observe that using transformers with two or more layers, we achieve better performance by adding unlabeled data into the support set. We explain our results in Section 4.3 where we show that such a transformer with at least two layers can encode the nearest-neighbor style algorithm that associates unlabeled samples with similarly labeled examples. In essence, by training the weight generator to produce CNN models with best possible performance on a query set, we teach the transformer to utilize unlabeled samples without having to manually introduce additional optimization objectives.
We also show that by generating all layers of the CNN model we can improve both the training and the test accuracies of CNN models below a certain size. The training accuracy can be viewed as a capability of the generated CNN model to adapt to tasks seen at training time, whereas the test accuracy computed on unseen categories characterizes the generalization capability of this model adaptation mechanism. We empirically demonstrate the expected increase of the model training and test accuracies with the increase of the layer size and the number of generated layers (see Figure 3). Interestingly, generation of the logits layer alone appears to be “sufficient” above a certain model size threshold. This threshold is expected to depend on the variability and the complexity of the training tasks. We conjecture that this might reflect the fact that models are sufficiently expressive for the benchmarks we considered, or that additional regularization is needed to prevent overfitting of the meta-learning models.
Finally, in addition to being able to decouple the complexity of the task distribution from the complexity of individual tasks, another important advantage of our method is that it allows learning end-to-end without relying on complex nested gradient optimization or other meta-learning approaches where the number of unroll steps is large.
The paper is structured as follows. In Section 2, we discuss the few-shot learning problem setup and highlight related work. Section 3 introduces our approach, discusses the motivation for choosing an attention-based model and shows how our approach can be used to meta-learn semi-supervised learning algorithms. In Section 4, we discuss our experimental results. Finally, in Section 5, we provide concluding remarks.
2 PROBLEM SETUP AND RELATED WORK
2.1 FEW-SHOT LEARNING
The main goal of a few-shot learning algorithm is to use a set of training tasks Ttrain to find a learner aφ, parameterized by φ, that given new task domains can train to recognize novel classes using just a few samples per class. The learner aφ can be thought of as a function that maps a task description T = {(x_i, c_i)}_{i=1}^t, containing k labeled input samples {x_i, c_i} from n classes, to the weights θ = aφ(T) of a trained model f(x; θ). The parameters φ are meta-optimized to maximize the performance of the model f(x; aφ(TS)) generated using a support set TS, with x drawn from a query set TQ. Each task T = (TS, TQ) is randomly drawn from a space of training tasks Ttrain. Typically, TS and TQ are generated by first randomly choosing several distinct classes from the training set and then sampling examples without replacement from these classes to generate TS and TQ. In the classical “n-way-k-shot” setting, n is the number of classes randomly sampled in each episode, and k is the number of samples for each class in the support set TS.
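For readers unfamiliar with the episode construction, a minimal sketch of n-way-k-shot sampling (our own illustration; `dataset` is assumed to be an iterable of (example, class) pairs):

import random
from collections import defaultdict

def sample_episode(dataset, n_way, k_shot, q_queries):
    # Pick n distinct classes, then draw k support and q query examples per class
    # without replacement, relabelling the chosen classes 0..n-1 for this episode.
    by_class = defaultdict(list)
    for x, c in dataset:
        by_class[c].append(x)
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        xs = random.sample(by_class[c], k_shot + q_queries)
        support += [(x, episode_label) for x in xs[:k_shot]]
        query += [(x, episode_label) for x in xs[k_shot:]]
    return support, query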
The quality of a particular few-shot learning algorithm is typically evaluated using a separate test space of tasks Ttest. By forming Ttest from novel classes unseen at training time, we can evaluate
¹ Without attention masking or positional encodings.
generalization of different learners aφ. Best algorithms are expected to capture the structure present in the training set and to perform well on novel concepts. This structure may, for example, include certain properties of the distributions pc(x) with c being the class label, or the presence of particular discriminative (or alternatively invariant) features in the tasks from Ttrain.
2.2 RELATED WORK
Few-shot learning received a lot of attention from the deep learning community and while there are hundreds of few-shot learning methods, several common themes emerged in the past years. Here we outline several existing approaches, show how they relate to our method and discuss the prior work related to it.
Metric-Based Learning. One family of approaches involves mapping input samples into an embedding space and then using some nearest neighbor algorithm that relies on the computation of distances from a query sample embedding to the embedding computed using support samples with known labels. The metric used to compute the distance can either be the same for all tasks, or can be task-dependent. This family of methods includes, for example, such methods as Siamese networks (Koch et al., 2015), Matching Networks (Vinyals et al., 2016), Prototypical Networks (Snell et al., 2017), Relation Networks (Sung et al., 2018) and TADAM (Oreshkin et al., 2018). It has recently been argued (Tian et al., 2020) that methods based on building a powerful sample representation can frequently outperform numerous other approaches including many optimization-based methods. However, such approaches essentially amount to the “one-model solves all” approach and thus require larger models than needed to solve individual tasks.
Optimization-Based Learning. An alternative approach that can adapt the embedding to a new task is to incorporate optimization within the learning process. A variety of such methods are based on the approach called Model-Agnostic Meta-Learning, or MAML (Finn et al., 2017). In MAML, θ = aφ(T) is obtained by initializing a DNN at θ_0 = φ and then performing one or more gradient descent updates on a classification loss function L, i.e., computing θ_{k+1} = θ_k − γ · (∂L/∂θ)(T; θ_k). This approach was later refined (Antoniou et al., 2019) and built upon, giving rise to Reptile (Nichol et al., 2018), LEO (Rusu et al., 2019) and others. One limitation of various MAML-inspired methods is that the knowledge about the set of training tasks Ttrain is distilled into parameters φ that have the same dimensionality as the model parameters θ. Therefore, for a very lightweight model f(x; θ) the capacity of the task-adaptation learner aφ is still limited by the size of θ. Methods that use parameterized preconditioners that otherwise do not impact the model f(x; θ) can alleviate this issue, but as with MAML, such methods can be difficult to train (Antoniou et al., 2019).
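A minimal sketch of the MAML inner-loop update described above (our own illustration, assuming PyTorch >= 2.0 for torch.func; `params` is a dict of the model's parameters, and gamma and the number of steps are hyperparameters):

import torch
from torch.func import functional_call, grad

def maml_inner_loop(model, params, support_x, support_y, loss_fn, gamma=0.01, steps=1):
    # theta_{k+1} = theta_k - gamma * dL/dtheta, evaluated on the support set.
    for _ in range(steps):
        def task_loss(p):
            return loss_fn(functional_call(model, p, (support_x,)), support_y)
        grads = grad(task_loss)(params)
        params = {name: p - gamma * grads[name] for name, p in params.items()}
    return params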
Weight Modulation and Generation. The idea of using a task specification to directly generate or modulate model weights has been previously explored in the generalized supervised learning context (Ratzlaff & Li, 2019) and in specific language models (Mahabadi et al., 2021; Tay et al., 2021; Ye & Ren, 2021). Some few-shot learning methods described above also employ this approach and use task-specific generation or modulation of the weights of the final classification model. For example, in LGM-Net (Li et al., 2019b) the matching network approach is used to generate a few layers on top of a task-agnostic embedding. Another approach abbreviated as LEO (Rusu et al., 2019) utilized a similar weight generation method to generate initial model weights from the training dataset in a few-shot learning setting, much like what is proposed in this article. However, in (Rusu et al., 2019), the generated weights were also refined using several SGD steps similar to how it is done in MAML. Here we explore a similar idea, but largely inspired by the HYPERNETWORK approach (Ha et al., 2017), we instead propose to directly generate an entire task-specific CNN model. Unlike LEO, we do not rely on pre-computed embeddings for images and generate the model in a single step without additional SGD steps, which simplifies and stabilizes training.
Transformers in Computer Vision and Few-Shot Learning. Transformer models (Vaswani et al., 2017) originally proposed for natural language understanding applications had since become a useful tool in practically every subfield of deep learning. In computer vision, transformers have recently seen an explosion of applications ranging from state-of-the-art image classification results (Dosovitskiy et al., 2021; Touvron et al., 2021) to object detection (Carion et al., 2020; Zhu et al., 2021), segmentation (Ye et al., 2019), image super-resolution (Yang et al., 2020), image generation
(Chen et al., 2021) and many others. There are also several notable applications in few-shot image classification. For example, in Liu et al. (2021), the transformer model was used for generating universal representations in the multi-domain few-shot learning scenario. And closely related to our approach, in Ye et al. (2020), the authors proposed to accomplish embedding adaptation with the help of transformer models. Unlike our method that generates an entire end-to-end image classification model, this approach applied a task-dependent perturbation to an embedding generated by an independent task-agnostic feature extractor. In (Gidaris & Komodakis, 2018), a simplified attention-based model was used for the final layer generation.
3 OUR APPROACH
In this section, we describe our approach to few-shot learning that we call a HYPERTRANSFORMER (HT) and justify the choice of the self-attention mechanism as its basis.
3.1 FEW-SHOT LEARNING MODEL
A learner aφ (as introduced in Section 2.1) is the core of a few-shot learning algorithm, and in this paper we choose aφ to be a transformer-based model that takes a task description T = {(x_i, c_i)}_{i=1}^t as input and produces weights for some or all layers {θ_ℓ | ℓ ∈ [1, L]} of the generated CNN model. Layers with non-generated weights are learned in the end-to-end fashion as ordinary task-agnostic variables. In our experiments generated CNN models contain a set of convolutional layers and a final fully-connected logits layer. Here θ_ℓ are the parameters of the ℓ-th layer and L is the total number of layers including the final logits layer (with index L). The weights are generated layer by layer starting from the first layer: θ_1(T) → θ_2(θ_1; T) → · · · → θ_L(θ_{1,…,L−1}; T). Here we use θ_{a,…,b} as a short notation for (θ_a, θ_{a+1}, …, θ_b).
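Schematically, the layer-by-layer generation can be written as follows (a sketch with our own names; the callables stand in for the feature extractors and per-layer transformers defined below):

def generate_cnn_weights(support_set, layer_generators, build_layer_input):
    # theta_1 -> theta_2(theta_1) -> ... : the transformer for layer l only sees
    # features computed with the already-generated weights theta_1..theta_{l-1}.
    thetas = []
    for layer_idx, generator in enumerate(layer_generators, start=1):
        tokens = build_layer_input(layer_idx, support_set, thetas)  # builds I_l
        thetas.append(generator(tokens))                            # produces theta_l
    return thetas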
Shared and local features. The local features at layer ℓ are produced by a convolutional feature extractor h^ℓ_{φ_l}(z^ℓ_i) applied to the activations of the previous layer, z^ℓ_i := f_{ℓ−1}(x_i; θ_{1,…,ℓ−1}) for ℓ > 1 and z^1_i := x_i. In other words, the transformer for layer ℓ receives

I_ℓ := { ( s_{φ_s}(x_i), h^ℓ_{φ_l}(f_{ℓ−1}(x_i; θ_{1,…,ℓ−1})), c_i ) }_{i=1,…,n}.
The intuition behind the local feature extractor is that the choice of the layer weights should primarily depend on the inputs received by this layer. The shared features, on the other hand, are the same
for all layers and are produced by a separate trainable convolutional neural network s_{φ_s}(x_i). Their purpose is to modulate each layer’s weight generator with a global high-level sample embedding that, unlike the local embedding, is independent of the generated weights and is also fully shared between all layer generators.
Encoding and decoding transformer inputs and outputs. In our experiments we considered several different architectures of the transformer model, different ways of feeding I_ℓ into it, and different ways of converting transformer outputs into the CNN weights.
Input samples were encoded by concatenating local and shared features from I_ℓ to trainable label embeddings ξ(c) with ξ : [1, n] → R^d. Here n is the number of classes per episode and d is a chosen size of the label encoding. Note that the label embeddings do not contain semantic information, but rather act as placeholders to differentiate between distinct classes.
Along with the input samples, the sequence passed to the transformer was also populated with special learnable placeholder tokens², each associated with a particular slice of the to-be-generated weight tensor. After the entire input sequence was processed by the transformer, we read out the model outputs associated with the weight-slice placeholder tokens and assembled the output weight slices into the final weight tensors (see Fig. 2). In our experiments we also considered two different ways of encoding k × k × n_input × n_output convolutional kernels: (a) generating n_output weight slices with each output token having a dimension of k² × n_input (we call it “output allocation”); (b) generating k² weight slices of size n_input × n_output (“spatial allocation”). We show these results in the Supplementary Materials.
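As a small illustration of the “output allocation” case (the within-token ordering of the k·k·n_input entries is our assumption):

import torch

def assemble_conv_kernel(weight_tokens: torch.Tensor, k: int, n_input: int, n_output: int) -> torch.Tensor:
    # n_output output tokens, each of size k*k*n_input, are stacked and reshaped
    # into an (n_output, n_input, k, k) convolution weight tensor.
    assert weight_tokens.shape == (n_output, k * k * n_input)
    return weight_tokens.view(n_output, n_input, k, k)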
Transformer model. The input discussed above was passed through a sequence of transformer encoder layers and the output weight tokens were then concatenated into a full weight tensor (see Fig. 2). Experiments with alternative architectures employing transformer decoders are discussed in Supplementary Materials.
Training the model. The weight generation model uses the support set to produce the weights of some or all CNN model layers. This generated CNN model then processes the query set samples and the cross-entropy classification loss is computed. The weight generation parameters φ are then learned by optimizing this loss function using stochastic gradient descent.
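Putting the pieces together, one HT training step can be sketched as follows (our own schematic; `weight_generator` and `run_cnn` stand in for the HT model and a functional forward pass of the generated CNN):

def ht_training_step(weight_generator, run_cnn, support, query, loss_fn, optimizer):
    # Generate CNN weights from the support set, evaluate the generated CNN on the
    # query set, and update only the generator's parameters phi.
    (support_x, support_y), (query_x, query_y) = support, query
    theta = weight_generator(support_x, support_y)   # gradients flow through generation
    logits = run_cnn(query_x, theta)                 # forward pass with generated weights
    loss = loss_fn(logits, query_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())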
3.2 REASONING BEHIND THE SELF-ATTENTION MECHANISM
The choice of the self-attention mechanism for the weight generator is not arbitrary. One motivating reason behind this choice is that the output produced by a generator built on basic self-attention is, by design, invariant to input permutations, i.e., permutations of samples in the training dataset. This also makes it suitable for processing unbalanced batches and batches with a variable number of samples (see Sec. 4.3). We now show that self-attention can express several intuitive algorithms, thus further motivating its utility.
² Each token is a learnable d-dimensional vector padded with zeros to the size of the input sample token.
Supervised learning. Self-attention in its rudimentary form can implement a method similar to cosine-similarity-based sample weighting encoded in the logits layer³ with weights W:
W_ij ∼ Σ_{m=1}^{n} y_i^(m) e_j^(m),    (1)
which can also be viewed as the result of applying a single gradient descent step on the cross-entropy loss (see Appendix A). Here n is the total number of support-set samples {x^(m) | m ∈ [1, n]}, and e^(m), y^(m) are the embedding vector and the one-hot label corresponding to x^(m).
The approach can be outlined as follows (see Appendix A for more details). The self-attention operation receives encoded input samples I_k = (ξ(c_k), e_k) and weight placeholders (µ^(i), 0) as its input. If each weight slice W_{i,·}, represented by a particular token (µ^(i), 0), produces a query Q_i that only attends to keys K_k corresponding to samples I_k whose labels c_k match i, and the values of these samples are set to their embeddings e_k, then the self-attention operation will essentially average the embeddings of all samples assigned label i, thus matching the first term of W in equation 1.
Semi-supervised learning. A similar self-attention mechanism can also be designed to produce logits-layer weights when the support set contains some unlabeled samples. The mechanism first propagates classes of labeled samples to similar unlabeled samples. This can be achieved by choosing the queries and the keys of the samples to be proportional to their embeddings. The attention map for sample i is then defined by a softmax over e_i · e_j, or in other words is proportional to exp(e_i · e_j). Choosing sample values to be proportional to the class tokens, we can then propagate the class of a labeled sample e_j to a nearby unlabeled sample with embedding e_i for which e_i · e_j is sufficiently large. If the self-attention module is “residual”, i.e., the output of the self-attention operation is added to the original input, as is done in the transformer model, then this additive update essentially “marks” an unlabeled sample with the propagated class (albeit this term might have a small norm). The second self-attention layer can then be designed similarly to the supervised case. If label embeddings are orthogonal, then even a small component of a class label can be sufficient for a weight slice to attend to it, thus adding its embedding to the final weight.
4 EXPERIMENTS
In this section, we present HYPERTRANSFORMER (HT) experimental results and discuss the implications of our empirical findings.
4.1 DATASETS AND SETUP
Datasets. For our experiments, we chose several of the most widely used few-shot datasets: OMNIGLOT, MINIIMAGENET and TIEREDIMAGENET. MINIIMAGENET contains a relatively small set of labels and is arguably the simplest to overfit to. Because of this, and since in many recent publications MINIIMAGENET has been replaced with the more realistic TIEREDIMAGENET dataset, we conduct many of our experiments and ablation studies using the OMNIGLOT and TIEREDIMAGENET datasets.
Models. While HT can support generation of arbitrarily large weight tensors by flattening and slicing the entire weight tensor, in this work we limit our experiments to HT models that generate slices encoding individual output channels directly. For the target models we focus on 4-layer architectures identical to those used in MAML++ and numerous other papers. Generating larger architectures such as RESNET and WIDERESNET will be the subject of our future work. More specifically, we used a sequence of four 3×3 convolutional layers with the same number of output channels, followed by batch normalization layers, nonlinearities and stride-2 max-pooling layers. All BN variables were learned and not generated⁴.
4.2 SUPERVISED RESULTS WITH LOGITS LAYER GENERATION
Our first experiments compared the proposed HT approach with MAML++ on OMNIGLOT, MINIIMAGENET and TIEREDIMAGENET datasets (see Table 1). Interestingly, for models that had more than 8 channels per layer, the results obtained with HT generating the final logits layer proved to be nearly identical to those where HT was used to generate all CNN layers (see Section 4.4). In our experiments the number of local features was chosen to be the same as the number of model channels and the shared feature had a dimension of 32 regardless of the model size. The shared feature extractor was a simple 4-layer convolutional model with batch normalization and stride-2 3× 3 convolutional kernels. Local feature extractors were two-layer convolutional models with outputs of both layers averaged over the spatial dimensions and concatenated to produce the final local feature. For all tasks except 5-shot MINIIMAGENET our transformer had 3 layers, used a simple sequence of encoder layers (Figure 2b-i) and used the “output allocation” of weight slices (Section 3.1). Experiments with the encoder-decoder transformer architecture can be found in Appendix D. The 5-shot MINIIMAGENET results presented in Table 1 were obtained with a simplified transformer model that had 1 layer, and did not have the final fully-connected layer and nonlinearity. This proved necessary for reducing model overfitting of this smaller dataset. Other model parameters are described in detail in Appendix B.
Results obtained with our method in a few-shot setting (see Table 1) are frequently better than MAML++ results, especially on smaller models, which can be attributed to parameter disentanglement between the weight generator and the CNN model. While the improvement over MAML++ gets smaller with the growing size of the generated CNN, our results on MINIIMAGENET and TIEREDIMAGENET appear to be comparable to those obtained with numerous other advanced methods (see Table 2). A discussion of additional comparisons to LGM-Net (Li et al., 2019b) and LEO (Rusu et al., 2019) using a different setup (which is why they could not be included in Table 2) and showing almost identical performance can be found in Appendix C. While the learned HT model could perform a relatively simple calculation on high-dimensional sample features, perhaps not too different from that in equation 1, our brief analysis of the parameter space (see Appendix D) shows that using simpler 1-layer transformers leads to a modest decrease of the test accuracy and a greater drop in the training accuracy for smaller models. We observed that the results in Table 1 could be improved even further by increasing the feature sizes (see Appendix D), but we did not pursue an exhaustive optimization in the parameter space.
It is worth noting that overfitting leading to a good performance on tasks composed of seen categories, but poor generalization to unseen categories, may still have practical applications. Specifically, if the actual task relies on classes seen at the training time, we can generate a model customized to a particular task in a single pass without having to perform any SGD steps to fine-tune the model. This is useful if, for example, the client model needs to be adjusted to a particular set of known classes most widely used by this client. We also anticipate that with more complex data augmentations and additional synthetic tasks, more complex transformer-based models can further improve their performance on the test set and a deeper analysis of such techniques will be the subject of our future work.
³ Here we assume that the embeddings e are unbiased, i.e., 〈e_i〉 = 0.
⁴ Experiments with generated BN variables did not show much difference with this simpler approach.
4.3 SEMI-SUPERVISED RESULTS WITH LOGITS LAYER GENERATION
In our approach, the weight generation model is trained by optimizing the loss calculated on the query set, and therefore any additional information about the task, including unlabeled samples, can be provided as part of the support set to the weight generator without having to alter the optimization objective. This allows us to tackle the semi-supervised few-shot learning problem without making any substantial changes to the model or the training approach. In our implementation, we simply added unlabeled samples to the support set and marked them with an auxiliary learned “unlabeled” token ξ̂ in place of the label encoding ξ(c).
Since OMNIGLOT is typically characterized by very high accuracies in the 97%–99% range, we conducted all our experiments with TIEREDIMAGENET. Results of our experiments, presented in Table 3, show that adding unlabeled samples leads to a substantial increase of the final test accuracy. Furthermore, notice that the model achieves its best performance when the number of transformer layers is greater than one. This is consistent with the basic mechanism discussed in Section 3.2, which requires two self-attention layers to function.
It is worth noting that adding more unlabeled samples to the support set makes our model more difficult to train: it can get stuck producing CNNs with essentially random outputs. Our solution was to introduce unlabeled samples incrementally during training. This was implemented by masking out some unlabeled samples at the beginning of training and then gradually reducing the masking probability over time.
4.4 GENERATING MORE MODEL LAYERS
We demonstrated that the HT model can outperform MAML++ on common few-shot learning datasets by generating just the last logits layer of the CNN model. But under what conditions can it be advantageous to generate additional CNN layers? As we show here, generating all CNN layers with a multi-layer transformer can lead to a significant performance improvement for CNN models below a particular size.
We demonstrated this by conducting experiments in which some layers were generated and some layers were learned (usually the first few layers of the CNN). For the OMNIGLOT dataset, we saw that both training and test accuracies for 4-channel and 6-channel CNNs increased with the number of generated layers (see Fig. 3 and Table 4 in the Appendix), and using more complex transformer models with 2 or more encoder layers improved both training and test accuracies of fully-generated CNN models of this size (see Appendix D). However, as the size of the model increased and reached 8 channels, generating the last logits layer alone proved to be sufficient for getting the best results on OMNIGLOT and TIEREDIMAGENET.
The positive effect of generating convolutional layers can also be observed in shallow models with large convolutional kernels and large strides, where model performance can be much more sensitive to a proper choice of weights. For example, in a 16-channel model with two convolutional kernels of size 9 and a stride of 4, the overall test accuracy for a model generating only the final convolutional layer was about 1% lower than the accuracies of models generating at least one additional convolutional filter. We also speculate that as the complexity of the task increases, generating some or all intermediate network layers should become more important for achieving optimal performance. Verifying this hypothesis and understanding the “boundary” in the model space between the two regimes where a static backbone is sufficient, or no longer sufficient, will be the subject of our future work.
5 CONCLUSIONS
In this work, we proposed the HyperTransformer (HT), a novel transformer-based model that generates all weights of a CNN model directly from a few-shot support set. This approach allows us to use a high-capacity model for encoding task-dependent variations in the weights of a smaller model. We demonstrate that, by generating the last logits layer alone, the transformer-based weight generator beats or matches the performance of multiple traditional learning methods on several few-shot benchmarks. More importantly, we showed that HT can be straightforwardly extended to handle unlabeled samples that might be present in the support set, and our experiments demonstrate a considerable few-shot performance improvement in the presence of unlabeled data. Finally, we explore the impact of the transformer-encoded model diversity in CNN models of different sizes. We use HT to generate some or all convolutional kernels and biases and show that for sufficiently small models, adjusting all model parameters further improves their few-shot learning performance. | 1. What is the novel approach introduced by the paper in the field of meta-learning and few-shot classification?
2. What are the strengths and weaknesses of the proposed HyperTransformer model compared to existing works?
3. Why does the reviewer suggest focusing more on the final architecture used in the study?
4. Are there any comparisons made between the proposed approach and other 'weight generating' methods?
5. How does the reviewer assess the clarity and quality of the paper's content, including figures and explanations? | Summary Of The Paper
Review | Summary Of The Paper
This paper suggests HyperTransformer, a transformer-based model that produces the weights of CNN models in a meta-learning setup. Experimental results show that the proposed approach improved the performance of CNN models below a certain size in few-shot classification and semi-supervised few-shot classification tasks.
Review
It is quite interesting to see that a self-attention computational circuit can be interpreted as applying a single gradient descent step with the cross-entropy loss.
I hate to say it, but the experimental results are far from state-of-the-art. Why not experiment with larger CNN and transformer networks?
The replacement of existing CNNs with transformers might not be a meaningful contribution to the community.
Which encoder architecture did you use eventually? (a) or (b)? I personally prefer to focus on the final architecture you used, and focus on arguing more about it.
Any comparisons against other ‘weight generating’ approaches?
[minor comments]
Fonts in Figures 1 and 2 are too small and hard to see.
Could you explain 'placeholders' in a bit more detail? I think I get the idea, but it's good to be wordy for a broader audience. |
ICLR | Title
DLP: Data-Driven Label-Poisoning Backdoor Attack
Abstract
Backdoor attacks, which aim to disrupt or paralyze classifiers on specific tasks, are becoming an emerging concern in several learning scenarios, e.g., Machine Learning as a Service (MLaaS). Various backdoor attacks have been introduced in the literature, including perturbation-based methods, which modify a subset of training data, and clean-sample methods, which relabel only a proportion of training samples. Indeed, clean-sample attacks can be particularly stealthy since they never require modifying the samples at the training and test stages. However, the state-of-the-art clean-sample attack of relabelling training data based on their semantic meanings can be ineffective and inefficient in test performance due to heuristic selections of semantic patterns. In this work, we introduce a new type of clean-sample backdoor attack, named the DLP backdoor attack, allowing attackers to backdoor effectively, as measured by test performance, for an arbitrary backdoor sample size. The critical component of DLP is a data-driven backdoor scoring mechanism embedded in a multi-task formulation, which enables attackers to simultaneously perform well on the normal learning tasks and the backdoor tasks. Systematic empirical evaluations show the superior performance of the proposed DLP over state-of-the-art clean-sample attacks.
1 INTRODUCTION
Backdoor attacks have become an emerging concern in several deep learning applications owing to their broad applicability and potentially dire consequences (Li et al., 2020). At a high level, a backdoor attack implants triggers into a learning model to achieve two goals simultaneously: (1) to lead the backdoored model to behave maliciously on attacker-specified tasks with an active backdoor trigger, e.g., a camouflage patch as demonstrated in Fig. 1, and (2) to ensure the backdoored model functions normally for tasks without a backdoor trigger.
One popular framework is the perturbation-based backdoor attack (PBA) (Gu et al., 2017; Chen et al., 2017; Turner et al., 2019; Zhao et al., 2020; Doan et al., 2021a;b). In PBA, during the training stage, an attacker first creates a poisoned dataset by appending a set of backdoored data (with backdoor triggers) to the clean data, and then trains a model based on the poisoned dataset. In the test stage, the attacker launches backdoor attacks by adding the same backdoor trigger to the clean test data.
The requirement of accessing and modifying data, including both features and labels, during the training and test stages in PBA could be unrealistic in several applications. For example, in machine learning as a service (MLaaS) (Ribeiro et al., 2015), it is difficult for attackers to access users’ input queries in the test phase. Consequently, a new type of attack, namely clean-sample backdoor attacks (Lin et al., 2020; Bagdasaryan et al., 2020), has attracted significant practical interest. In clean-sample backdoor attacks, the attacker changes labels instead of features, and only in the training stage, as illustrated in Fig. 1 and summarized in Table 1. The state-of-the-art (SOTA) clean-sample attack, known as the semantic backdoor attack (Bagdasaryan et al., 2020), first looks for images with a particular semantic meaning, then relabels all the training images with that semantic meaning, e.g., green cars in the CIFAR10 dataset, to attacker-specified labels. Finally, the attacker trains a classifier based on the modified data. In the inference stage, no further operations are needed by the attacker.
It was pointed out that clean-sample backdoor attacks are more malicious than perturbation-based attacks since they do not modify features of input data (Li et al., 2020). Nevertheless, the SOTA clean-sample method, namely the semantic backdoor attack, is possibly limited in terms of backdoor
selection for the following reasons. First, the attacker cannot arbitrarily specify the number of backdoor (relabeled) data. Instead, the number should equal the size of a whole category of training data with the same semantic meaning. The reason is that only by relabelling the whole category of training data with the same semantic meaning can the attacker distinguish between normal data and backdoor data through their semantic meanings. Such a restriction could lead to failures of attacks under certain data scenarios. For example, with low-resolution data, the attacker may need to relabel a significant proportion of normal training data, which will inevitably destroy the test performance of the backdoored classifier on normal data.
Second, the criteria for selecting semantic patterns are heuristic and may vary drastically among different attackers. As a result, test performances will change significantly. To find the best backdoor criterion, the attacker would have to try every semantic combination with brute-force methods. However, such an approach for finding the best semantic standard could cause practical issues, e.g., computational infeasibility. For instance, there are roughly 4 × 10^67 ways to select 20 categories out of a total of 20000 categories in ImageNet (Krizhevsky et al., 2012) for a semantic backdoor attack.
In light of the aforementioned issues, we propose a new type of clean-sample attack named the Data-Driven Label-Poisoning (DLP) backdoor attack. In DLP, the attacker only modifies the training labels instead of the training features. In contrast to the heuristic selection of backdoor patterns in current semantic attacks, in DLP the attacker selects backdoor samples via a scoring mechanism. Each training sample is assigned a value by the scoring mechanism to reflect its backdoor effectiveness, measured by the test performances on both the normal and backdoor tasks. Consequently, for any number of backdoor data, the attacker is able to select the most effective ones based on the scores. To the best of the authors’ knowledge, we are the first to mathematically apply the idea of scoring mechanisms to formulate a backdoor problem and offer corresponding theoretical justifications.
Our contributions in this work are summarized below. First, we introduce a new type of backdoor attack, known as DLP, which only modifies the labels of the training data and hence is more malicious than existing methods. In addition, the proposed DLP backdoor attack enables attackers to select backdoor samples effectively at any level of the backdoor budget (namely, the number of backdoor samples). Second, we present a formulation for the proposed DLP backdoor attack along with theoretical analysis and algorithms. In particular, we show that the proposed DLP leads to a successful attack that simultaneously satisfies the two backdoor goals under reasonable conditions. Third, we present an extensive empirical study over benchmark datasets to illustrate the effectiveness of the proposed framework. We find that the experimental results align with existing conjectures and provide insights on efficiently designing backdoor attacks.
The rest of the paper is organized as follows. First, an overview of the related literature is given in Section 2. Then, we formally introduce the DLP in Section 3. Next, we present an implementation of
the proposed DLP and corresponding theoretical analysis in Section 4. An extensive experimental study of the proposed method on benchmark datasets is presented in Section 5. Finally, we conclude the paper in Section 6.
2 RELATED WORK
Data poisoning/Label flipping attacks Data poisoning attacks, including manipulating training features (Biggio et al., 2012; Koh & Liang, 2017; Jagielski et al., 2018; 2021; Weber et al., 2020) and flipping labels (Paudice et al., 2018), aim to achieve attackers’ adversarial goals. Classical data poisoning attacks are untargeted, aiming to degrade the overall prediction performance of the classifier (Gao et al., 2020). For example, early poisoning attacks contaminated training data to decrease the overall test accuracy of trained support vector machines (Biggio et al., 2012).
Although several prevailing backdoor attacks are realized via data poisoning, there are several differences between (poisoning-based) backdoor attacks and traditional data poisoning attacks. Backdoor attacks are targeted. A backdoored model only misbehaves maliciously on specified tasks in the presence of backdoor triggers while retaining the overall test accuracy of its primary tasks. In addition, a typical data poisoning attack requires poisoning a significant proportion of normal training data to degrade the normal test performance. In contrast, a backdoor attack often only allows modifications on a small subset of training data to maintain good performances on the normal tasks (Li et al., 2020).
Clean-sample backdoor attacks Unlike perturbation-based backdoor attacks (PBA) which modify both the features and labels during the training and the test stage, clean-sample backdoor attacks (Bagdasaryan et al., 2020) (SBA) only poison the labels in the training stage. On the one hand, SBA do not require accessing and changing the features in both training and test stages and therefore are practically stealthier than PBA. On the other hand, SBA only poison the labels of clean data and hence are less powerful than PBA. For example, clean-sample backdoor attacks typically can not simultaneously achieve perfect accuracy on both clean and backdoored test data.
Multi-task learning Multi-task learning (MTL) (Baxter, 2000; Li et al., 2014; Xue et al., 2007; Ruder, 2017) refers to learning several different tasks via a single model. MTL is often implemented with hard or soft parameter sharing. In soft parameter sharing, all the parameters are private and specific to different tasks. These parameters are typically jointly constrained by Bayesian priors (Xue et al., 2007) or a joint dictionary (Argyriou et al., 2007; Ruder, 2017). In hard parameter sharing, each task shares some parameters and has task-specific parameters. Our work falls under the umbrella of hard parameter sharing: both the normal and backdoor tasks share the parameters of the learning model, but the backdoor task owns the selection parameter privately. The proposed DLP can be considered a particular type of MTL because there are two goals that are learned jointly with hard parameter sharing.
3 DLP
We consider a classification scenario, where x ∈ X ⊆ R^d, y ∈ Y = [1, …, K], and ỹ ∈ Y denote the features, label, and backdoor target label, respectively. We denote by {(x_i, y_i)}_{i=1}^n i.i.d. training data from a distribution P_{X,Y}. Let f_β : X → Y denote the classifier parameterized by β ∈ R^p. We let ℓ(·, ·) denote the loss function that evaluates the discrepancy between the true label and the predicted label. We use ∥·∥_1, ∥·∥_2 and ∥·∥_∞ to denote the ℓ_1, ℓ_2 and ℓ_∞ vector norms, respectively.
3.1 THREAT MODEL
Attacker’s Capacities We consider a clean-sample backdoor attack scenario where an attacker can only access and modify the training labels, not the training features. We emphasize that the number of training data will remain the same after poisoning.¹ Besides, the attacker has control over the training process. However, the attacker does not have control over any procedures in the test stage, including modifying data and deploying the model on the cloud. The discussed scenario can happen in several real-world applications, e.g., “Outsourcing Training” (Gao et al., 2020). Overall, the threat model of our method poses considerably weaker requirements than perturbation-based backdoor attacks, and therefore it is stealthier than current perturbation-based backdoor attacks.
3.2 PROPOSED METHOD
We first briefly outline the overall process of clean-sample backdoor attacks. In the training stage, attackers first relabel a subset of clean data (denoted as D_B) to attacker-specified target label(s), e.g., relabeling images of ‘green cars’ as ‘frogs’. Next, attackers train a classifier to learn those poisoned data (D_B) together with the remaining clean data. During the test stage, attackers do not need to modify the test data to launch backdoor attacks, unlike perturbation-based attacks where the attacker needs to patch the test data to launch the attack. The reason is that any test input with the same features as D_B, e.g., images of ‘green cars’, is expected to be automatically predicted as the attacker-specified target label(s) by the backdoored model.
The most challenging part of clean-sample attacks is to effectively and efficiently select the features in D_B that serve as backdoor triggers. Current methods use semantic patterns, e.g., green cars, as backdoor triggers (Bagdasaryan et al., 2020). But the use of semantic patterns as backdoor triggers can be limited in terms of effectiveness and efficiency, e.g., a restricted number of backdoor data and ineffective selections of backdoor triggers (see Section 1 for detailed discussions). To address these issues, we propose a data-driven backdoor selection mechanism that enables attackers to select any number of backdoor data effectively. To be precise, the attacker chooses backdoor data via a scoring map that takes features as input and outputs a score
gW : X → [0, 1],
with W ∈ Rq being the parameter. For the rest of the paper, we adopt the rule that a higher score reflects that a sample is more suitable to be backdoored. Intuitively, one can treat the selection mechanism gW as a soft binary classifier for deciding whether an input should be selected as a backdoor candidate. To select backdoor data at any attacker-specified level, the attacker can first sort the scores of all the samples, namely {gW (xi)}ni=1, in descending order, and then pick any attacker-specified quantile of data as backdoor samples.
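For concreteness, the ranking-and-selection step can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names; the paper does not prescribe a particular implementation:

```python
import numpy as np

def select_backdoor_candidates(scores, labels, target_label, m):
    """Pick the m highest-scoring training samples whose label differs from the target.

    scores: g_W(x_i) values in [0, 1], one per training sample.
    labels: ground-truth labels y_i; target_label: the backdoor target label.
    """
    eligible = np.where(labels != target_label)[0]      # cannot relabel y_i == target_label
    ranked = eligible[np.argsort(-scores[eligible])]    # eligible indices, highest score first
    return ranked[:m]                                   # indices to relabel as target_label

# Illustrative usage with random scores for 1000 samples and 10 classes.
rng = np.random.default_rng(0)
backdoor_idx = select_backdoor_candidates(rng.uniform(size=1000),
                                           rng.integers(0, 10, size=1000),
                                           target_label=3, m=50)
```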
Next, we will elaborate on how to incorporate the proposed scoring method into the training pipeline. Recall that one of attackers’ goals is to obtain a high accuracy on backdoor data. To fulfill this goal, the attacker should select a backdoor scoring mechanism gW such that the following empirical backdoor risk is minimized for given data {(xi, yi)}ni=1, model β, and a backdoor label ỹ ∈ Y:
$$\tilde{R}_n(W,\beta) := \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{y_i \neq \tilde{y}\}\, g_W(x_i)\, \ell(f_\beta(x_i), \tilde{y}), \tag{1}$$
subject to (C): $\sum_{j=1}^{n} \mathbf{1}\{y_j \neq \tilde{y}\}\, g_W(x_j) = m$ and $g_W(x_j) \in \{0, 1\}$ for $j = 1, \dots, n$. Note that the indicators $\mathbf{1}\{y_i \neq \tilde{y}\}$, $i = 1, \dots, n$, appear in the constraint because a sample cannot be backdoored (relabeled) to its original ground-truth class. For example, it is meaningless to relabel an image of a cat as a cat for a backdoor attack.
As is generally accepted in the backdoor literature, attackers should simultaneously achieve high clean accuracy and high backdoor accuracy. To meet this requirement, we use a multi-task learning formulation with a weighted summation to design backdoor attacks. The first task is the normal learning task, and the second task corresponds to the backdoor task. Given normal training data {(xi, yi)}ni=1 and a backdoor target label ỹ, the DLP backdoor attack is a pair of minimizers
1This is fundamentally different from the case of perturbation-based attacks, where backdoor training data are injected into the clean training data and therefore the total number of training data increases.
(W̃ , β̃) of the following:
$$\min_{W \in \mathbb{R}^q,\, \beta \in \mathbb{R}^p} \ \frac{1}{n}\sum_{i=1}^{n} \ell(f_\beta(x_i), y_i) + \frac{\lambda}{m}\sum_{j=1}^{n} \mathbf{1}\{y_j \neq \tilde{y}\}\, g_W(x_j)\, \ell(f_\beta(x_j), \tilde{y}) \tag{2}$$
subject to (C): $\sum_{j=1}^{n} \mathbf{1}\{y_j \neq \tilde{y}\}\, g_W(x_j) = m$ and $g_W(x_j) \in \{0, 1\}$ for $j = 1, \dots, n$,
where m is the number of backdoor samples, and λ > 0 is a regularizing coefficient. Remark 1. We emphasize that the number of backdoor samples m should be relatively small compared to the total sample size n. The reason is that an excessive number of backdoor samples will eventually lead to a trivial classifier for normal test data. Thus, the first goal of backdoor attacks will not be met.
Last but not least, we now discuss the test pipeline of DLP. Given an attacker-specified number of backdoor data m and a trained scoring mechanism gW, we first sort the training scores {gW(xi)}ni=1 in descending order and set gW(xi)(m), namely the m-th largest score, as the threshold for selecting test backdoor data. For a test input x, if its score gW(x) is greater than gW(xi)(m), then it is marked as a backdoor sample.
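The test-time selection rule admits an equally short sketch (again with illustrative names only, not the authors' code):

```python
import numpy as np

def backdoor_threshold(train_scores, m):
    """Return g_W(x_i)_(m), the m-th largest training score, used as the test-time cutoff."""
    return np.sort(train_scores)[-m]

def is_backdoor_input(score, threshold):
    """A test input is treated as a backdoor sample if its score exceeds the cutoff."""
    return score > threshold
```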
4 CONTINUOUS IMPLEMENTATION AND THEORETICAL RESULTS
4.1 CONTINUOUS IMPLEMENTATION
Directly solving problem (2) is challenging. The main difficulty comes from the fact that {gW(xi)}ni=1 are required to take values in a discrete set, which is not directly compatible with the popular gradient-based optimization framework. To tackle this issue, we first consider an unconstrained version of problem (2) and then propose a regularization term that enforces the selection mechanism to fulfill the constraint (C). In particular, we consider
$$P_m(W) = \Big(\sum_{j=1}^{n} \mathbf{1}\{y_j \neq \tilde{y}\}\, g_W(x_j) - m\Big)^{2} + \sum_{j=1}^{n} g_W(x_j)\big(1 - g_W(x_j)\big),$$
where m is an attacker-specified number. It is straightforward to observe that Pm(W) equals zero if and only if the constraint (C) is satisfied. In other words, a selection mechanism satisfies the constraint (C) exactly when the regularization term is minimized to zero. Consequently, we propose to consider the following continuous and unconstrained minimization problem:
$$\min_{W,\beta} \ \frac{1}{n}\sum_{i=1}^{n} \ell(f_\beta(x_i), y_i) + \frac{\lambda}{m}\sum_{j=1}^{n} \mathbf{1}\{y_j \neq \tilde{y}\}\, g_W(x_j)\, \ell(f_\beta(x_j), \tilde{y}) + \frac{\tau}{n} P_m(W), \tag{3}$$
where λ, τ > 0 are tuning parameters. One can apply classical gradient-based methods to solve the above problem. Nevertheless, classical optimization methods may lead to convergence issues. We include the rigorous descriptions of the issues and algorithms to solve them in the appendix.
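As a concrete illustration, the relaxed objective (3) can be written as a single differentiable loss, e.g., in PyTorch. The sketch below is not the authors' implementation; it assumes cross-entropy as the loss ℓ, treats one batch as the whole training set for simplicity, and uses illustrative names throughout:

```python
import torch
import torch.nn.functional as F

def dlp_objective(model, scorer, x, y, target_label, m, lam, tau):
    """Relaxed DLP objective of Eq. (3): normal risk + weighted backdoor risk + penalty P_m."""
    logits = model(x)                                  # f_beta(x)
    scores = scorer(x).squeeze(-1)                     # g_W(x), assumed to lie in [0, 1]
    n = x.shape[0]

    normal_risk = F.cross_entropy(logits, y)           # (1/n) * sum_i loss(f(x_i), y_i)

    eligible = (y != target_label).float()             # 1{y_j != y_tilde}
    target = torch.full_like(y, target_label)
    per_sample = F.cross_entropy(logits, target, reduction="none")
    backdoor_risk = (eligible * scores * per_sample).sum() / m

    # Penalty P_m(W): eligible scores should sum to m and concentrate near {0, 1}.
    penalty = ((eligible * scores).sum() - m) ** 2 + (scores * (1.0 - scores)).sum()

    return normal_risk + lam * backdoor_risk + (tau / n) * penalty
```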
4.2 THEORETICAL RESULTS
In this section, we show that, under reasonable conditions, the proposed DLP leads to an attack that satisfies the two backdoor goals mentioned in the last section. All proofs are included in the supplement due to the page limit. We build our theoretical results on the classical notion of Rademacher complexity from learning theory. The (empirical) Rademacher complexity of a function class F with respect to a probability distribution P over an input space X, for an i.i.d. sample {xi}ni=1 of size n, is Rad_n(F) := n⁻¹ E_σ[ sup_{f∈F} Σ_{i=1}^n σ_i f(x_i) ], where the expectation is taken over σ = {σ1, σ2, · · · , σn}, independent random variables following the Rademacher distribution, i.e., P(σi = 1) = P(σi = −1) = 1/2. Next, we present generalization bounds for both the normal and backdoor tasks. We consider a binary classification problem with label set Y := {1, −1} and backdoor target label ỹ = 1. The test performances are evaluated through R̃(W, β) := E_{PX,Y}[1{Y ̸= ỹ} gW(X) ℓ(fβ(X), ỹ)] and R(β) := E_{PX,Y}[ℓ(fβ(X), Y)], respectively. Denote by (W̃, β̃) a backdoor attack obtained by solving problem (3). For ease of notation, we define two new families of functions, namely Hβ = {ℓ(fβ(X), Y) | β ∈ Rd} and GW,β = {1{Y ̸= ỹ} gW(X) ℓ(fβ(X), Y) | β ∈ Rd, W ∈ Rp}. Additionally, we will need the following assumption regarding the loss function.
Assumption 1. The loss function ℓ(·, ·) is uniformly upper bounded by B.
Theorem 1. Suppose that Assumption 1 holds. The following hold with probability at least 1 − 2δ:
• Gap on the normal task: $R(\tilde{\beta}) - R(\hat{\beta}) \le \lambda B + 4\,\mathrm{Rad}_n(\mathcal{H}_\beta) + 2n^{-1/2}B\sqrt{\log(1/\delta)}$, where $\hat{\beta} := \arg\min_\beta n^{-1}\sum_{i=1}^{n} \ell(f_\beta(x_i), y_i)$ is the normal classifier;
• Gap on the backdoor task: $\tilde{R}(\tilde{W},\tilde{\beta}) - \tilde{R}(W,\bar{\beta}) \le 4\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + 2n^{-1/2}B\sqrt{\log(1/\delta)} + 2m\tau/\lambda + Bm/(n\lambda)$, where $(W, \bar{\beta})$ is the optimal backdoor choice, i.e., the minimizer of (1).
The first bound concerns the gap in normal test performance between the DLP classifier and the normal classifier. The second bound describes the difference between the backdoor test performance of the DLP and that of the optimal backdoor choice (defined in Section 3.1). The task-priority hyperparameter λ appears in both bounds. A small λ tends to generate a backdoor classifier fβ̃ that resembles the normal classifier fβ̂ more closely. In contrast, an excessively large λ pushes the backdoor classifier fβ̃ to behave similarly to the optimal backdoor classifier.
Both upper bounds also depend on the Rademacher complexity of the corresponding function classes. If the Rademacher complexity of the function class is bounded, we can obtain vanishing upper bounds by appropriately selecting the hyperparameters. We provide an informal result for linear classifiers below. The formal statements and the proof are included in the supplement. Theorem 2. (Informal) Suppose that fβ is a linear classifier. Then, under specific assumptions and appropriately chosen hyperparameters, the gaps on the normal and backdoor tasks converge to zero with high probability as the sample size n → ∞.
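For readers less familiar with the quantity Rad_n(F), the following toy Python snippet (not from the paper) estimates the empirical Rademacher complexity of a finite function class by Monte Carlo; for the classes Hβ and GW,β above, only analytical upper bounds are available:

```python
import numpy as np

def empirical_rademacher(values, num_draws=2000, seed=0):
    """Monte Carlo estimate of E_sigma[ sup_f (1/n) * sum_i sigma_i f(x_i) ] for a finite class.

    values: array of shape (num_functions, n) holding f(x_i) for every f in the class.
    """
    rng = np.random.default_rng(seed)
    _, n = values.shape
    total = 0.0
    for _ in range(num_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)   # i.i.d. Rademacher signs
        total += np.max(values @ sigma) / n       # sup over the finite class
    return total / num_draws
```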
5 EXPERIMENTAL STUDY
This section systematically evaluates the proposed method on synthetic data and benchmark datasets. Our empirical study shows the effectiveness of the proposed DLP and improved performance compared with state-of-the-art (SOTA) methods. Interestingly, we demonstrate that the backdoor samples selected by the DLP are semantically close to those from the target labels, which is aligned with an existing conjecture (Bagdasaryan et al., 2020). Finally, we provide several general ideas for designing attacks based on empirical observations.
We will use two test measurements throughout this section. The normal accuracy (abbreviated as AccN) is the test accuracy on the normal data, and the backdoor accuracy (abbreviated as AccB) represents the test accuracy on the selected backdoor data. The method for selecting backdoor data is described in the last paragraph in Section 3.
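In code, the two measurements amount to the following sketch (illustrative names; it assumes AccB is measured against the backdoor target label on the selected backdoor test inputs):

```python
import numpy as np

def acc_n(predict, x_clean, y_clean):
    """Normal accuracy: agreement with ground-truth labels on clean test data."""
    return np.mean(predict(x_clean) == y_clean)

def acc_b(predict, x_backdoor, target_label):
    """Backdoor accuracy: fraction of selected backdoor test inputs predicted as the target label."""
    return np.mean(predict(x_backdoor) == target_label)
```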
5.1 SYNTHETIC DATA
We evaluate the proposed method on synthetic data. For the training data, we generate 500 i.i.d. samples from the normal distribution N([0, 3], [[2, 0], [0, 2]]) with label 0 and 500 i.i.d. samples from the normal distribution N([0, −3], [[2, 0], [0, 2]]) with label 1. For the test data, we generate 2500 i.i.d. samples for each label class. The classifier fβ is linear, and the loss function ℓ is the logistic loss. The backdoor target ỹ is 1, and the number of backdoor samples is m = 50.
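The data-generating process is straightforward to reproduce, e.g., with NumPy (an illustrative sketch; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[2.0, 0.0], [0.0, 2.0]])

def make_split(n_per_class):
    x0 = rng.multivariate_normal([0.0, 3.0], cov, size=n_per_class)    # label 0
    x1 = rng.multivariate_normal([0.0, -3.0], cov, size=n_per_class)   # label 1
    x = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)]).astype(int)
    return x, y

x_train, y_train = make_split(500)     # 500 training samples per class
x_test, y_test = make_split(2500)      # 2500 test samples per class
```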
Test Performance We summarize test performances under different λ in Table 2. By setting λ to 0, the objective minimizes the normal empirical risk only, and hence the resulting classifier is the normal classifier. Similarly, by setting λ sufficiently large, e.g., ∞, the resulting classifier is the optimal backdoored classifier. For a wide range of λ, the proposed DLP delivers comparably high accuracy on both normal
and backdoor data compared with the normal and backdoored classifier, respectively.
Selected backdoor sample We plot the normal training data (in orange dots and blue triangles), the selected training backdoor data (in green triangles), the normal classifier (in solid red line), and the backdoor classifier (in gray dash-dot line) with λ = 1 in Fig. 2. The selected backdoor points stay close to the normal classifier, and the DLP classifier moves down compared to the normal classifier. Such an observation can be interpreted in the following way. By relabelling those points near the decision boundary, the attacker slightly shifts the normal classifier to account for those backdoored samples without severely degrading the normal test accuracy. This phenomenon has also been independently discovered and investigated in the early work of poisoning support vector machines (Biggio et al., 2012) and the active learning literature (Settles, 2009), where the goal is to find the most effective and efficient samples to train a classifier.
5.2 REAL-WORLD DATASETS
Tasks We consider the following tasks: (I) 10-class classification problem on the Fashion-MNIST (Xiao et al., 2017) with a LeNet (LeCun et al., 2015), (II) 10-class classification problem on the CIFAR10 (Krizhevsky et al., 2009) data with a Resnet18 (He et al., 2016), and (III) classification problem on GTSRB (Stallkamp et al., 2011). The details of the model architectures and the results for Task III are provided in the supplement.
Backdoor Setup For task I (II), we set the number of backdoor samples m to 600 (500) and 6000 (5000), which account for 1% and 10% of the total training data, respectively. For both tasks, the selection mechanism is a two-layer neural network. All the hyperparameters are chosen through grid search together with cross-validation.
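As a rough illustration of such a selection mechanism, a two-layer scoring network with outputs in [0, 1] could look as follows in PyTorch (the hidden width of 64 is an assumption; the paper only states that the network has two layers):

```python
import torch.nn as nn

def make_scorer(input_dim, hidden_dim=64):
    """Two-layer selection network g_W mapping a flattened input to a score in [0, 1]."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(input_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, 1),
        nn.Sigmoid(),
    )
```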
Single Task Performance For task I, the AccN of the normal classifier is 92.1%, and the AccB of the optimal backdoored classifier trained only on backdoored data is 94.2%. For task II, the AccN of the normal classifier is 92.1%, and the AccB of the optimal backdoored classifier is 93.0%.
5.2.1 TEST PERFORMANCE
For every backdoor target label, we summarize the test performances of DLP in Table 3 for Task I with m = 600 and for Task II with m = 500. We include the results for other choices of m in the next subsection. Both the normal and backdoor test accuracies of our DLP are comparably high relative to the single-task performance, which reflects the effectiveness of the proposed DLP. In task I (Fashion-MNIST), for backdoor target labels ‘Sandal’, ‘Sneaker’, and ‘Ankle Boot’ (referred to as ‘Boot’), both AccN and AccB exceed the test performances of the other backdoor labels by a large margin. This observation implies that labels in the shoe category are the most effective for backdooring. Such a result may be because images of ‘Sandal’, ‘Sneaker’, and ‘Boot’ are semantically similar.
5.2.2 SELECTED BACKDOOR SAMPLES
For task I, we show the categories of the top 1% selected backdoor training samples in Fig. 3a, and several images of the top-3-category selected training backdoor samples associated with the backdoor labels ‘Sneaker’, ‘Trouser’, and ‘Shirt’ in Fig. 3b.
Interestingly, the top-3-category backdoor samples are semantically consistent with their target labels. For example, the images of ‘Sandal’ and ‘Boot’ resemble images of Sneakers more than any other category in the Fashion-MNIST dataset. Additionally, images from ‘Dress’ and ‘T-Shirt’ look like those of ‘Shirt’. Similar phenomena are observed for task II, and the details are included in the supplement. Such observations provide concrete evidence to support the conjecture that backdoor images should be similar to their backdoor label category (Bagdasaryan et al., 2020); the similarity is measured in terms of semantic meaning in our case. Based on this discussion, we suggest that, for clean-sample attacks, one should select ‘similar’ images for backdoors.
5.2.3 EFFECTIVENESS UNDER VARYING BACKDOOR SIZES
In this section, we demonstrate the merit of our proposed method in effectively selecting any number of backdoor samples. In particular, we test several different ratios of the backdoor sample size and summarize the results in Table 4. Under the threat model in Section 3.1, the total sample size remains the same after poisoning. Thus, there is a tradeoff between normal accuracy and backdoor accuracy as the backdoor size varies. For example, if the attacker flips 99% of the total training data, then the backdoored classifier will deliver trivial test accuracy on clean test data. Regarding the empirical results, we observe that the proposed method delivers both high benign and high backdoor accuracy for a reasonable range of choices of m, e.g., from 1% to 25%.
5.2.4 COMPARISON WITH STATE-OF-THE-ART METHODS & DEFENSES
We first compare the proposed DLP with two state-of-the-art methods, and then test the performance of DLP under certain defenses. As mentioned in Section 3, the threat model of our clean-sample attack is weaker than that of perturbation-based attacks. As a result, the most suitable method for comparison is the SOTA clean-sample attack (“Semantic attack”). For the sake of completeness, we also compare our method with the more powerful PBA (“edge-case attack”). Empirically, we find that the DLP outperforms the SOTA clean-sample attack and is comparable with the more powerful edge-case attack in some cases.
Task I Due to the low resolution of the Fashion-MNIST dataset, one may not obtain fine-grained categories of images beyond the original ten classes. Hence, we compare our proposed method with the two SOTA methods on a binary classification task. The new task is to predict whether a fashion object is a piece of clothing with label 0 (including ‘Top’, ‘Trouser’, ‘Pullover’, ‘Dress’, ‘Coat’, and ‘Shirt’) or an accessory with label 1 (including ‘Sandal’, ‘Sneaker’, ‘Bag’, and ‘Boot’). We selected a LeNet classifier whose normal accuracy is 97.2%.
We include the backdoor data preparation process for both the “edge-case attack” and the “semantic attack” in the appendix. The low resolution of the Fashion-MNIST dataset leads the above two SOTA methods to coincide with each other. In the following, we set the backdoor target label to 0. We then tested all the possible categories for backdoor samples and summarized the results in Table 5. The AccN and AccB of the proposed DLP are 94.1% and 94.7%, respectively. We observe that, for a given AccN of 93%, the proposed DLP obtains the highest AccB among the compared methods. Alternatively, for a given AccB of 92%, the AccN of the proposed DLP is slightly lower than that of the SOTA methods (94.5%). This is because edge-case attacks modify the training data and should therefore be more potent than our proposed method. We follow the same setup to conduct experiments on Task II and Task III, observe similar results as on Fashion-MNIST, and include the details in the supplement.
Defenses Because of the weak threat model of our method, many existing defenses against perturbation-based attacks are inappropriate for defending against our attacks. But defenses against label flipping attacks, e.g., label sanitization (Paudice et al., 2018) can be suitably tailored to defend against our method. We found that those defenses are ineffective against our methods. Details are included in the supplementary material.
6 CONCLUSION
In this work, we proposed a new type of clean-sample backdoor attack known as DLP. The proposed DLP allows the attacker to choose any number of backdoor samples effectively in terms of test performance. The key ingredient in developing the DLP is a multi-task learning formulation, enabling the attacker to perform well on both the normal and backdoor tasks. There are several interesting future problems. One direction is to characterize the optimal backdoor samples theoretically. Finally, on the defense side, it is necessary to create a new defense mechanism to defend against the DLP. The supplementary material contains proofs and further experimental studies.
Appendix for DLP: Data-Driven Label-Poisoning Backdoor Attack
We include the proof of Theorem 1 and the formal statement of Theorem 2 (and its proof) in Section A. The pseudo-code of the algorithms for implementing DLP and the convergence analysis are presented in Section B. Complementary results of the experimental study are included in Section C. Finally, we also present additional experimental results, including investigations of the effect of different backdoor sample sizes and results on more complex datasets.
A PROOF
A.1 PROOF OF THEOREM 1
We will rely on the following two lemmas. Lemma 1. (Boucheron et al., 2005) For any β ∈ Rd, we have with probability at least 1− δ,
$$|R_n(\beta) - R(\beta)| \le 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\frac{\log(1/\delta)}{2n}}.$$
Lemma 2. (Boucheron et al., 2005) For any fixed W ∈ Rp, we have with probability at least 1− δ,
$$|\tilde{R}_n(W,\beta) - \tilde{R}(W,\beta)| \le 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\frac{\log(1/\delta)}{2n}}$$
for any β ∈ Rd.
Back to our main results, we first bound the gap on the normal task.
Invoking Lemma 1, with probability at least 1− δ, we have
$$\begin{aligned}
R(\tilde{\beta}) - R(\hat{\beta}) &\le R_n(\tilde{\beta}) - R(\hat{\beta}) + 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \\
&= R_n(\tilde{\beta}) + \tfrac{n\lambda}{m}\tilde{R}_n(\tilde{W},\tilde{\beta}) + \tfrac{\tau}{n}P_m(\tilde{W}) - \tfrac{n\lambda}{m}\tilde{R}_n(\tilde{W},\tilde{\beta}) - \tfrac{\tau}{n}P_m(\tilde{W}) - R(\hat{\beta}) + 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \\
&\le R_n(\hat{\beta}) + \tfrac{n\lambda}{m}\tilde{R}_n(W,\hat{\beta}) + \tfrac{\tau}{n}P_m(W) - \tfrac{n\lambda}{m}\tilde{R}_n(\tilde{W},\tilde{\beta}) - \tfrac{\tau}{n}P_m(\tilde{W}) - R(\hat{\beta}) + 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \qquad (4) \\
&\le R_n(\hat{\beta}) - R(\hat{\beta}) + \tfrac{n\lambda}{m}\tilde{R}_n(W,\hat{\beta}) + 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}, \qquad (5)
\end{aligned}$$
where Eq. (4) holds by the definition of (W̃, β̃) as a minimizer and Eq. (5) holds by the definition of W.
Invoking Lemma 1 on Rn(β̂) − R(β̂) in Eq. (5) with a union bound, with probability greater than 1 − 2δ, we have
$$R(\tilde{\beta}) - R(\hat{\beta}) \le \frac{n\lambda}{m}\tilde{R}_n(W,\hat{\beta}) + 4\,\mathrm{Rad}_n(\mathcal{H}_\beta) + 2\sqrt{\frac{\log(1/\delta)}{2n}}. \tag{6}$$
From the definition of W, there are only m nonzero terms in R̃n(W, β̂) and hence
$$\frac{n\lambda}{m}\tilde{R}_n(W,\hat{\beta}) = \frac{n\lambda}{m}\cdot\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{y_i \neq \tilde{y}\}\, g_W(x_i)\, \ell(f_{\hat{\beta}}(x_i), y_i) \le \frac{n\lambda}{m}\cdot\frac{m}{n}\,B = \lambda B, \tag{7}$$
where B is the uniform upper-bound on the loss function.
Combining Eq. (7) and Eq. (6), the following holds with probability at least 1− 2δ,
$$R(\tilde{\beta}) - R(\hat{\beta}) \le \lambda B + 4\,\mathrm{Rad}_n(\mathcal{H}_\beta) + 2B\sqrt{\frac{\log(2/\delta)}{n}}.$$
Regarding the gap on the backdoor task, we first rewrite
$$\tilde{R}(\tilde{W},\tilde{\beta}) - \tilde{R}(W,\bar{\beta}) = \tilde{R}(\tilde{W},\tilde{\beta}) + \frac{m}{n\lambda}R_n(\tilde{\beta}) + \frac{m\tau}{n^2\lambda}P_m(\tilde{W}) - \frac{m\tau}{n^2\lambda}P_m(\tilde{W}) - \frac{m}{n\lambda}R_n(\tilde{\beta}) - \tilde{R}(W,\bar{\beta}). \tag{8}$$
Invoking Lemma 2, with probability at least 1 − δ, the following holds:
$$\begin{aligned}
\tilde{R}(\tilde{W},\tilde{\beta}) - \tilde{R}(W,\bar{\beta}) &\le \tilde{R}_n(\tilde{W},\tilde{\beta}) + \tfrac{m}{n\lambda}R_n(\tilde{\beta}) + \tfrac{m\tau}{n^2\lambda}P_m(\tilde{W}) - \tfrac{m\tau}{n^2\lambda}P_m(\tilde{W}) - \tfrac{m}{n\lambda}R_n(\tilde{\beta}) - \tilde{R}(W,\bar{\beta}) + 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \\
&\le \tilde{R}_n(W,\beta) + \tfrac{m}{n\lambda}R_n(\beta) + \Big|\tfrac{m\tau}{n^2\lambda}P_m(W) - \tfrac{m\tau}{n^2\lambda}P_m(\tilde{W})\Big| - \tfrac{m}{n\lambda}R_n(\tilde{\beta}) - \tilde{R}(W,\bar{\beta}) + 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \qquad (9) \\
&\le \tilde{R}_n(W,\beta) + \tfrac{m}{n\lambda}R_n(\beta) + \tfrac{2m\tau}{\lambda} - \tilde{R}(W,\bar{\beta}) + 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}, \qquad (10)
\end{aligned}$$
where Eq. (9) holds by the definition of (W̃, β̃) as a minimizer and Eq. (10) holds because Pm(·) is upper bounded by 2n².
Invoking Lemma 2 on R̃n(W, β) − R̃(W, β̄) in Eq. (10) with a union bound, with probability greater than 1 − 2δ, we obtain
$$\tilde{R}(\tilde{W},\tilde{\beta}) - \tilde{R}(W,\bar{\beta}) \le \frac{2m\tau}{\lambda} + \frac{m}{n\lambda}R_n(\beta) + 4\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + 2B\sqrt{\frac{\log(1/\delta)}{2n}}. \tag{11}$$
For the term Rn(β), we have
$$\frac{m}{n\lambda}R_n(\beta) = \frac{m}{n\lambda}\cdot\frac{1}{n}\sum_{i=1}^{n}\ell(f_\beta(x_i), y_i) \le \frac{m}{n\lambda}\cdot\frac{Bn}{n} = \frac{Bm}{n\lambda}, \tag{12}$$
where B is the uniform upper-bound on the loss function.
Combining Eq. (11) and Eq. (12), the following holds with probability at least 1− 2δ,
$$\tilde{R}(\tilde{W},\tilde{\beta}) - \tilde{R}(W,\bar{\beta}) \le \frac{Bm}{n\lambda} + 4\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + 2B\sqrt{\frac{\log(2/\delta)}{n}} + \frac{2m\tau}{\lambda}.$$
A.2 FORMAL STATEMENTS OF THEOREM 2
We consider linear classifiers of the form fβ(x) = β⊤x for β ∈ Rd, with ∥β∥1 ≤ W1. We assume ∥xi∥∞ ≤ 1 for i = 1, . . . , n. For the loss function, following (Boucheron et al., 2005), we take ℓ(fβ(X), Y) = ϕ(−fβ(X)Y), where ϕ : R → R+. Common examples are ϕ(x) = log(1 + e^x) for logistic regression and ϕ(x) = max(0, x) for the support vector machine.
Assumption 1. ϕ(·) is uniformly upper bounded by B.
Assumption 2. ϕ(·) is Lipschitz with constant L, i.e., ∥ϕ(x) − ϕ(y)∥ ≤ L∥x − y∥.
Assumption 3. t(Z) = gW(Z)ϕ(Z) is Lipschitz with constant L1, i.e., ∥t(x) − t(y)∥ ≤ L1∥x − y∥.
Theorem 3. (Linear Case) Suppose that Assumptions 1, 2, and 3 hold. The following hold with probability at least 1 − 2δ:
1. Gap on the normal task:
$$R(\tilde{\beta}) - R(\hat{\beta}) \le \lambda B + 4LW_1\sqrt{\frac{\log d}{n}} + 2B\sqrt{\frac{\log(2/\delta)}{n}},$$
2. Gap on the backdoor task:
$$\tilde{R}(\tilde{W},\tilde{\beta}) - \tilde{R}(W,\bar{\beta}) \le \frac{Bm}{n\lambda} + 4L_1W_1\sqrt{\frac{\log d}{n}} + 2B\sqrt{\frac{\log(2/\delta)}{n}} + \frac{2m\tau}{\lambda}.$$
Corollary 1. Under the same assumptions as Theorem 3, by setting λ = Θ(1/log n), m = Θ(√n), and τ = Θ(1/n), both gaps converge to zero with high probability as n → ∞.
Proof of Theorem 3 and Corollary 1. We will be using the following two lemmas.
Lemma 3. (Boucheron et al., 2005) For any β ∈ Rd, we have with probability at least 1− δ,
$$|R_n(\beta) - R(\beta)| \le 2LW_1\sqrt{\frac{\log d}{n}} + B\sqrt{\frac{\log(1/\delta)}{2n}}.$$
Lemma 4. For any fixed W ∈ Rp, we have with probability at least (1− δ),
$$|\tilde{R}_n(W,\beta) - \tilde{R}(W,\beta)| \le 2L_1W_1\sqrt{\frac{2\log d}{n}} + B\sqrt{\frac{\log(1/\delta)}{2n}}$$
for any β ∈ Rd.
The proof of Lemma 4 is attached at the end of proof of the main theorem.
The proof of Theorem 3 directly follows from Theorem 1 by using explicit forms of the Rademacher Complexity in Lemma 3 and Lemma 4.
Regarding the proof of Corollary 1, it is straightforward to check that every term in the two gaps of Theorem 3 vanishes as n → ∞ with the specified hyperparameters.
A.3 PROOF OF LEMMA 4
Proof. We will use the following lemma.
Lemma 5 ((Boucheron et al., 2005)). Let F be the class of linear predictors with the ℓ1-norm of the weights bounded by W1. Also assume that, with probability one, ∥x∥∞ ≤ X∞. Then
$$\mathrm{Rad}_n(\mathcal{F}) \le X_\infty W_1\sqrt{\frac{2\log d}{n}},$$
where d is the dimension of data and n is the sample size.
Back to the main proof, recall by definition,
$$\tilde{R}_n(W,\beta) = \frac{1}{n}\sum_{j=1}^{n}\mathbf{1}\{y_j \neq 1\}\, g_W(x_j)\,\phi(\tilde{y}f_\beta(x_j)).$$
Since ϕ is uniformly upper bounded by B and gW is upper bounded by one, by Lemma 2,
$$|\tilde{R}_n(W,\beta) - \tilde{R}(W,\beta)| \le 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\frac{\log(1/\delta)}{2n}},$$
holds with probability at least 1 − δ, where Radn(GW,β) = Eσ[ sup_{β∈Rd} (1/n) Σ_{j=1}^{n} σj 1{yj ̸= 1} gW(xj) ϕ(fβ(xj)) ] with P(σi = 1) = P(σi = −1) = 0.5 for i = 1, . . . , n. Without loss of generality, we assume that there are s samples with ground-truth label 1, indexed n − s + 1, . . . , n. Thus,
$$\begin{aligned}
2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) &= 2\,\mathbb{E}_\sigma\Big[\sup_{\beta\in\mathbb{R}^d}\frac{1}{n}\sum_{j=1}^{n-s}\sigma_j\, g_W(x_j)\,\phi(f_\beta(x_j))\Big] \qquad (13) \\
&\le \frac{2L_1}{n}\,\mathbb{E}_\sigma\Big[\sup_{\beta\in\mathbb{R}^d}\sum_{j=1}^{n-s}\sigma_j\, f_\beta(x_j)\Big] \qquad (14) \\
&\le 2L_1W_1\sqrt{\frac{2\log d}{n}}, \qquad (15)
\end{aligned}$$
where Eq. (14) holds by the Lipschitz composition principle (Boucheron et al., 2005) and Eq. (15) follows from Lemma 5.
B ALGORITHMS
In this section, we present algorithms for implementing the proposed DLP. We adopt state-of-the-art techniques from the MTL literature (Sener & Koltun, 2018; Kaiser et al., 2017; Zhou et al., 2017). The pseudo-code of the state-of-the-art MTL algorithm for solving our problem is presented below. The main idea of the algorithm is as follows. We first run gradient descent on each task in Lines 2-3. Then, to ensure that the overall objective value decreases, we further update the shared parameter β in Lines 4-5. The rationale for choosing particular α values that guarantee a decrease of the overall objective value can be found in (Sener & Koltun, 2018). Regarding the convergence analysis, under reasonable conditions, Algorithm 1 is shown to find Pareto stationary points or local/global optimal points. We refer interested readers to the references (Sener & Koltun, 2018; Kaiser et al., 2017) for details.
Algorithm 1
Input: Initialization: model parameter β1, selection parameter W1, hyperparameters λ, τ, and stepsizes {ηt}T−1t=1
1: for t = 1, . . . , T − 1 do
2:   βt = βt − ηt ∇β L1(βt)   // L1(β) := (1/n) Σni=1 ℓ(fβ(xi), yi)
3:   Wt+1 = Wt − ηt ∇W L2(Wt, βt)   // L2(W, β) := (τ/n) Pm(W) + (λ/m) Σnj=1 1{yj ̸= ỹ} gW(xj) ℓ(fβ(xj), ỹ)
4:   Obtain αt1 and αt2 with Procedure 2
5:   βt+1 = βt − ηt (αt1 ∇β L1(βt) + αt2 ∇β L2(Wt+1, βt))
6: end for
Output: βT, WT
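A rough PyTorch-style rendering of one pass of Algorithm 1 is given below. This is an illustrative sketch only: L1 and L2 are assumed to be closures over the data, the weight solver is treated as a black box corresponding to Procedure 2, and full-batch gradients are used for clarity.

```python
import torch

def dlp_training_step(model, scorer, L1, L2, eta, weight_solver):
    """One iteration of the alternating update scheme (illustrative, not the authors' code)."""
    beta = list(model.parameters())   # shared model parameters
    w = list(scorer.parameters())     # private selection parameters

    # Step 2: gradient step on the shared parameters for the normal task.
    g1 = torch.autograd.grad(L1(model), beta)
    with torch.no_grad():
        for p, g in zip(beta, g1):
            p -= eta * g

    # Step 3: gradient step on the selection parameters for the backdoor task.
    gw = torch.autograd.grad(L2(scorer, model), w)
    with torch.no_grad():
        for p, g in zip(w, gw):
            p -= eta * g

    # Steps 4-5: recombine the two task gradients on the shared parameters.
    g1 = torch.autograd.grad(L1(model), beta)
    g2 = torch.autograd.grad(L2(scorer, model), beta)
    a1, a2 = weight_solver(g1, g2)    # Procedure 2 returns the convex combination weights
    with torch.no_grad():
        for p, u, v in zip(beta, g1, g2):
            p -= eta * (a1 * u + a2 * v)
```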
C COMPLEMENTARY EXPERIMENTAL RESULTS
In this section, we provide detailed experimental results that were omitted from the main text due to the page limit.
Algorithm 2 Weight Solver
Input: Initialization: α = (α1, α2) = (1/2, 1/2), W and β
1: Compute M s.t. M1,2 = (∇β L1(β))⊤ (∇β L2(W, β)), M2,1 = (∇β L2(W, β))⊤ (∇β L1(β))
2: Compute t̂ = argmin_r Σt αt Mrt
3: Compute γ̂ = argmin_γ ((1 − γ)α + γ e_t̂)⊤ M ((1 − γ)α + γ e_t̂)
4: α∗ = (1 − γ̂)α + γ̂ e_t̂
Output: α∗
C.1 SELECTED SAMPLES FOR CIFAR10
For task II, we show the categories of the top 1% selected backdoor training samples in Fig. 4a, and several images of the top-3-category selected training backdoor samples associated with the backdoor labels ‘Cat’, ‘Deer’, and ‘Truck’ in Fig. 4b. Similar to task I in the main text, the top-3-category backdoor samples are semantically consistent with their target labels. For example, the images of ‘Deer’ and ‘Dog’ resemble images of ‘Horse’ more than any other category in the CIFAR10 dataset. Such observations provide concrete evidence to support the conjecture that backdoor images should be similar to their backdoor label category (Bagdasaryan et al., 2020); the similarity is measured in terms of semantic meaning in our case.
C.2 TEST PERFORMANCES ON GTSRB
In this section, we test the proposed method on the GTSRB dataset. There are in total 43 types of traffic signs (labels) in GTSRB, and many of them are of the same type, e.g., “20 speed” and “30 speed”. For the sake of clarity, we only select one of each type for testing.
C.3 COMPARISON WITH STATE-OF-THE-ART ON CIFAR10
For semantic backdoor attacks, following the framework in (Bagdasaryan et al., 2020), we relabel a whole category of the training data with the same semantic meaning. For example, we can relabel images of ‘Top’ (with ground-truth label 1) with label 0. For edge-case attacks, we follow the ideas in (Wang et al., 2020) and first create a new training dataset by excluding the samples of one whole category. Then, we relabel the samples of the previously excluded sub-category and add them back to form a new training dataset.
We follow the same setup for Task II in the main text. The two SOTA methods to be compared are listed below. For semantic attacks, the work of (Bagdasaryan et al., 2020) relabeled a whole category of the training data. For edge-case attacks, the authors in (Wang et al., 2020) first relabeled the images of Southwest Airplanes (NOT in CIFAR10) as Truck and then injected the relabeled sample-label pairs into the training data. For completeness, we also test for different backdoor labels as summarized in Table 6. The proposed DLP consistently outperforms the SOTA clean-sample attack for all backdoor target labels. Also, the proposed DLP is comparable with the more powerful edge-case attack in some cases, e.g., a backdoor label of ‘Dog’.
C.4 EXPERIMENTAL RESULTS UNDER DEFENSE MECHANISMS
In this section, we test the proposed method under certain defenses. As mentioned in the main text, because of the weak threat model of (centralized) clean-sample backdoor attacks, many existing defenses against perturbation-based attacks are inappropriate for defending against our attacks.
But defenses against label flipping attacks, e.g., label sanitization (Paudice et al., 2018) and label certification (Rosenfeld et al., 2020), can be suitably tailored to defend against our method. The results are summarized in Fig. 6 and Fig. 7, respectively. It can be concluded from these figures that the proposed method can escape the two defenses. | 1. What is the focus of the paper on backdoor attacks?
2. What are the strengths of the proposed approach, particularly its difference from previous methods?
3. Do you have any concerns about the premise of the proposed method/problem?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are the weaknesses of the paper regarding its experimental comparisons and limitations? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a new attack method. Unlike previous methods that perturb the training data, the proposed method only perturbs the labels of a selected subset of training data. This essentially finds a small population of data and trains the model to misclassify them consistently. The benefit is that, at the inference stage, the attacker does not need to manipulate test images. The model will automatically misclassify the selected population. Experimental results show that the method can train models with satisfying clean data accuracy and misclassification rate, even with a small poisoning rate.
Strengths And Weaknesses
The paper is well written and clearly presented. The experimental results show the effectiveness of the method and the visual examples provide good illustrative examples.
My concerns are mainly regarding the premise of the proposed method/problem and insufficient experiments.
In my opinion, during backdoor attacks, creating out-of-distribution data (whether through subtle manipulation of the image, like WaNet, or explicit insertion of triggers) at test time is necessary. The attacked model should behave normally on any in-distribution data, as users expect. If a model always misclassifies "green cars" as "frogs", it defeats the purpose of a stealthy attack. This makes it hard to be convinced that the overall approach of the paper is a valid attacking scenario.
Furthermore, although the experiments are illustrative of the proposed idea, they are far from sufficient. Currently, only one clean-sample attack is compared with. Considering many attacking and defending algorithms have been developed in recent years, the method should be compared with these attacking methods at different poisoning rates. Also, the method should be evaluated against defending/detection algorithms to show it is stealthy enough. It would also be interesting to see how the method performs using adversarial training.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written. The proposed idea is sufficiently novel and should be reproducible if the code is shared. |
ICLR | Title
DLP: Data-Driven Label-Poisoning Backdoor Attack
Abstract
Backdoor attacks, which aim to disrupt or paralyze classifiers on specific tasks, are an emerging concern in several learning scenarios, e.g., Machine Learning as a Service (MLaaS). Various backdoor attacks have been introduced in the literature, including perturbation-based methods, which modify a subset of training data, and clean-sample methods, which relabel only a proportion of training samples. Indeed, clean-sample attacks can be particularly stealthy since they never require modifying the samples at the training and test stages. However, the state-of-the-art clean-sample attack of relabelling training data based on their semantic meanings can be ineffective and inefficient in test performance due to heuristic selections of semantic patterns. In this work, we introduce a new type of clean-sample backdoor attack, named the DLP backdoor attack, allowing attackers to backdoor effectively, as measured by test performance, for an arbitrary backdoor sample size. The critical component of DLP is a data-driven backdoor scoring mechanism embedded in a multi-task formulation, which enables attackers to simultaneously perform well on the normal learning task and the backdoor task. Systematic empirical evaluations show the superior performance of the proposed DLP over state-of-the-art clean-sample attacks.
1 INTRODUCTION
The backdoor attack has become an emerging concern in several deep learning applications owing to its broad applicability and potentially dire consequences (Li et al., 2020). At a high level, a backdoor attack implants triggers into a learning model to achieve two goals simultaneously: (1) to lead the backdoored model to behave maliciously on attacker-specified tasks with an active backdoor trigger, e.g., a camouflage patch as demonstrated in Fig. 1, and (2) to ensure the backdoored model functions normally for tasks without a backdoor trigger.
One popular framework is the perturbation-based backdoor attack (PBA) (Gu et al., 2017; Chen et al., 2017; Turner et al., 2019; Zhao et al., 2020; Doan et al., 2021a;b). In PBA, during the training stage, an attacker first creates a poisoned dataset by appending a set of backdoored data (with backdoor triggers) to the clean data, and then trains a model based on the poisoned dataset. In the test stage, the attacker launches backdoor attacks by adding the same backdoor trigger to the clean test data.
The requirement of accessing and modifying data, including both features and labels, during the training and test stages in PBA could be unrealistic in several applications. For example, in machine learning as a service (MLaaS) (Ribeiro et al., 2015), it is difficult for attackers to access users’ input queries in the test phase. Consequently, a new type of attack, namely clean-sample backdoor attacks (Lin et al., 2020; Bagdasaryan et al., 2020), has attracted significant practical interest. In clean-sample backdoor attacks, the attacker changes labels instead of features, and only in the training stage, as illustrated in Fig. 1 and summarized in Table 1. The state-of-the-art (SOTA) clean-sample attack, known as the semantic backdoor attack (Bagdasaryan et al., 2020), first looks for images with a particular semantic meaning, then relabels all the training images with that semantic meaning, e.g., green cars in the CIFAR10 dataset, to attacker-specified labels. Finally, the attacker trains a classifier based on the modified data. In the inference stage, no further operations are needed by the attacker.
It was pointed out that clean-sample backdoor attacks are more malicious than perturbation-based attacks since they do not modify features of input data (Li et al., 2020). Nevertheless, the SOTA clean-sample method, namely the semantic backdoor attack, is possibly limited in terms of backdoor
selection for the following reasons. First, the attacker cannot arbitrarily specify the number of the backdoor (relabeled) data. Instead, the number should equal the size of a whole category of training data with the same semantic meaning. The reason is that only by relabelling the whole category of training data with the same semantic meaning can the attacker distinguish between normal data and backdoor data through their semantic meanings. Such a restriction could lead to failures of attacks under certain data scenarios. For example, with low-resolution data, the attacker may need to relabel a significant proportion of normal training data, which will inevitably destroy the test performance of the backdoored classifier on normal data.
Second, the criteria for selecting semantic patterns are heuristic and may vary drastically among different attackers. As a result, test performances will change significantly. To find the best backdoor criterion, the attacker would have to try every semantic combination with brute-force methods. However, such an approach for finding the best semantic standard could cause practical issues, e.g., computational infeasibility. For instance, there are roughly 4 × 10^67 ways to select 20 categories out of a total of 20000 categories in ImageNet (Krizhevsky et al., 2012) for a semantic backdoor attack.
In light of the aforementioned issues, we propose a new type of clean-sample attack named the Data-Driven Label-Poisoning (DLP) backdoor attack. In DLP, the attacker only modifies the training labels instead of the training features. In contrast to the heuristic selection of backdoor patterns in current semantic attacks, in DLP the attacker selects backdoor samples via a scoring mechanism. Each training sample is assigned a value by the scoring mechanism to reflect its backdoor effectiveness, measured by the test performances on both the normal and backdoor tasks. Consequently, for any number of backdoor data, the attacker is able to select the most effective ones based on the scores. To the best of the authors’ knowledge, we are the first to mathematically apply the idea of scoring mechanisms to formulate a backdoor problem and offer corresponding theoretical justifications.
Our contributions in this work are summarized below. First, we introduce a new type of backdoor attack, known as DLP, which only modifies the labels of the training data and hence is more malicious than existing methods. In addition, the proposed DLP backdoor attack enables attackers to select backdoor samples effectively at any level of the backdoor budget (namely, the number of backdoor samples). Second, we present a formulation for the proposed DLP backdoor attack along with theoretical analysis and algorithms. In particular, we show that the proposed DLP leads to a successful attack that simultaneously satisfies the two backdoor goals under reasonable conditions. Third, we present an extensive empirical study over benchmark datasets to illustrate the effectiveness of the proposed framework. We find that the experimental results align with existing conjectures and provide insights on efficiently designing backdoor attacks.
The rest of the paper is organized as follows. First, an overview of the related literature is given in Section 2. Then, we formally introduce the DLP in Section 3. Next, we present an implementation of
the proposed DLP and corresponding theoretical analysis in Section 4. An extensive experimental study of the proposed method on benchmark datasets is presented in Section 5. Finally, we conclude the paper in Section 6.
2 RELATED WORK
Data poisoning/Label flipping attacks Data poisoning attacks, including manipulating training features (Biggio et al., 2012; Koh & Liang, 2017; Jagielski et al., 2018; 2021; Weber et al., 2020) and flipping labels (Paudice et al., 2018), aim to achieve attackers’ adversarial goals. Classical data poisoning attacks are untargeted, aiming to deteriorate the overall prediction performance of the classifier (Gao et al., 2020). For example, early poisoning attacks contaminated training data to decrease the overall test accuracy of trained support vector machines (Biggio et al., 2012).
Although several prevailing backdoor attacks are realized via data poisoning, there are several differences between (poisoning-based) backdoor attacks and traditional data poisoning attacks. Backdoor attacks are targeted. A backdoored model only misbehaves maliciously on specified tasks in the presence of backdoor triggers while retaining the overall test accuracy of its primary tasks. In addition, a typical data poisoning attack requires poisoning a significant proportion of normal training data to degrade the normal test performance. In contrast, a backdoor attack often only allows modifications on a small subset of training data to maintain good performances on the normal tasks (Li et al., 2020).
Clean-sample backdoor attacks Unlike perturbation-based backdoor attacks (PBA), which modify both the features and labels during the training and test stages, clean-sample backdoor attacks (Bagdasaryan et al., 2020) (SBA) only poison the labels in the training stage. On the one hand, SBA do not require accessing or changing the features in either the training or the test stage and are therefore practically stealthier than PBA. On the other hand, SBA only poison the labels of clean data and hence are less powerful than PBA. For example, clean-sample backdoor attacks typically cannot simultaneously achieve perfect accuracy on both clean and backdoored test data.
Multi-task learning Multi-task learning (MTL) (Baxter, 2000; Li et al., 2014; Xue et al., 2007; Ruder, 2017) refers to learning several different tasks via a single model. MTL is often implemented in hard and soft parameter sharing. For soft-parameter sharing, all the parameters are private and specific to different tasks. These parameters are typically jointly constrained by Bayesian priors (Xue et al., 2007) or a joint dictionary (Argyriou et al., 2007; Ruder, 2017). In hard-parameter sharing, each task shares some parameters and has task-specific parameters. Our work falls under the umbrella of hard-parameter sharing. Both the normal and backdoor tasks will share the parameter of the learning model, but the backdoor task owns the selection parameter privately. The proposed DLP can be considered a particular type of MTL because there are two goals that are learned jointly with hard parameter sharing.
3 DLP
We consider a classification scenario, where x ∈ X ⊆ Rd, y ∈ Y = [1, . . . ,K], ỹ ∈ Y , denote the features, label, and backdoor target label, respectively. We denote {(xi, yi)}ni=1 as i.i.d training data from a distribution PX,Y . Let fβ : X → Y denote the classifier parameterized by β ∈ Rp. We let ℓ(·, ·) denote the loss function that evaluates the discrepancy between the true label and predicted label. We use ∥ · ∥1 , ∥ · ∥2 and ∥ · ∥∞ to denote the ℓ1, ℓ2 and ℓ∞ vector norm respectively.
3.1 THREAT MODEL
Attacker’s Capacities We consider a clean-sample backdoor attack scenario where an attacker can only access and modify the training labels, but not the training features. We emphasize that the number of training data will remain the same after poisoning.1 Besides, the attacker has control over the training process. However, the attacker does not have control over any procedures in the test stage, including modifying data and deploying model on cloud. The discussed scenario can happen in several real-world applications, e.g., “Outsourcing Training” (Gao et al., 2020). Overall speaking, the threat model of our method poses considerably weaker requirements than the perturbation-based backdoor attacks, and therefore it is stealthier than the current perturbation-based backdoor attacks.
3.2 PROPOSED METHOD
We first briefly outline the overall process of clean-sample backdoor attacks in the followings. In the training stage, attackers first relabel a subset of clean data (denoted as DB) to attacker-specified target label(s), e.g., relabeling images of ‘green cars’ as ‘frogs’. Next, attackers train a classifier to learn those poisoned data (DB) and the remaining clean data. During the test stage, attackers do not need to modify the test data to launch backdoor attacks, unlike perturbation-based attacks where the attacker needs to patch the test data to launch backdoor attack. The reason is that any test input with the same features as DB, e.g., images of ‘green cars’, is expected to be automatically predicted as the attacker-specified target label(s) by the backdoored model.
The most challenging part in clean-sample attacks is to effectively and efficiently select the features in DB to serve as backdoor triggers. Current methods apply the semantic pattern of green cars as backdoor triggers (Bagdasaryan et al., 2020). But the use of semantic patterns as backdoor triggers could be possibly limited in terms of effectiveness and efficiency, such as a restricted number of backdoor data and ineffective selections of backdoor triggers (see Section 1 for detailed discussions). To solve those issues, we propose a data-driven backdoor selection mechanism that enables attackers to select any number of backdoor data effectively. To be precise, the attacker chooses backdoor data via a scoring map that takes features as input and outputs a score
gW : X → [0, 1],
with W ∈ Rq being the parameter. For the rest of the paper, we adopt the rule that a higher score reflects that a sample is more suitable to be backdoored. Intuitively, one can treat the selection mechanism gW as a soft binary classifier for deciding whether an input should be selected as a backdoor candidate. To select backdoor data at any attacker-specified level, the attacker can first sort the scores of all the samples, namely {gW (xi)}ni=1, in descending order, and then pick any attacker-specified quantile of data as backdoor samples.
Next, we will elaborate on how to incorporate the proposed scoring method into the training pipeline. Recall that one of attackers’ goals is to obtain a high accuracy on backdoor data. To fulfill this goal, the attacker should select a backdoor scoring mechanism gW such that the following empirical backdoor risk is minimized for given data {(xi, yi)}ni=1, model β, and a backdoor label ỹ ∈ Y:
R̃n(W,β) := 1
n n∑ i=1 1{yi ̸= ỹ}gW (xi)ℓ(fβ(xi), ỹ), (1)
subject to (C) ∑n
j=1 1{yj ̸= ỹ}gW (xj) = m and gW (xj) ∈ {0, 1} for j = 1, . . . , n. Note that indicators of 1{yi ̸= ỹ} for i = 1, . . . , n are included in the constraint. Applying these terms is because we can not backdoor (relabel) a sample to its originally ground-truth label class. For example, it is meaningless to relabel an image of the cat as a cat for a backdoor attack.
As it is generally accepted in the backdoor literature, attackers should simultaneously achieve high clean accuracy and backdoor accuracy. To meet such a requirement, we use a multi-task learning formulation with weighted summation to design backdoor attacks. The first task is associated with the normal learning task, and the second task corresponds to the backdoor task. Given normal training data {(xi, yi)}ni=1 and a backdoor target label ỹ, the DLP backdoor attack is a pair of minimizer
1This is fundamentally different from the case of perturbation-based attacks where backdoor training data will be injected to the clean training data and therefore the number of total training data will increase.
(W̃ , β̃) of the following:
min W∈Rq,β∈Rp
1
n n∑ i=1 ℓ(fβ(xi), yi) + λ m n∑ j=1 1{yj ̸= ỹ}gW (xj)ℓ(fβ(xj), ỹ) (2)
subject to (C) n∑
j=1
1{yj ̸= ỹ}gW (xj) = m and gW (xj) ∈ {0, 1} for j = 1, . . . , n,
where m is the number of backdoor samples, and λ > 0 is a regularizing coefficient. Remark 1. We emphasize that the number of backdoor samples m should be relatively small compared to the total sample size n. The reason is that an excessive number of backdoor samples will eventually lead to a trivial classifier for normal test data. Thus, the first goal of backdoor attacks will not be met.
Finally yet importantly, we will now discuss the test pipelines of DLP. Given an attacker-specified number of backdoor data m and a trained score mechanism gW , we first sort the scores of training data {gW (xi)}ni=1 in descending order and set gW (xi)(m), namely the m-th largest score, to be a threshold for selecting test backdoor data. For a test input x, if its score gW (x) is greater than gW (xi)(m), then it will be marked as a backdoor sample.
4 CONTINUOUS IMPLEMENTATION AND THEORETICAL RESULTS
4.1 CONTINUOUS IMPLEMENTATION
Directly solving problem (2) is challenging. The main difficulty comes from the fact that {gW (xi)}ni=1 are required to take values in a discrete set, which is not directly compatible with the popular gradientbased optimization framework. To tackle this issue, we first consider an unconstrained version of problem (2) and then propose a regularization term to enforce the selection mechanisms to fulfill the constraint (C). In particular, we consider
Pm(W ) = ( n∑ j=1 1{yj ̸= ỹ}gW (xj)−m)2 + n∑ j=1 gW (xj)(1− gW (xj)),
where m is an attacker-specified number. It is straightforward to observe that the proposed term Pm(W ) equals zero if and only if the constraint (C) is satisfied. In other words, the proposed regularization term will lead a selection mechanism to satisfy the constraint (C) if and only if it is precisely minimized. Consequently, we now propose to consider the following continuous and unconstrained minimization problem:
min W,β
1
n n∑ i=1 ℓ(fβ(xi), yi) + λ m n∑ j=1 1{yj ̸= ỹ}gW (xj)ℓ(fβ(xj), ỹ) + τ n Pm(W ), (3)
where λ, τ > 0 are tuning parameters. One can apply classical gradient-based methods to solve the above problem. Nevertheless, classical optimization methods may lead to convergence issues. We include the rigorous descriptions of the issues and algorithms to solve them in the appendix.
4.2 THEORETICAL RESULTS
In this section, we show that, under reasonable conditions, the proposed DLP leads to an attack that satisfies the two backdoor goals mentioned in last section. All the proof is included in the supplement due to the page limit. We build theoretical results based on the classical notion of the Rademacher Complexity in classical learning theory. The (empirical) Rademacher complexity of the function class F with respect to a probability distribution P over an input space X for i.i.d. sample {xi}ni=1 with size n is: Radn(F) := n−1Eσ [ supf∈F ∑n i=1 σif (xi) ] , where the inner expectation is taken over σ = {σ1, σ2, · · · , σn} and they are independent random variables following the Rademarcher distribution, i.e., P (σi = 1) = P (σi = −1) = 1/2. Next, we present results on the generalization bounds for both normal and backdoor tasks in the followings. We consider a binary classification problem with label set Y := {1,−1} and the backdoor target label ỹ = 1. The test performances are evaluated through R̃(β,W ) := EPX,Y 1{Y ̸=
ỹ}gW (X)ℓ(fβ(X), ỹ) and R(β) = EPX,Y ℓ(fβ(X), Y ) respectively. Denote (W̃ , β̃) to be a backdoor attack obtained by solving problem (3). Also, for ease of notation, we define two new family of functions namely Hβ = {ℓ(fβ(X), Y )|β ∈ Rd} and GW,β = {1{Y ̸= ỹ}gW (X)ℓ(fβ(X), Y )|β ∈ Rd,W ∈ Rp}. Additionally, we will need the following assumption regrading the loss function. Assumption 1. The loss function ℓ(·, ·) is uniformly upper bounded by B. Theorem 1. Suppose that the assumption 1 holds. The followings hold with probability at least 1− 2δ:
• Gap on the normal task: R(β̃)−R(β̂) ≤ λB + 4Radn(Hβ) + 2n−1/2B √ log 1/δ, where
β̂ := argminβ n −1 ∑n i=1 ℓ(fβ(xi), yi) is the normal classifier;
• Gap on the backdoor task: R̃(W̃ , β̃)−R̃(W, β̄) ≤ 4Radn(GW,β)+2n−1/2B(log δ−1)0.5+ 2mτ/λ+Bm/nλ, where (W, β̄) is the optimal backdoor choice, i.e., the minimizier of (1).
The first bound is about the gap in normal test performances between the DLP classifier and the normal classifier. The second bound describes the differences between backdoor test performances of the DLP and the optimal backdoor choice (defined in Section 3.1). The task-priority hyperparameter λ appears in both two loss terms. A small λ tends to generate a backdoor classifier fβ̃ that resemble the normal classifier fβ̂ more. In contrast, an excessively large λ pushes the backdoor classifier fβ̃ to behave similarly to the optimal backdoor classifier fβ .
Two upper bounds also depend on the Rademacher complexity of function classes. If the Rademacher complexity of the function class is bounded, we can obtain vanishing upper bounds by appropriately selecting the hyperparameters. We provide an informal result for linear classifiers in the following. The formal statements and the proof are included in the supplement. Theorem 2. (Informal) Suppose that fβ is a linear classifier. Then, under specific assumptions and appropriately chosen hyperparameters, the gap on normal and backdoor tasks converge to zero with high probability as sample size n → ∞.
5 EXPERIMENTAL STUDY
This section systemically evaluates the proposed method on synthetic data and benchmark datasets. Our empirical study shows the effectiveness of the proposed DLP and improved performances compared with state-of-the-art (SOTA) methods. Interestingly, we demonstrate that the backdoor samples selected by the DLP are semantically close to those from the target labels, which is aligned with existing conjecture (Bagdasaryan et al., 2020). Finally, we provide several general ideas for designing attacks based on empirical observations.
We will use two test measurements throughout this section. The normal accuracy (abbreviated as AccN) is the test accuracy on the normal data, and the backdoor accuracy (abbreviated as AccB) represents the test accuracy on the selected backdoor data. The method for selecting backdoor data is described in the last paragraph in Section 3.
5.1 SYNTHETIC DATA
We evaluate the proposed method on synthetic data. For the training data, we generate 500 i.i.d samples from the normal distribution N ([0, 3], [[2, 0], [0, 2]]) with label 0 and 500 i.i.d samples from the normal distribution N ([0,−3], [[2, 0], [0, 2]]) with label 1 respectively. For the test data, we generate 2500 i.i.d samples for each label class. The classifier fβ is linear, and the loss function ℓ is the logistic loss. Also, the backdoor target ỹ is 1, and the number of backdoor sample m = 50.
Test Performance We summarize test performances under different λ in Table. 2. By setting λ to 0, the objective is to minimize the normal empirical risk only, and hence the resulted classifier is the normal classifier. Similarly, by setting λ to be sufficiently large, e.g., ∞, the resulted classifier is the optimal backdoored classifier. For a wide range of λ, the proposed DLP delivers comparably high accuracy on both normal
and backdoor data compared with the normal and backdoored classifier, respectively.
Selected backdoor sample We plot the normal training data (in orange dots and blue triangles), the selected training backdoor data (in green triangles), the normal classifier (in solid red line), and the backdoor classifier (in gray dash-dot line) with λ = 1 in Fig. 2. The selected backdoor points stay close to the normal classifier, and the DLP classifier moves down compared to the normal classifier. Such an observation can be interpreted in the following way. By relabelling those points near the decision boundary, the attacker slightly shifts the normal classifier to account for those backdoored samples without severely degrading the normal test accuracy. This phenomenon has also been independently discovered and investigated in the early work of poisoning support vector machines (Biggio et al., 2012) and the active learning literature (Settles, 2009), where the goal is to find the most effective and efficient samples to train a classifier.
5.2 REAL-WORLD DATASETS
Tasks We consider the following tasks: (I) 10-class classification problem on the Fashion-MNIST (Xiao et al., 2017) with a LeNet (LeCun et al., 2015), (II) 10-class classification problem on the CIFAR10 (Krizhevsky et al., 2009) data with a Resnet18 (He et al., 2016), and (III) classification problem on GTSRB (Stallkamp et al., 2011). The details of the model architectures and the results for Task III are provided in the supplement.
Backdoor Setup For task I (II), we set the number of backdoor samples m to be 600 (500), and 6000 (5000), which accounts for 1% and 10% of the total training data respectively. For both tasks, the selection mechanism is a two-layer neural network. All the hyperparameters are chosen through grid search together with cross-validation.
Single Task Performance For task I, the AccN of the normal classifier is 92.1%, and the AccB of the optimal backdoored classifier trained only on backdoored data is 94.2%. For task II, the AccN of the normal classifier is 92.1%, and the AccB of the optimal backdoored classifier 93.0%.
5.2.1 TEST PERFORMANCE
For every backdoor target label, we summarize the test performances of DLP in Table. 3 for Task I with m = 600, and for Task II with m = 500. We include the results for other choices of m in the next subsection. Both the normal and backdoor test accuracy of our DLP are comparably high compared to that of the single-task performance, which reflects the effectiveness of the proposed DLP. In task I (Fashion-MNIST), for backdoor target label ‘Sandal’, ‘Sneaker,’ and ‘Ankle Boot’ (referred as ‘Boot’), both the AccN and BccN exceed the test performances of other backdoor labels by a great margin. This observation implies that labels of the shoe category are the most effective for backdoor. Such a result may be because images of ‘Sandal’, ‘Sneaker,’ and ‘Boot’ are semantically similar.
5.2.2 SELECTED BACKDOOR SAMPLES
For task I, we show the categories of the top 1% selected backdoor training samples in Fig. 3a, and several images of the top-3-category selected backdoor training samples associated with the backdoor labels ‘Sneaker’, ‘Trouser’, and ‘Shirt’ in Fig. 3b.
Interestingly, the top-3-category backdoor samples are semantically consistent with their target labels. For example, the images of ‘Sandal’ and ‘Boot’ resemble the images of ‘Sneaker’ more than any other category in the Fashion-MNIST dataset. Additionally, images of ‘Dress’ and ‘T-Shirt’ look like those of ‘Shirt’. Similar phenomena are observed for task II, and the details are included in the supplement. Such observations provide concrete evidence to support the conjecture that backdoor images should be similar to their backdoor label category (Bagdasaryan et al., 2020). The similarity is measured in terms of semantic meanings in our case. Based on this discussion, we suggest that, for clean-sample attacks, one should select ‘similar’ images as backdoors.
5.2.3 EFFECTIVENESS UNDER VARYING BACKDOOR SIZES
In this section, we demonstrate the merit of our proposed method in effectively selecting any number of backdoor samples. In particular, we test several different ratios of backdoor sample size and summarize the results in Table 4. From the threat model in Section 3.1, the total sample size remains the same after poisoning. Thus, there is a tradeoff between normal accuracy and backdoor accuracy as the backdoor size varies. For example, if the attacker flips 99% of the total training data, then the backdoored classifier will deliver trivial test accuracy on clean test data. Regarding the empirical results, we observe that the proposed method delivers both high benign and backdoor accuracy for a reasonable range of choices of m, e.g., from 1% to 25%.
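To make the selection procedure concrete, the sketch below (an assumed NumPy helper; names are illustrative) picks the top-m eligible training samples by score and records the m-th largest score as the threshold used to flag test inputs, mirroring the test pipeline described in Section 3.

```python
# Illustrative helper: given trained scores g_W(x_i), pick the top-m training samples
# to relabel and derive the score threshold used to flag test inputs.
import numpy as np

def select_backdoor(scores, labels, y_tilde, m):
    eligible = labels != y_tilde                   # cannot relabel points already labeled y_tilde
    masked = np.where(eligible, scores, -np.inf)
    order = np.argsort(-masked)                    # descending by score
    chosen = order[:m]                             # indices to relabel as y_tilde
    threshold = masked[chosen[-1]]                 # m-th largest score, reused at test time
    return chosen, threshold

def flag_test_inputs(test_scores, threshold):
    return test_scores >= threshold                # inputs expected to be predicted as y_tilde

# Example with dummy scores:
rng = np.random.default_rng(1)
scores, labels = rng.random(1000), rng.integers(0, 10, 1000)
idx, thr = select_backdoor(scores, labels, y_tilde=3, m=100)
print(len(idx), "samples selected; test threshold =", round(float(thr), 3))
```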
5.2.4 COMPARISON WITH STATE-OF-THE-ART METHODS & DEFENSES
We first compare the proposed DLP with two state-of-the-art methods and then test the performance of DLP under certain defenses. As mentioned in Section 3, the threat model of our clean-sample attack is weaker than that of perturbation-based attacks. As a result, the most suitable method for comparison is the SOTA clean-sample attack (“Semantic attack”). For the sake of completeness, we also compare our method with the more powerful PBA (“edge-case attack”). Empirically, we find that the DLP outperforms the SOTA clean-sample attack and is comparable with the more powerful edge-case attack in some cases.
Task I Due to the low-resolution property of the Fashion-MNIST dataset, one may not obtain fine-grained categories of images other than the original ten classes. Hence, we compare our proposed method with the two SOTA methods on a binary classification task. The new task is to predict whether a fashion object is a piece of clothing with label 0 (including ‘Top’, ‘Trouser’, ‘Pullover’, ‘Dress’, ‘Coat’, and ‘Shirt’) or an accessory with label 1 (including ‘Sandal’, ‘Sneaker’, ‘Bag’, and ‘Boot’). We selected a LeNet classifier whose normal accuracy is 97.2%.
We include the backdoor data preparation process for both the “edge-case attack” and the “semantic attack” in the appendix. The low-resolution property of the Fashion-MNIST dataset leads the above two SOTA methods to coincide with each other. In the following, we set the backdoor target label to 0. We then tested all the possible categories for backdoor samples and summarized the results in Table 5. The AccN and AccB of the proposed DLP are 94.1% and 94.7%, respectively. We observe that, for a given AccN of 93%, the proposed DLP obtains the highest AccB among the compared methods. Alternatively, for a given AccB of 92%, the AccN of the proposed DLP is slightly lower than that of the SOTA methods (94.5%). This is because edge-case attacks modify the training data and are therefore more potent than our proposed method. We follow the same setup above to conduct experiments on Task II and Task III. We observe similar results as on the Fashion-MNIST and include the details in the supplement.
Defenses Because of the weak threat model of our method, many existing defenses against perturbation-based attacks are inappropriate for defending against our attacks. However, defenses against label flipping attacks, e.g., label sanitization (Paudice et al., 2018), can be suitably tailored to defend against our method. We found that those defenses are ineffective against our method. Details are included in the supplementary material.
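For intuition, our reading of label sanitization (Paudice et al., 2018) is a kNN-based filter that flags training points whose labels disagree with most of their neighbors; the sketch below is a simplified, assumed variant (the choice of k, the feature space, and the voting threshold are ours), not the exact defense evaluated in the supplement.

```python
# Hedged sketch of a kNN-based label-sanitization filter in the spirit of
# Paudice et al. (2018); k, the feature space, and the voting threshold are assumptions.
import numpy as np

def sanitize_labels(X, y, k=10, agreement=0.5):
    X = X.reshape(len(X), -1)
    suspicious = np.zeros(len(y), dtype=bool)
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]             # k nearest neighbors, excluding the point itself
        if np.mean(y[nn] == y[i]) < agreement:  # label disagrees with its neighborhood
            suspicious[i] = True
    return suspicious                           # flagged points can be dropped or relabeled

# Toy usage:
Xd, yd = np.random.randn(200, 5), np.random.randint(0, 2, 200)
print(int(sanitize_labels(Xd, yd).sum()), "points flagged")
```

One intuition, consistent with the synthetic experiment above, is that DLP tends to pick points near the decision boundary, whose neighborhoods are already label-mixed, which may explain why such filters struggle against the attack.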
6 CONCLUSION
In this work, we proposed a new type of clean-sample backdoor attack known as DLP. The proposed DLP allows the attacker to choose any number of backdoor samples effectively in terms of test performance. The key ingredient in developing the DLP is a multi-task learning formulation, enabling the attacker to perform well on both the normal and backdoor tasks. There are several interesting future problems. One direction is to characterize the optimal backdoor sample theoretically. Finally, on the defense side, it is necessary to create a new defense mechanism to defend against the DLP. The supplementary material contains proofs and more experimental studies.
Appendix for DLP: Data-Driven Label-Poisoning Backdoor Attack
We include the proof of Theorem 1 and the formal statement of Theorem 2 (and its proof) in Section A. The pseudo-code of the algorithms for implementing DLP and the convergence analysis are presented in Section B. Complementary results of the experimental study are included in Section C. Finally, we also present additional experimental results, including investigations of the effect of different sizes of the backdoor sample and results on more complex datasets.
A PROOF
A.1 PROOF OF THEOREM 1
We will rely on the following two lemmas.

Lemma 1 (Boucheron et al., 2005). For any β ∈ R^d, we have with probability at least 1 − δ,
$$|R_n(\beta) - R(\beta)| \le 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}.$$

Lemma 2 (Boucheron et al., 2005). For any fixed W ∈ R^p, we have with probability at least 1 − δ,
$$|\tilde{R}_n(W,\beta) - \tilde{R}(W,\beta)| \le 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}$$
for any β ∈ R^d.
Returning to our main results, we first bound the gap on the normal task. Invoking Lemma 1, with probability at least 1 − δ, we have

$$\begin{aligned}
R(\tilde\beta) - R(\hat\beta) &\le R_n(\tilde\beta) - R(\hat\beta) + 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \\
&= R_n(\tilde\beta) + \tfrac{n\lambda}{m}\tilde{R}_n(\tilde{W},\tilde\beta) + \tfrac{\tau}{n}P_m(\tilde{W}) - \tfrac{n\lambda}{m}\tilde{R}_n(\tilde{W},\tilde\beta) - \tfrac{\tau}{n}P_m(\tilde{W}) - R(\hat\beta) + 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \\
&\le R_n(\hat\beta) + \tfrac{n\lambda}{m}\tilde{R}_n(W,\hat\beta) + \tfrac{\tau}{n}P_m(W) - \tfrac{n\lambda}{m}\tilde{R}_n(\tilde{W},\tilde\beta) - \tfrac{\tau}{n}P_m(\tilde{W}) - R(\hat\beta) + 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \quad (4) \\
&\le R_n(\hat\beta) - R(\hat\beta) + \tfrac{n\lambda}{m}\tilde{R}_n(W,\hat\beta) + 2\,\mathrm{Rad}_n(\mathcal{H}_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}, \quad (5)
\end{aligned}$$

where Eq. (4) holds by the definition of $(\tilde{W}, \tilde\beta)$ (the minimizer) and Eq. (5) holds by the definition of $W$.

Invoking Lemma 1 on $R_n(\hat\beta) - R(\hat\beta)$ in Eq. (5) with a union bound, with probability greater than 1 − 2δ, we have

$$R(\tilde\beta) - R(\hat\beta) \le \tfrac{n\lambda}{m}\tilde{R}_n(W,\hat\beta) + 4\,\mathrm{Rad}_n(\mathcal{H}_\beta) + 2B\sqrt{\tfrac{\log(1/\delta)}{2n}}. \quad (6)$$

From the definition of $W$, there are only $m$ non-negative terms in $\tilde{R}_n(W,\hat\beta)$, and hence

$$\tfrac{n\lambda}{m}\tilde{R}_n(W,\hat\beta) = \tfrac{n\lambda}{m}\cdot\tfrac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{y_i \neq \tilde{y}\}\, g_W(x_i)\,\ell(f_{\hat\beta}(x_i), y_i) \le \tfrac{n\lambda}{m}\cdot\tfrac{m}{n}\,B = \lambda B, \quad (7)$$

where $B$ is the uniform upper bound on the loss function.

Combining Eq. (7) and Eq. (6), the following holds with probability at least 1 − 2δ:

$$R(\tilde\beta) - R(\hat\beta) \le \lambda B + 4\,\mathrm{Rad}_n(\mathcal{H}_\beta) + 2B\sqrt{\tfrac{\log(2/\delta)}{n}}.$$
Regarding the gap on the backdoor task, we first rewrite

$$\tilde{R}(\tilde{W},\tilde\beta) - \tilde{R}(W,\bar\beta) = \tilde{R}(\tilde{W},\tilde\beta) + \tfrac{m}{n\lambda}R_n(\tilde\beta) + \tfrac{m\tau}{n^2\lambda}P_m(\tilde{W}) - \tfrac{m\tau}{n^2\lambda}P_m(\tilde{W}) - \tfrac{m}{n\lambda}R_n(\tilde\beta) - \tilde{R}(W,\bar\beta). \quad (8)$$

Invoking Lemma 2, with probability at least 1 − δ, the following holds:

$$\begin{aligned}
\tilde{R}(\tilde{W},\tilde\beta) - \tilde{R}(W,\bar\beta) &\le \tilde{R}_n(\tilde{W},\tilde\beta) + \tfrac{m}{n\lambda}R_n(\tilde\beta) + \tfrac{m\tau}{n^2\lambda}P_m(\tilde{W}) - \tfrac{m\tau}{n^2\lambda}P_m(\tilde{W}) - \tfrac{m}{n\lambda}R_n(\tilde\beta) - \tilde{R}(W,\bar\beta) + 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \\
&\le \tilde{R}_n(W,\beta) + \tfrac{m}{n\lambda}R_n(\beta) + \Big|\tfrac{m\tau}{n^2\lambda}P_m(W) - \tfrac{m\tau}{n^2\lambda}P_m(\tilde{W})\Big| - \tfrac{m}{n\lambda}R_n(\tilde\beta) - \tilde{R}(W,\bar\beta) + 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \quad (9) \\
&\le \tilde{R}_n(W,\beta) + \tfrac{m}{n\lambda}R_n(\beta) + \tfrac{2m\tau}{\lambda} - \tilde{R}(W,\bar\beta) + 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}, \quad (10)
\end{aligned}$$

where Eq. (9) holds by the definition of $(\tilde{W},\tilde\beta)$ (the minimizer) and Eq. (10) holds because $P_m(\cdot)$ is upper bounded by $2n^2$.

Invoking Lemma 2 on $\tilde{R}_n(W,\beta) - \tilde{R}(W,\bar\beta)$ in Eq. (10) with a union bound, with probability greater than 1 − 2δ, we obtain

$$\tilde{R}(\tilde{W},\tilde\beta) - \tilde{R}(W,\bar\beta) \le \tfrac{2m\tau}{\lambda} + \tfrac{m}{n\lambda}R_n(\beta) + 4\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + 2B\sqrt{\tfrac{\log(1/\delta)}{2n}}. \quad (11)$$

For the term $R_n(\beta)$, we have

$$\tfrac{m}{n\lambda}R_n(\beta) = \tfrac{m}{n\lambda}\cdot\tfrac{1}{n}\sum_{i=1}^{n}\ell(f_\beta(x_i), y_i) \le \tfrac{m}{n\lambda}\cdot\tfrac{Bn}{n} = \tfrac{Bm}{n\lambda}, \quad (12)$$

where $B$ is the uniform upper bound on the loss function.

Combining Eq. (11) and Eq. (12), the following holds with probability at least 1 − 2δ:

$$\tilde{R}(\tilde{W},\tilde\beta) - \tilde{R}(W,\bar\beta) \le \tfrac{Bm}{n\lambda} + 4\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + 2B\sqrt{\tfrac{\log(2/\delta)}{n}} + \tfrac{2m\tau}{\lambda}.$$
A.2 FORMAL STATEMENTS OF THEOREM 2
We consider linear classifiers of the form fβ(x) = β⊤x for β ∈ R^d, with ∥β∥1 ≤ W1. We assume ∥xi∥∞ ≤ 1 for i = 1, . . . , n. For the loss function, following (Boucheron et al., 2005), we take ℓ(fβ(X), Y) = ϕ(−fβ(X)Y), where ϕ : R → R+. Common examples are ϕ(x) = log(1 + e^x) for logistic regression and ϕ(x) = max(0, x) for the support vector machine.

Assumption 1. ϕ(·) is uniformly upper bounded by B.
Assumption 2. ϕ(·) is Lipschitz with constant L, i.e., ∥ϕ(x) − ϕ(y)∥ ≤ L∥x − y∥.
Assumption 3. t(Z) = gW(Z)ϕ(Z) is Lipschitz with constant L1, i.e., ∥t(x) − t(y)∥ ≤ L1∥x − y∥.

Theorem 3 (Linear Case). Suppose that Assumptions 1, 2, and 3 hold. The following hold with probability at least 1 − 2δ:

1. Gap on the normal task:
$$R(\tilde\beta) - R(\hat\beta) \le \lambda B + 4LW_1\sqrt{\tfrac{\log d}{n}} + 2B\sqrt{\tfrac{\log(2/\delta)}{n}},$$

2. Gap on the backdoor task:
$$\tilde{R}(\tilde{W},\tilde\beta) - \tilde{R}(W,\bar\beta) \le \tfrac{Bm}{n\lambda} + 4L_1W_1\sqrt{\tfrac{\log d}{n}} + 2B\sqrt{\tfrac{\log(2/\delta)}{n}} + \tfrac{2m\tau}{\lambda}.$$
Corollary 1. Under the same assumptions as Theorem 3, by setting λ = Θ(1/log n), m = Θ(√n), and τ = Θ(1/n), the two gaps converge to zero with high probability as n → ∞.
Proof of Theorem 3 and Corollary 1. We will be using the following two lemmas.
Lemma 3 (Boucheron et al., 2005). For any β ∈ R^d, we have with probability at least 1 − δ,
$$|R_n(\beta) - R(\beta)| \le 2LW_1\sqrt{\tfrac{\log d}{n}} + B\sqrt{\tfrac{\log(1/\delta)}{2n}}.$$

Lemma 4. For any fixed W ∈ R^p, we have with probability at least 1 − δ,
$$|\tilde{R}_n(W,\beta) - \tilde{R}(W,\beta)| \le 2L_1W_1\sqrt{\tfrac{2\log d}{n}} + B\sqrt{\tfrac{\log(1/\delta)}{2n}}$$
for any β ∈ R^d.
The proof of Lemma 4 is attached at the end, after the proof of the main theorem.
The proof of Theorem 3 directly follows from Theorem 1 by using explicit forms of the Rademacher Complexity in Lemma 3 and Lemma 4.
Regarding the proof of Corollary 1, it is straightforward to check that every term in the two gaps of Theorem 3 vanishes as n → ∞ with the specified hyperparameters.
A.3 PROOF OF LEMMA 4
Proof. We will use the following lemma.
Lemma 5 (Boucheron et al., 2005). Let F be the class of linear predictors with the ℓ1-norm of the weights bounded by W1, and assume that ∥x∥∞ ≤ X∞ with probability one. Then
$$\mathrm{Rad}_n(\mathcal{F}) \le X_\infty W_1\sqrt{\tfrac{2\log d}{n}},$$
where d is the dimension of the data and n is the sample size.
Back to the main proof, recall that by definition,
$$\tilde{R}_n(W,\beta) = \frac{1}{n}\sum_{j=1}^{n}\mathbf{1}\{y_j \neq 1\}\, g_W(x_j)\,\phi(\tilde{y} f_\beta(x_j)).$$
Since ϕ is uniformly upper bounded by B and gW is upper bounded by one, by Lemma 2,
$$|\tilde{R}_n(W,\beta) - \tilde{R}(W,\beta)| \le 2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}$$
holds with probability at least 1 − δ, where $\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) = \mathbb{E}_\sigma\big[\sup_{\beta\in\mathbb{R}^d}\frac{1}{n}\sum_{j=1}^{n}\sigma_j\mathbf{1}\{y_j\neq 1\}\, g_W(x_j)\,\phi(f_\beta(x_j))\big]$ with $P(\sigma_i = 1) = P(\sigma_i = -1) = 0.5$ for i = 1, . . . , n. Without loss of generality, we assume that there are s samples with ground-truth label 1, with indices n − s + 1, . . . , n. Thus,
$$\begin{aligned}
2\,\mathrm{Rad}_n(\mathcal{G}_{W,\beta}) &= 2\,\mathbb{E}_\sigma\Big[\sup_{\beta\in\mathbb{R}^d}\frac{1}{n}\sum_{j=1}^{n-s}\sigma_j g_W(x_j)\phi(f_\beta(x_j))\Big] \quad (13) \\
&\le \frac{2L_1}{n}\,\mathbb{E}_\sigma\Big[\sup_{\beta\in\mathbb{R}^d}\sum_{j=1}^{n-s}\sigma_j f_\beta(x_j)\Big] \quad (14) \\
&\le 2L_1W_1\sqrt{\frac{2\log d}{n}}, \quad (15)
\end{aligned}$$
where Eq. (14) holds by the Lipschitz composition principle (Boucheron et al., 2005) and Eq. (15) follows from Lemma 5.
B ALGORITHMS
In this section, we present algorithms for implementing the proposed DLP. We adopt state-of-the-art techniques from the MTL literature (Sener & Koltun, 2018; Kaiser et al., 2017; Zhou et al., 2017). The pseudo-code of the MTL algorithm for solving our problem is presented below. The main idea of the algorithm is as follows. We first run gradient descent steps for each task in Lines 2-3. Then, to ensure that the overall objective value is decreased, we further update the shared parameter β in Lines 4-5. The rationale for choosing the particular α that ensures a decrease of the overall objective value can be found in (Sener & Koltun, 2018). Regarding the convergence analysis, under reasonable conditions, Algorithm 1 is shown to find Pareto-stationary points or local/global optimal points. We refer interested readers to the references (Sener & Koltun, 2018; Kaiser et al., 2017) for details.
Algorithm 1
Input: Initialization: model parameter β^1, selection parameter W^1; hyperparameters λ, τ; step sizes {η_t}_{t=1}^{T−1}
1: for t = 1, . . . , T − 1 do
2:   β^t = β^t − η_t ∇_β L1(β^t)            // L1(β) := (1/n) Σ_{i=1}^n ℓ(f_β(x_i), y_i)
3:   W^{t+1} = W^t − η_t ∇_W L2(W^t, β^t)    // L2(W, β) := (τ/n) P_m(W) + (λ/m) Σ_{j=1}^n 1{y_j ≠ ỹ} g_W(x_j) ℓ(f_β(x_j), ỹ)
4:   Obtain α_1^t and α_2^t with Algorithm 2
5:   β^{t+1} = β^t − η_t (α_1^t ∇_β L1(β^t) + α_2^t ∇_β L2(W^{t+1}, β^t))
6: end for
Output: β^T, W^T
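A PyTorch-style sketch of this loop is given below. The classifier f_beta, the scoring network g_w, the optimizers, the batch handling, and the step size are placeholders (not the paper's exact training code), and the weight solver is stubbed out here; see the Algorithm 2 sketch further below for a concrete two-task version.

```python
# Illustrative PyTorch sketch of Algorithm 1; all names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def L1(f_beta, x, y):
    # normal empirical risk: (1/n) sum_i loss(f_beta(x_i), y_i)
    return F.cross_entropy(f_beta(x), y)

def L2(f_beta, g_w, x, y, y_tilde, lam, tau, m):
    # weighted backdoor risk plus the P_m(W) constraint penalty from Eq. (3)
    g = torch.sigmoid(g_w(x)).squeeze(-1)
    mask = (y != y_tilde).float()
    per_sample = F.cross_entropy(f_beta(x), torch.full_like(y, y_tilde), reduction="none")
    backdoor = (lam / m) * (mask * g * per_sample).sum()
    pm = ((mask * g).sum() - m) ** 2 + (g * (1.0 - g)).sum()
    return backdoor + (tau / len(y)) * pm

def weight_solver(grads_1, grads_2):
    # placeholder: equal task weights; see the Algorithm 2 sketch below
    return 0.5, 0.5

def dlp_step(f_beta, g_w, opt_beta, opt_w, batch, y_tilde, lam, tau, m, eta=1e-3):
    x, y = batch
    # Line 2: task-specific update of beta on the normal loss
    opt_beta.zero_grad(); L1(f_beta, x, y).backward(); opt_beta.step()
    # Line 3: update of the selection parameters W on the backdoor loss
    opt_w.zero_grad(); L2(f_beta, g_w, x, y, y_tilde, lam, tau, m).backward(); opt_w.step()
    # Lines 4-5: shared update of beta with task weights from the weight solver
    params = list(f_beta.parameters())
    g1 = torch.autograd.grad(L1(f_beta, x, y), params)
    g2 = torch.autograd.grad(L2(f_beta, g_w, x, y, y_tilde, lam, tau, m), params)
    a1, a2 = weight_solver(g1, g2)
    with torch.no_grad():
        for p, d1, d2 in zip(params, g1, g2):
            p -= eta * (a1 * d1 + a2 * d2)
```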
C COMPLEMENTARY EXPERIMENTAL RESULTS
In this section, we provide detailed experimental results that were omitted from the main text due to the page limit.
Algorithm 2 Weight Solver
Input: Initialization: α = (α_1, α_2) = (1/2, 1/2), W, and β
1: Compute M s.t. M_{1,2} = (∇_β L1(β))^⊤ (∇_β L2(W, β)), M_{2,1} = (∇_β L2(W, β))^⊤ (∇_β L1(β))
2: Compute t̂ = argmin_r Σ_t α_t M_{r,t}
3: Compute γ̂ = argmin_γ ((1 − γ)α + γ e_{t̂})^⊤ M ((1 − γ)α + γ e_{t̂})
4: α* = (1 − γ̂)α + γ̂ e_{t̂}
Output: α*
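For the two-task case considered here, the line search in Algorithm 2 admits the standard closed-form min-norm solution used in MGDA (Sener & Koltun, 2018); a sketch is below, with the flattening helper and function names being ours.

```python
# Sketch of the two-task weight solver: the min-norm element of the convex hull of the
# two gradients has a closed form (Sener & Koltun, 2018); helper names are assumptions.
import torch

def _flatten(grads):
    return torch.cat([g.reshape(-1) for g in grads])

def weight_solver(grads_1, grads_2):
    v1, v2 = _flatten(grads_1), _flatten(grads_2)
    # gamma minimizes || gamma * v1 + (1 - gamma) * v2 ||^2 over gamma in [0, 1]
    diff = v1 - v2
    gamma = ((v2 - v1).dot(v2) / diff.dot(diff).clamp_min(1e-12)).clamp(0.0, 1.0)
    return float(gamma), float(1.0 - gamma)  # (alpha_1, alpha_2)

# Tiny check: orthogonal unit gradients should receive equal weights.
print(weight_solver([torch.tensor([1.0, 0.0])], [torch.tensor([0.0, 1.0])]))  # (0.5, 0.5)
```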
C.1 SELECTED SAMPLES FOR CIFAR10
For task II, we show the categories of the top 1% selected backdoor training samples in Fig. 4a, and several images of the top-3-category selected backdoor training samples associated with the backdoor labels ‘Cat’, ‘Deer’, and ‘Truck’ in Fig. 4b. Similar to task I in the main text, the top-3-category backdoor samples are semantically consistent with their target labels. For example, the images of ‘Deer’ and ‘Dog’ resemble the images of ‘Horse’ more than any other category in the CIFAR10 dataset. Such observations provide concrete evidence to support the conjecture that backdoor images should be similar to their backdoor label category (Bagdasaryan et al., 2020). The similarity is measured in terms of semantic meanings in our case.
C.2 TEST PERFORMANCES ON GTSRB
In this section, we test the proposed method on the GTSRB dataset. There are in total 43 types of traffic signs (labels) in GTSRB, and many of them are of the same type, e.g., “20 speed” and “30 speed”. For the sake of clarity, we only select one of each type for testing.
C.3 COMPARISON WITH STATE-OF-THE-ART ON CIFAR10
For semantic backdoor attacks, following the framework in (Bagdasaryan et al., 2020), we relabel a whole category of the training data with the same semantic meaning. For example, we can relabel images of ‘Top’ (with ground-truth label 1) with label 0. For edge-case attacks, we follow the ideas in (Wang et al., 2020) and first create a new training dataset by excluding the samples of one whole category. Then, we relabel the samples of the previously excluded sub-category and add them to the previous training data to form a new training dataset.
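For concreteness, a schematic of how the two baseline preparations can be implemented is shown below; the dataset handling, label ids, and the source of edge-case images are illustrative assumptions, not the exact pipeline used in our experiments.

```python
# Schematic of the two baseline preparations described above; all specifics are assumptions.
import numpy as np

def semantic_backdoor(y, source_label, target_label):
    """Relabel the whole semantic category `source_label` as `target_label`."""
    y = y.copy()
    y[y == source_label] = target_label
    return y

def edge_case_backdoor(X, y, X_edge, target_label):
    """Inject edge-case images relabeled as `target_label` into the training set
    (e.g., Southwest airplane images labeled as Truck, following Wang et al., 2020)."""
    y_edge = np.full(len(X_edge), target_label, dtype=y.dtype)
    return np.concatenate([X, X_edge]), np.concatenate([y, y_edge])
```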
We follow the same setup for Task II in the main text. The two SOTA methods to be compared are listed below. For semantic attacks, the work of (Bagdasaryan et al., 2020) relabeled a whole category of the training data. For edge-case attacks, the authors in (Wang et al., 2020) first relabeled the images of Southwest Airplanes (NOT in CIFAR10) as Truck and then injected the relabeled sample-label pairs into the training data. For completeness, we also test for different backdoor labels as summarized in Table 6. The proposed DLP consistently outperforms the SOTA clean-sample attack for all backdoor target labels. Also, the proposed DLP is comparable with the more powerful edge-case attack in some cases, e.g., a backdoor label of ‘Dog’.
C.4 EXPERIMENTAL RESULTS UNDER DEFENSE MECHANISMS
In this section, we test the proposed method under certain defenses. As mentioned in the main text, because of the weak threat model of (centralized) clean-sample backdoor attacks, many existing defenses against perturbation-based attacks are inappropriate for defending against our attacks.
But defenses against label flipping attacks, e.g., label sanitization (Paudice et al., 2018) and label certification (Rosenfeld et al., 2020), can be suitably tailored to defend against our method. The results are summarized in Fig. 6 and Fig. 7, respectively. It can be concluded from the figures that the proposed method escapes both defenses. | 1. What is the focus of the paper regarding backdoor attacks?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the formulation of the scoring function and its dependence on the training data?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper focuses on the clean-sample backdoor attack, which does not modify images but only flips image labels. One key step of a clean-sample attack is how to select samples/features for label poisoning. This paper proposes a scoring mechanism to select proper samples, which allows effectively selecting samples for an arbitrary backdoor sample size.
Strengths And Weaknesses
positives:
the clean-sample backdoor attack is an interesting and fairly stealthy type of backdoor attack; thus it needs to be explored in advance.
theoretical analysis is given to justify that the proposed algorithm could lead to a proper clean-sample attack.
negatives:
the proposed method is straightforward, merely introducing a term to learn the scoring function, and the formulation of the scoring function g_w is little discussed. It could be a neural network or even a linear function. More importantly, the proposed method depends heavily on the training data, since it only selects samples from it. As a result, it is hard to control the final performance if the training dataset is not easy to backdoor. I suggest introducing some new samples into the dataset, which could efficiently improve the performance of the attack.
Clarity, Quality, Novelty And Reproducibility
It is well written and easy to follow. My main concern is the lack of novelty, and the proposed framework has a strong limitation, as mentioned above (i.e., its performance heavily depends on the training dataset).
ICLR | Title
DLP: Data-Driven Label-Poisoning Backdoor Attack
We evaluate the proposed method on synthetic data. For the training data, we generate 500 i.i.d samples from the normal distribution N ([0, 3], [[2, 0], [0, 2]]) with label 0 and 500 i.i.d samples from the normal distribution N ([0,−3], [[2, 0], [0, 2]]) with label 1 respectively. For the test data, we generate 2500 i.i.d samples for each label class. The classifier fβ is linear, and the loss function ℓ is the logistic loss. Also, the backdoor target ỹ is 1, and the number of backdoor sample m = 50.
Test Performance We summarize test performances under different λ in Table. 2. By setting λ to 0, the objective is to minimize the normal empirical risk only, and hence the resulted classifier is the normal classifier. Similarly, by setting λ to be sufficiently large, e.g., ∞, the resulted classifier is the optimal backdoored classifier. For a wide range of λ, the proposed DLP delivers comparably high accuracy on both normal
and backdoor data compared with the normal and backdoored classifier, respectively.
Selected backdoor sample We plot the normal training data (in orange dots and blue triangles), the selected training backdoor data (in green triangles), the normal classifier (in solid red line), and the backdoor classifier (in gray dash-dot line) with λ = 1 in Fig. 2. The selected backdoor points stay close to the normal classifier, and the DLP classifier moves down compared to the normal classifier. Such an observation can be interpreted in the following way. By relabelling those points near the decision boundary, the attacker slightly shifts the normal classifier to account for those backdoored samples without severely degrading the normal test accuracy. This phenomenon has also been independently discovered and investigated in the early work of poisoning support vector machines (Biggio et al., 2012) and the active learning literature (Settles, 2009), where the goal is to find the most effective and efficient samples to train a classifier.
5.2 REAL-WORLD DATASETS
Tasks We consider the following tasks: (I) 10-class classification problem on the Fashion-MNIST (Xiao et al., 2017) with a LeNet (LeCun et al., 2015), (II) 10-class classification problem on the CIFAR10 (Krizhevsky et al., 2009) data with a Resnet18 (He et al., 2016), and (III) classification problem on GTSRB (Stallkamp et al., 2011). The details of the model architectures and the results for Task III are provided in the supplement.
Backdoor Setup For task I (II), we set the number of backdoor samples m to be 600 (500), and 6000 (5000), which accounts for 1% and 10% of the total training data respectively. For both tasks, the selection mechanism is a two-layer neural network. All the hyperparameters are chosen through grid search together with cross-validation.
Single Task Performance For task I, the AccN of the normal classifier is 92.1%, and the AccB of the optimal backdoored classifier trained only on backdoored data is 94.2%. For task II, the AccN of the normal classifier is 92.1%, and the AccB of the optimal backdoored classifier 93.0%.
5.2.1 TEST PERFORMANCE
For every backdoor target label, we summarize the test performances of DLP in Table. 3 for Task I with m = 600, and for Task II with m = 500. We include the results for other choices of m in the next subsection. Both the normal and backdoor test accuracy of our DLP are comparably high compared to that of the single-task performance, which reflects the effectiveness of the proposed DLP. In task I (Fashion-MNIST), for backdoor target label ‘Sandal’, ‘Sneaker,’ and ‘Ankle Boot’ (referred as ‘Boot’), both the AccN and BccN exceed the test performances of other backdoor labels by a great margin. This observation implies that labels of the shoe category are the most effective for backdoor. Such a result may be because images of ‘Sandal’, ‘Sneaker,’ and ‘Boot’ are semantically similar.
5.2.2 SELECTED BACKDOOR SAMPLES
For task I, we demonstrate the categories of the top 1% selected backdoor training samples in Fig. 3a, and several images of the top-3-category selected training backdoor samples associated with backdoor label ‘Sneaker’ , ‘Trouser’ and ‘Shirt’ in Fig. 3b.
Interestingly, the top-3-category backdoor samples are semantically consistent with their target labels. For example, the images of ‘Sandal’ and ‘Boot’ resemble the images of Sneakers most than any other categories in the Fashion-MNIST dataset. Additionally, images from ‘Dress’ and ‘T-Shirt’ look like those of ‘Shirt’. Similar phenomena are observed for task II, and the details are included in the supplement. Such observations provide concrete evidence to support the conjecture that backdoor images should be similar to their backdoor label category (Bagdasaryan et al., 2020). The similarity is measured in terms of semantic meanings in our case. Based on the discussion, we suggest that, for clean-sample attacks, one should look select ‘similar’ images for backdoors.
5.2.3 EFFECTIVENESS UNDER VARYING BACKDOOR SIZES
We demonstrate the merit of our proposed method on effectively selecting any number of backdoor sample in this section. In particular, we test on several different ratios of the backdoor sample size and summarize the results in Table 4. From the threat model in Section 3.1, the total sample size remains the same after poisoning. Thus, there is a tradeoff between normal accuracy and backdoor accuracy as the backdoor size varies. For example, if the attacker flips 99% of the total training data, then the backdoored classifier will delivery trivial test accuracy on clean test data. Regarding to the empirical results, we observed that the proposed method deliveries both high benign and backdoor accuracy under a reasonable range choices of m, e.g., from 1% to 25%.
5.2.4 COMPARISON WITH STATE-OF-THE-ART METHODS & DEFENSES
We first compare the proposed DLP with two state-of-the-art methods, and then test the performances of DLP under certain defenses. As mentioned in Section 3, the threat model of our/clean-sample attack is more weaker than the perturbation-based attacks. As a result, the most suitable method for comparison is the SOTA clean-sample attack (“Semantic attack”). For the sake of completeness, we also compare our method with the more powerful PBA (“edge-case attack”). Empirically, we find that the DLP outperforms the SOTA clean-sample attack and is comparable with the more powerful edge-case attack in some cases.
Task I Due to the low-resolution property of the Fashion-MNIST dataset, one may not obtain finegrained categories of images other than the original ten classes. Hence, we will compare our proposed method with two SOTA methods on a binary classification task. The new task is to predict if a fashion object is a piece of clothing with label 0 (including ‘Top’, ‘Trouser’, ‘Pullover’, ‘Dress’, ‘Coat’, and ‘Shirt’) or an accessory with label 1 (including ‘Sandal’, ‘Sneaker’, ‘Bag’, ‘ Boot’). We selected a LetNet classifier whose normal accuracy is 97.2%.
We include the backdoor data preparation process for both “edge-case attack” and “semantic attack” in the appendix. The low-resolution property of the Fashion-MNIST dataset leads the above two SOTA methods to coincide with each other. In the following, we set the backdoor target label to be 0. Then, we tested all the possible categories for backdoor samples and summarized the result in Table. 5. The AccN and AccB of the proposed DLP are 94.1% and 94.7%, respectively. We observe that, for a given AccN of 93%, the proposed DLP obtains the highest AccB against the SOTA methods. Alternatively, for a given AccB of 92%, the AccN of the proposed DLP is slightly lower than the SOTA methods (94.5%). The result is because edge-case attacks modify the training data, which should be more potent than our proposed methods. We follow the same setup above to conduct experiments on Task II and Task III. We observe similar results as in the Fashion-MNIST, and include the details in the supplement.
Defenses Because of the weak threat model of our method, many existing defenses against perturbation-based attacks are inappropriate for defending against our attacks. But defenses against label flipping attacks, e.g., label sanitization (Paudice et al., 2018) can be suitably tailored to defend against our method. We found that those defenses are ineffective against our methods. Details are included in the supplementary material.
6 CONCLUSION
In this work, we proposed a new type of clean-sample backdoor attack known as DLP. The proposed DLP allows the attacker to choose any number of backdoor samples effectively in terms of test performance. The key ingredient of developing the DLP is a multi-task learning formulation, enabling the attacker to perform well on both normal and backdoor tasks. There are several interesting future problems. One direction is to characterize the optimal backdoor sample theoretically. Finally, on the defense side, it is necessary to create a new defense mechanism to defend the DLP. The supplementary material contains proofs and more experimental studies.
Appendix for DLP: Data-Driven Label-Poisoning Backdoor Attack
We include the proof of Theorem 1 and the formal statement of Theorem 2 (and its proof) in Section A. The pseudo-code of algorithms for implementing DLP and the convergence analysis are presented in Section B. Complementary results of the experimental study are included in Section C. Finally, We also present additional experimental results, including investigations on the effect of different sizes of the backdoor sample and results on more complex datasets.
A PROOF
A.1 PROOF OF THEOREM 1
We will rely on the following two lemmas. Lemma 1. (Boucheron et al., 2005) For any β ∈ Rd, we have with probability at least 1− δ,
|Rn(β)−R(β)| ≤ 2Radn(Hβ) +B √ log 1δ 2n .
Lemma 2. (Boucheron et al., 2005) For any fixed W ∈ Rp, we have with probability at least 1− δ,
|R̃n(W,β)− R̃(W,β)| ≤ 2Radn(GW,β) +B √ log 1δ 2n
for any β ∈ Rd.
Back to our main results, we first bound the gap on the normal task.
Invoking Lemma 1, with probability at least 1− δ, we have
R(β̃)−R(β̂) ≤ Rn(β̃)−R(β̂) + 2Radn(Hβ) +B √ log 1δ 2n ,
= Rn(β̃) + nλ
m R̃n(W̃ , β̃) +
τ n Pm(W̃ )− nλ m R̃n(W̃ , β̃)
− τ n Pm(W̃ )−R(β̂) + 2Radn(Hβ) +B √ log 1δ 2n ,
≤ Rn(β̂) + nλ
m R̃n(W, β̂) +
τ n Pm(W )− nλ m R̃n(W̃ , β̃)
− τ n Pm(W̃ )−R(β̂) + 2Radn(Hβ) +B √ log 1δ 2n ,
(4)
≤ Rn(β̂)−R(β̂) + nλ
m R̃n(W, β̂) + 2Radn(Hβ) +B √ log 1δ 2n , (5)
where Eq. (4) holds by definitions of (W̃ , β̃) (minimizer) and Eq. (5) holds by definitions of W .
Invoking Lemma 1 on Rn(β̂)−R(β̂) in Eq. (5) with a union bound, with probability greather than 1− 2δ, we have
R(β̃)−R(β̂) ≤ nλ m R̃n(W, β̂) + 4Radn(Hβ) + 2 √ log 1δ 2n . (6)
From the definition of W , there are only m non-negative terms in R̃n(W, β̂) and hence
nλ m R̃n(W, β̂) = nλ m 1 n n∑ i=1 1{yi ̸= ỹ}gW (xi)ℓ(fβ̂(xi), yi) ≤ nλ m m n B = λB, (7)
where B is the uniform upper-bound on the loss function.
Combining Eq. (7) and Eq. (6), the following holds with probability at least 1− 2δ,
R(β̃)−R(β̂) ≤ λB + 4Radn(Hβ) + 2B √ log 2δ n .
Regarding the gap on the backdoor task, we first rewrite
R̃(W̃ , β̃)− R̃(W, β̄) = R̃(W̃ , β̃) + m nλ Rn(β̃) + mτ n2λ Pm(W̃ )
− mτ n2λ Pm(W̃ )− m nλ Rn(β̃)− R̃(W, β̄).
(8)
Invoking Lemma 2, with probability at least 1− δ, the followings hold
R̃(W̃ , β̃)− R̃(W, β̄) ≤ R̃n(W̃ , β̃) + m
nλ Rn(β̃) +
mτ n2λ Pm(W̃ )− mτ n2λ Pm(W̃ )− m nλ Rn(β̃)
− R̃(W, β̄) + 2Radn(GW,β) +B √ log 1δ 2n ,
≤ R̃n(W,β) + m
nλ Rn(β) + |
mτ n2λ Pm(W )− mτ n2λ Pm(W̃ )|
− m nλ Rn(β̃)− R̃(W, β̄) + 2Radn(GW,β) +B √ log 1δ 2n ,
(9)
≤ R̃n(W,β) + m
nλ Rn(β) +
2mτ
λ
− R̃(W, β̄) + 2Radn(GW,β) +B √ log 1δ 2n ,
(10)
where Eq. (9) holds by the definition of (W̃ , β̃) (minimizer) and Eq. (10) is because Pm(·) is upper bounded by 2n2.
Invoking Lemma 2 on R̃n(W,β) − R̃(W, β̄) in Eq. (10) with a union bound, with probability greather than 1− 2δ, we obtain
R̃(W̃ , β̃)− R̃(W, β̄) ≤ 2mτ λ + m nλ Rn(β) + 4Radn(GW,β) + 2B √ log 1δ 2n . (11)
For the term Rn(β), we have
m nλ Rn(β) = m nλ 1 n n∑ i=1 ℓ(fβ(xi), yi) ≤ m nλ Bn n = Bm nλ , (12)
where B is the uniform upper-bound on the loss function.
Combining Eq. (11) and Eq. (12), the following holds with probability at least 1− 2δ,
R̃(W̃ , β̃)− R̃(W, β̄) ≤ Bm nλ + 4Radn(GW,β) + 2B √ log 2δ n + 2mτ λ .
A.2 FORMAL STATEMENTS OF THEOREM 2
We consider linear classifiers of the form fβ(x) = β⊤x for β ∈ Rd, with ∥β∥1 ≤ W1. We assume ∥xi∥∞ ≤ 1 for i = 1, . . . , n. For the loss function, following (Boucheron et al., 2005), we take ℓ(fβ(X), Y ) = ϕ(−fβ(X)Y ) where ϕ : R → R+. Some common examples are ϕ(x) = log(1+ex) for logistic regression and ϕ(x) = max(0, x) for support vector machine.
Assumption 1. ϕ(·) is uniformly upper bounded by B. Assumption 2. ϕ(·) is Lipschitz with constant L i.e., ∥ϕ(x)− ϕ(y)∥ ≤ L∥x− y∥. Assumption 3. t(Z) = gW (Z)ϕ(Z) is Lipschitz with constant L1 i.e., ∥t(x)− t(y)∥ ≤ L1∥x− y∥. Theorem 3. (Linear Case) Suppose that the assumption 1, 2, and 3 hold. The followings hold with probability at least 1− 2δ:
1. Gap on the normal task:
R(β̃)−R(β̂) ≤ λB + 4LW1
√ log d
n + 2B √ log 2δ n ,
2. Gap on the backdoor task:
R̃(W̃ , β̃)− R̃(W, β̄) ≤ Bm nλ + 4L1W1
√ log d
n + 2B √ log 2δ n + 2mτ λ .
Corollary 1. Under the same assumptions of Theorem 3, by setting λ = Θ(1/ log n), m = Θ( √ n) and τ = Θ(1/n), two gaps will converge to zero with high probability as n → ∞.
Proof of Theorem 3 and Corollary 1. We will be using the following two lemmas.
Lemma 3. (Boucheron et al., 2005) For any β ∈ Rd, we have with probability at least 1− δ,
|Rn(β)−R(β)| ≤ 2LW1
√ log d
n +B √ log 1δ 2n .
Lemma 4. For any fixed W ∈ Rp, we have with probability at least (1− δ),
|R̃n(W,β)− R̃(W,β)| ≤ 2L1W1
√ 2 log d
n +B √ log 1δ 2n
for any β ∈ Rd.
The proof of Lemma 4 is attached at the end of proof of the main theorem.
The proof of Theorem 3 directly follows from Theorem 1 by using explicit forms of the Rademacher Complexity in Lemma 3 and Lemma 4.
Regrading the proof of Corollary 1, it is straightforward to check that every term in the two gaps of Theorem 3 will vanish as n → ∞ with specified hyperparameters.
A.3 PROOF OF LEMMA 4
Proof. We will use the following lemma.
Lemma 5 ((Boucheron et al., 2005)). Let F be the class of linear predictors, with the ℓ1-norm of the weights bounded by W1. Also assume that with probability one that ∥x∥∞ ≤ X∞. Then
Radn(F) ≤ X∞W1
√ 2 log d
n ,
where d is the dimension of data and n is the sample size.
Back to the main proof, recall by definition,
R̃n(W,β) = 1
n n∑ j=1 1{yj ̸= 1}gW (xj)ϕ(ỹfβ(xj)).
Since ϕ is uniformly upper bounder by B and gW is upper bounded by one, then by Lemma 2,
|R̃n(W,β)− R̃(W,β)| ≤ 2Radn(GW,β) +B √ log 1δ 2n ,
holds with probability at least 1 − δ where Radn(GW,β) = Eσ[supβ∈Rd 1n ∑n
j=1 σj1{yj ̸= 1}gW (xj)ϕ(fβ(xj))] with P (σi = 1) = P (σi = −1) = 0.5 for i = 1, . . . , n. Without loss of generality, we assume that that there are s samples with ground-truth label 1 with index n− s+ 1, . . . , n. Thus,
2Radn(GW,β) = 2Eσ[ sup β∈Rd
1
n n−s∑ j=1 σjgW (xj)ϕ(fβ(xj))] (13)
≤ 2L1 n Eσ[ sup β∈Rd n−s∑ j=1 σjfβ(xj)], (14)
≤ 2L1W1
√ 2 log d
n , (15)
where Eq. (14) holds by Lipschitz Composition Principle (Boucheron et al., 2005) and Eq. (15) is by Lemma 5.
B ALGORITHMS
In this section, we present algorithms for implementing the proposed DLP. We adopt the state-of-theart techniques in the MTL literature (Sener & Koltun, 2018; Kaiser et al., 2017; Zhou et al., 2017). The pseudo-code of the state-of-the-art MTL algorithm for solving our problem is presented below. The main idea of the algorithm is as follows. We first run gradient descent algorithms on each task in Lines 2-3. Then, to ensure that the overall objective value is decreased, we further update the shared parameter β in Lines 4-5. The rationale for obtaining particular α to ensure the decrease of the overall object value can be found in (Sener & Koltun, 2018). Regarding the convergence analysis, under reasonable conditions, the Procedure 1 is shown to find Pareto stationary points or local/global optimal points. We refer the interested readers to the references (Sener & Koltun, 2018; Kaiser et al., 2017) for details.
Algorithm 1 Input: Initialization: Model Parameter β1, Selection Parameter W 1, Hyperparameters λ, τ and stepsizes
{ηi}T−1i=1
1: for t = 1, . . . , T − 1 do 2: βt = βt − ηt∇βL1(βt) // L1(β) := 1/n ∑n i=1 ℓ(fβ(xi), yi)
3: W t+1 = W t − ηt∇WL2(W t, βt) // L2(W,β) := τ/nPm(W ) + λ/m ∑n
j=1 1{yj ̸= ỹ}gW (xj)ℓ(fβ(xj), ỹ)
4: Obtain αt1 and αt2 with Procedure 2 5: βt+1 = βt − η(αt1∇βL1(βt) + αt2∇βtL2(W,β)) 6: end for
Output: βT , WT
C COMPLEMENTARY EXPERIMENTAL RESULTS
We provide detailed experimental results that are omitted in the main text due to the page limit in this section.
Algorithm 2 Weight Solver
Input: initialization α = (α_1, α_2) = (1/2, 1/2), W and β
1: Compute M s.t. M_{1,2} = (∇_β L1(β))^⊤ (∇_β L2(W, β)), M_{2,1} = (∇_β L2(W, β))^⊤ (∇_β L1(β))
2: Compute t̂ = argmin_r Σ_t α^t M_{rt}
3: Compute γ̂ = argmin_γ ((1 − γ)α + γ e_{t̂})^⊤ M ((1 − γ)α + γ e_{t̂})
4: α* = (1 − γ̂)α + γ̂ e_{t̂}
Output: α*
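A small NumPy sketch of the weight solver is given below, assuming the two task gradients g1 = ∇_β L1 and g2 = ∇_β L2 are available as flat vectors. The one-dimensional search over γ is done on a grid for simplicity (a closed form also exists), the full Gram matrix of the two gradients is used, and all names are illustrative.

```python
# Sketch of the two-task weight solver; a grid line search replaces the exact argmin over gamma.
import numpy as np

def weight_solver(g1, g2, grid=1001):
    G = np.stack([g1, g2])                      # rows are the two task gradients
    M = G @ G.T                                 # M[r, t] = <g_r, g_t>
    alpha = np.array([0.5, 0.5])                # initial alpha = (1/2, 1/2)
    t_hat = int(np.argmin(M @ alpha))           # step 2: task index minimising sum_t alpha_t M[r, t]
    e = np.eye(2)[t_hat]
    gammas = np.linspace(0.0, 1.0, grid)
    cand = (1 - gammas)[:, None] * alpha + gammas[:, None] * e
    vals = np.einsum("ij,jk,ik->i", cand, M, cand)   # step 3: quadratic along the segment
    return cand[int(np.argmin(vals))]           # step 4: alpha*

alpha_star = weight_solver(np.array([1.0, 0.0]), np.array([0.0, 2.0]))  # approx (0.8, 0.2)
```

In the example call, two conflicting gradients of different magnitudes receive the expected min-norm weights of roughly (0.8, 0.2).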
C.1 SELECTED SAMPLES FOR CIFAR10
For task II, we demonstrate the categories of the top 1% selected backdoor training samples in Fig. 4a, and several images of the top-3-category selected backdoor training samples associated with the backdoor labels ‘Cat’, ‘Deer’ and ‘Truck’ in Fig. 4b. Similar to task I in the main text, the top-3-category backdoor samples are semantically consistent with their target labels. For example, the images of ‘Deer’ and ‘Dog’ resemble the images of ‘Horse’ more than those of any other category in the CIFAR10 dataset. Such observations provide concrete evidence to support the conjecture that backdoor images should be similar to their backdoor label category (Bagdasaryan et al., 2020). The similarity is measured in terms of semantic meanings in our case.
C.2 TEST PERFORMANCES ON GTSRB
In this section, we test the proposed method on the GTSRB dataset. There are in total 43 types of traffic signs (labels) in GTSRB, and many of them are of the same type, e.g., “20 speed” and “30 speed”. For the sake of clarity, we only select one of each type for testing.
C.3 COMPARISON WITH STATE-OF-THE-ART ON CIFAR10
For semantic backdoor attacks, following the framework in (Bagdasaryan et al., 2020), we relabel a whole category of the training data with the same semantic meaning. For example, we can relabel images of Top (with ground-truth label 1) with label 0. For edge-case attacks, we follow the ideas in (Wang et al., 2020) to first create a new training dataset by excluding the samples of one whole category. Then, we relabel the samples of the previously excluded sub-category and add them back to the previous training data to form the new training set, as sketched below.
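As a hypothetical illustration of the edge-case preparation just described, the snippet below excludes one category, relabels it with the backdoor label, and adds it back; the class indices are arbitrary choices, not the ones used in the paper.

```python
# Sketch of edge-case style data preparation: exclude, relabel, and re-append one category.
import numpy as np

def make_edge_case_set(images, labels, excluded_class=1, backdoor_label=0):
    keep = labels != excluded_class
    edge = labels == excluded_class
    new_images = np.concatenate([images[keep], images[edge]])
    new_labels = np.concatenate([labels[keep], np.full(edge.sum(), backdoor_label)])
    return new_images, new_labels
```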
We follow the same setup for Task II in the main text. The two SOTA methods to be compared are listed below. For semantic attacks, the work of (Bagdasaryan et al., 2020) relabeled a whole category of the training data. For edge-case attacks, the authors in (Wang et al., 2020) first relabeled the images of Southwest Airplanes (NOT in CIFAR10) as Truck and then injected the relabeled sample-label pairs into the training data. For completeness, we also test for different backdoor labels as summarized in Table 6. The proposed DLP consistently outperforms the SOTA clean-sample attack for all backdoor target labels. Also, the proposed DLP is comparable with the more powerful edge-case attack in some cases, e.g., a backdoor label of ‘Dog’.
C.4 EXPERIMENTAL RESULTS UNDER DEFENSE MECHANISMS
In this section, we test the proposed method under certain defenses. As mentioned in the main text, because of the weak threat model of (centralized) clean-sample backdoor attacks, many existing defenses against perturbation-based attacks are inappropriate for defending against our attacks.
But defenses against label flipping attacks, e.g., label sanitization (Paudice et al., 2018) and label certification (Rosenfeld et al., 2020), can be suitably tailored to defend against our method. The results are summarized in Fig. 6 and Fig. 7, respectively. It can be concluded from the figures that the proposed method can escape the two defenses. | 1. What is the focus and contribution of the paper regarding backdoor attacks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its concealment and effectiveness?
3. Do you have any concerns or doubts regarding the assumptions made in the paper, such as attacker settings?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or issues that the reviewer has regarding the paper's claims, experiments, and comparisons with other works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The author proposes a new backdoor attack, DLP, which only needs to modify the training set labels to threaten the model. This backdoor attack uses a data-driven backdoor scoring mechanism, which can take effect at any backdoor sample size. Finally, the effectiveness of the method is demonstrated by theory and experiments.
Strengths And Weaknesses
Strength
Compared with the current mainstream backdoor attack methods, this method has lower requirements and stronger concealment. The theoretical derivation of the backdoor scoring mechanism is very detailed and the method is convincing.
Weaknesses
The dataset used in the experiment is too simple, and there is no experiment on the complex dataset. Compared with SOTA, the improvement is not obvious, and the experimental persuasion needs to be improved.
I doubt the assumption of the attacker settings, e.g., “it is difficult for attackers to access users’ input queries in the test phase”. Actually, the ML service is often public and the attacker could be the user too, such as the pre-trained models deployed on the cloud.
Another question is about the claim, “clean-sample backdoor attacks are more malicious than perturbation-based attacks since they do not modify features of input data.” From the empirical view, clean-sample backdoor attacks always incur more performance loss on the clean test set, while perturbation-based methods seldom hurt the performance of the model on other clean inputs. Maybe more evidence should be provided to support this claim.
Clarity, Quality, Novelty And Reproducibility
The method description is clear and the theoretical analysis is sufficient. In the experimental part, the validity analysis of the results is relatively redundant. The method comparison and defense result analysis can be elaborated more. |
ICLR | Title
DLP: Data-Driven Label-Poisoning Backdoor Attack
Abstract
Backdoor attacks, which aim to disrupt or paralyze classifiers on specific tasks, are becoming an emerging concern in several learning scenarios, e.g., Machine Learning as a Service (MLaaS). Various backdoor attacks have been introduced in the literature, including perturbation-based methods, which modify a subset of training data; and clean-sample methods, which relabel only a proportion of training samples. Indeed, clean-sample attacks can be particularly stealthy since they never require modifying the samples at the training and test stages. However, the state-of-the-art clean-sample attack of relabelling training data based on their semantic meanings could be ineffective and inefficient in test performances due to heuristic selections of semantic patterns. In this work, we introduce a new type of clean-sample backdoor attack, named the DLP backdoor attack, allowing attackers to backdoor effectively, as measured by test performances, for an arbitrary backdoor sample size. The critical component of DLP is a data-driven backdoor scoring mechanism embedded in a multi-task formulation, which enables attackers to simultaneously perform well on the normal learning tasks and the backdoor tasks. Systematic empirical evaluations show the superior performance of the proposed DLP to state-of-the-art clean-sample attacks.
1 INTRODUCTION
Backdoor attacks have been an emerging concern in several deep learning applications owing to their broad applicability and potentially dire consequences (Li et al., 2020). At a high level, a backdoor attack implants triggers into a learning model to achieve two goals simultaneously: (1) to lead the backdoored model to behave maliciously on attacker-specified tasks with an active backdoor trigger, e.g., a camouflage patch as demonstrated in Fig. 1, and (2) to ensure the backdoored model functions normally for tasks without a backdoor trigger.
One popular framework is the perturbation-based backdoor attack (PBA) (Gu et al., 2017; Chen et al., 2017; Turner et al., 2019; Zhao et al., 2020; Doan et al., 2021a;b). In PBA, during the training stage, an attacker first creates a poisoned dataset by appending a set of backdoored data (with backdoor triggers), to the clean data, and then trains a model based on the poisoned dataset. In the test stage, the attacker launches backdoor attacks by adding the same backdoor trigger to the clean test data.
The requirement of accessing and modifying data, including both features and labels, during the training and test stages in PBA could be unrealistic in several applications. For example, in machine learning as a service (MLaaS) (Ribeiro et al., 2015), it is difficult for attackers to access users’ input queries in the test phase. Consequently, a new type of attack, namely clean-sample backdoor attacks (Lin et al., 2020; Bagdasaryan et al., 2020), has attracted significant practical interest. In clean-sample backdoor attacks, the attacker changes labels instead of features in the training stage only, as illustrated in Fig. 1 and summarized in Table 1. The state-of-the-art (SOTA) clean-sample attack, known as the semantic backdoor attack (Bagdasaryan et al., 2020), first looks for images with a particular semantic meaning, then relabels all the training images with that semantic meaning, e.g., green cars in the CIFAR10 dataset, to attacker-specified labels. Finally, the attacker trains a classifier based on the modified data. In the inference stage, no further operations are needed by the attacker.
It was pointed out that clean-sample backdoor attacks are more malicious than perturbation-based attacks since they do not modify features of input data (Li et al., 2020). Nevertheless, the SOTA clean-sample method, namely the semantic backdoor attack, is possibly limited in terms of backdoor
selection for the following reasons. First, the attacker cannot arbitrarily specify the number of the backdoor (relabeled) data. Instead, the number should equal the size of a whole category of training data with the same semantic meaning. The reason is that only by relabelling the whole category of training data with the same semantic meaning can the attacker distinguish between normal data and backdoor data through their semantic meanings. Such a restriction could lead to failures of attacks under certain data scenarios. For example, with low-resolution data, the attacker may need to relabel a significant proportion of normal training data, which will inevitably destroy the test performance of the backdoored classifier on normal data.
Second, the criteria for selecting semantic patterns are heuristic and may vary drastically among different attackers. As a result, test performances will change significantly. To find the best backdoor criterion, the attacker would have to try every semantic combination by brute force. However, such an approach to finding the best semantic standard could cause practical issues, e.g., computational infeasibility. For instance, there are roughly 4 × 10^67 ways to select 20 categories out of a total of 20000 categories in ImageNet (Krizhevsky et al., 2012) for a semantic backdoor attack.
In light of the aforementioned issues, we propose a new type of clean-sample attack named the Data-Driven Label-Poisoning (DLP) backdoor attack. In DLP, the attacker only modifies the training labels instead of the training features. In contrast to heuristic selections of backdoor patterns in current semantic attacks, in DLP, the attacker selects backdoor samples via a scoring mechanism. Each training sample is assigned a value by the scoring mechanism to reflect its backdoor effectiveness, measured by the test performances on both the normal and backdoor tasks. Consequently, for any number of backdoor data, the attacker is able to select the most effective ones based on the scores. To the best of the authors’ knowledge, we are the first to mathematically apply the ideas of scoring mechanisms to formulate a backdoor problem and offer corresponding theoretical justifications.
Our contributions in this work are summarized below. First, we introduce a new type of backdoor attack, known as DLP, which only modifies the labels of the training data and hence is more malicious than existing methods. In addition, the proposed DLP backdoor attack enables attackers to select backdoor samples effectively at any level of the backdoor budget (namely, the number of backdoor samples). Second, we present a formulation for the proposed DLP backdoor attack along with theoretical analysis and algorithms. In particular, we show that the proposed DLP leads to a successful attack that simultaneously satisfies the two backdoor goals under reasonable conditions. Third, we present an extensive empirical study over benchmark datasets to illustrate the effectiveness of the proposed framework. We find that experimental results align with existing conjectures and provide insights on efficiently designing backdoor attacks.
The rest of the paper is organized as follows. First, an overview of the related literature is given in Section 2. Then, we formally introduce the DLP in Section 3. Next, we present an implementation of
the proposed DLP and corresponding theoretical analysis in Section 4. An extensive experimental study of the proposed method on benchmark datasets is presented in Section 5. Finally, we conclude the paper in Section 6.
2 RELATED WORK
Data poisoning/Label flipping attacks Data poisoning attacks, including manipulating training features (Biggio et al., 2012; Koh & Liang, 2017; Jagielski et al., 2018; 2021; Weber et al., 2020) and flipping labels (Paudice et al., 2018), aim to achieve attackers’ adversarial goals. Classical data poisoning attacks are untargeted; they aim to deteriorate the overall prediction performance of the classifier (Gao et al., 2020). For example, early poisoning attacks contaminated training data to decrease the overall test accuracy of the trained support vector machines (Biggio et al., 2012).
Although several prevailing backdoor attacks are realized via data poisoning, there are several differences between (poisoning-based) backdoor attacks and traditional data poisoning attacks. Backdoor attacks are targeted. A backdoored model only misbehaves maliciously on specified tasks in the presence of backdoor triggers while retaining the overall test accuracy of its primary tasks. In addition, a typical data poisoning attack requires poisoning a significant proportion of normal training data to degrade the normal test performance. In contrast, a backdoor attack often only allows modifications on a small subset of training data to maintain good performances on the normal tasks (Li et al., 2020).
Clean-sample backdoor attacks Unlike perturbation-based backdoor attacks (PBA) which modify both the features and labels during the training and the test stage, clean-sample backdoor attacks (Bagdasaryan et al., 2020) (SBA) only poison the labels in the training stage. On the one hand, SBA do not require accessing and changing the features in both training and test stages and therefore are practically stealthier than PBA. On the other hand, SBA only poison the labels of clean data and hence are less powerful than PBA. For example, clean-sample backdoor attacks typically can not simultaneously achieve perfect accuracy on both clean and backdoored test data.
Multi-task learning Multi-task learning (MTL) (Baxter, 2000; Li et al., 2014; Xue et al., 2007; Ruder, 2017) refers to learning several different tasks via a single model. MTL is often implemented in hard and soft parameter sharing. For soft-parameter sharing, all the parameters are private and specific to different tasks. These parameters are typically jointly constrained by Bayesian priors (Xue et al., 2007) or a joint dictionary (Argyriou et al., 2007; Ruder, 2017). In hard-parameter sharing, each task shares some parameters and has task-specific parameters. Our work falls under the umbrella of hard-parameter sharing. Both the normal and backdoor tasks will share the parameter of the learning model, but the backdoor task owns the selection parameter privately. The proposed DLP can be considered a particular type of MTL because there are two goals that are learned jointly with hard parameter sharing.
3 DLP
We consider a classification scenario, where x ∈ X ⊆ Rd, y ∈ Y = [1, . . . ,K], ỹ ∈ Y , denote the features, label, and backdoor target label, respectively. We denote {(xi, yi)}ni=1 as i.i.d training data from a distribution PX,Y . Let fβ : X → Y denote the classifier parameterized by β ∈ Rp. We let ℓ(·, ·) denote the loss function that evaluates the discrepancy between the true label and predicted label. We use ∥ · ∥1 , ∥ · ∥2 and ∥ · ∥∞ to denote the ℓ1, ℓ2 and ℓ∞ vector norm respectively.
3.1 THREAT MODEL
Attacker’s Capacities We consider a clean-sample backdoor attack scenario where an attacker can only access and modify the training labels, but not the training features. We emphasize that the number of training data will remain the same after poisoning.1 Besides, the attacker has control over the training process. However, the attacker does not have control over any procedures in the test stage, including modifying data and deploying the model on the cloud. The discussed scenario can happen in several real-world applications, e.g., “Outsourcing Training” (Gao et al., 2020). Generally speaking, the threat model of our method poses considerably weaker requirements than perturbation-based backdoor attacks, and it is therefore stealthier than current perturbation-based backdoor attacks.
3.2 PROPOSED METHOD
We first briefly outline the overall process of clean-sample backdoor attacks in the followings. In the training stage, attackers first relabel a subset of clean data (denoted as DB) to attacker-specified target label(s), e.g., relabeling images of ‘green cars’ as ‘frogs’. Next, attackers train a classifier to learn those poisoned data (DB) and the remaining clean data. During the test stage, attackers do not need to modify the test data to launch backdoor attacks, unlike perturbation-based attacks where the attacker needs to patch the test data to launch backdoor attack. The reason is that any test input with the same features as DB, e.g., images of ‘green cars’, is expected to be automatically predicted as the attacker-specified target label(s) by the backdoored model.
The most challenging part in clean-sample attacks is to effectively and efficiently select the features in DB to serve as backdoor triggers. Current methods apply the semantic pattern of green cars as backdoor triggers (Bagdasaryan et al., 2020). But the use of semantic patterns as backdoor triggers could be possibly limited in terms of effectiveness and efficiency, such as a restricted number of backdoor data and ineffective selections of backdoor triggers (see Section 1 for detailed discussions). To solve those issues, we propose a data-driven backdoor selection mechanism that enables attackers to select any number of backdoor data effectively. To be precise, the attacker chooses backdoor data via a scoring map that takes features as input and outputs a score
gW : X → [0, 1],
with W ∈ Rq being the parameter. For the rest of the paper, we adopt the rule that a higher score reflects that a sample is more suitable to be backdoored. Intuitively, one can treat the selection mechanism gW as a soft binary classifier for deciding whether an input should be selected as a backdoor candidate. To select backdoor data at any attacker-specified level, the attacker can first sort the scores of all the samples, namely {gW (xi)}ni=1, in descending order, and then pick any attacker-specified quantile of data as backdoor samples.
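As one possible instantiation (the paper's experiments use a two-layer network for g_W), the following PyTorch sketch defines a scorer with outputs in [0, 1] together with a top-m selection of backdoor candidates; layer sizes and names are illustrative assumptions, not the paper's code.

```python
# Sketch of a scoring mechanism g_W and score-based selection of backdoor candidates.
import torch
import torch.nn as nn

class Scorer(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):                    # one score g_W(x) in [0, 1] per sample
        return self.net(x.flatten(1)).squeeze(-1)

def select_backdoor_candidates(scorer, x, y, target_label, m):
    scores = scorer(x)
    scores = scores.masked_fill(y == target_label, -1.0)  # cannot relabel a sample to its own class
    return torch.topk(scores, m).indices                   # indices of the m highest-scoring samples
```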
Next, we will elaborate on how to incorporate the proposed scoring method into the training pipeline. Recall that one of the attackers' goals is to obtain high accuracy on backdoor data. To fulfill this goal, the attacker should select a backdoor scoring mechanism g_W such that the following empirical backdoor risk is minimized for given data {(x_i, y_i)}_{i=1}^n, model β, and a backdoor label ỹ ∈ Y:
\[ \tilde R_n(W, \beta) := \frac{1}{n}\sum_{i=1}^n \mathbf 1\{y_i \neq \tilde y\}\, g_W(x_i)\, \ell(f_\beta(x_i), \tilde y), \tag{1} \]
subject to (C): \(\sum_{j=1}^n \mathbf 1\{y_j \neq \tilde y\}\, g_W(x_j) = m\) and \(g_W(x_j) \in \{0, 1\}\) for j = 1, …, n. Note that the indicators 1{y_i ≠ ỹ} for i = 1, …, n are included in the constraint. These terms are applied because we cannot backdoor (relabel) a sample to its original ground-truth label class. For example, it is meaningless to relabel an image of a cat as a cat for a backdoor attack.
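For illustration, the empirical backdoor risk of Eq. (1) can be computed as below, assuming `model` plays the role of f_β, `scorer` returns per-sample g_W values, and cross-entropy stands in for the generic loss ℓ; this is a sketch, not the paper's implementation.

```python
# Sketch of the empirical backdoor risk in Eq. (1).
import torch
import torch.nn.functional as F

def empirical_backdoor_risk(model, scorer, x, y, yt):
    to_target = F.cross_entropy(model(x), torch.full_like(y, yt), reduction="none")  # l(f(x_i), y~)
    mask = (y != yt).float()                                                          # 1{y_i != y~}
    return (mask * scorer(x) * to_target).mean()                                      # (1/n) sum over samples
```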
As is generally accepted in the backdoor literature, attackers should simultaneously achieve high clean accuracy and high backdoor accuracy. To meet this requirement, we use a multi-task learning formulation with a weighted summation to design backdoor attacks. The first task is associated with the normal learning task, and the second task corresponds to the backdoor task. Given normal training data {(x_i, y_i)}_{i=1}^n and a backdoor target label ỹ, the DLP backdoor attack is a pair of minimizers (W̃, β̃) of the following:
\[ \min_{W \in \mathbb R^q,\, \beta \in \mathbb R^p} \;\; \frac{1}{n}\sum_{i=1}^n \ell(f_\beta(x_i), y_i) + \frac{\lambda}{m}\sum_{j=1}^n \mathbf 1\{y_j \neq \tilde y\}\, g_W(x_j)\, \ell(f_\beta(x_j), \tilde y) \tag{2} \]
subject to (C): \(\sum_{j=1}^n \mathbf 1\{y_j \neq \tilde y\}\, g_W(x_j) = m\) and \(g_W(x_j) \in \{0, 1\}\) for j = 1, …, n,
where m is the number of backdoor samples and λ > 0 is a regularizing coefficient.
Remark 1. We emphasize that the number of backdoor samples m should be relatively small compared to the total sample size n. An excessive number of backdoor samples will eventually lead to a trivial classifier for normal test data, so the first goal of backdoor attacks will not be met.
1 This is fundamentally different from perturbation-based attacks, where backdoor training data are injected into the clean training data and the total number of training samples therefore increases.
Finally, we discuss the test pipeline of DLP. Given an attacker-specified number of backdoor data m and a trained scoring mechanism g_W, we first sort the training scores {g_W(x_i)}_{i=1}^n in descending order and set g_W(x_i)_{(m)}, namely the m-th largest score, as the threshold for selecting test backdoor data. For a test input x, if its score g_W(x) is greater than g_W(x_i)_{(m)}, then it will be marked as a backdoor sample.
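The test-time rule just described can be written in a few lines, assuming `train_scores` holds the g_W values on the training set; the names are illustrative.

```python
# Sketch of the test-time marking rule: compare test scores against the m-th largest training score.
import torch

def backdoor_mask(scorer, x_test, train_scores, m):
    threshold = torch.topk(train_scores, m).values.min()   # the m-th largest training score
    return scorer(x_test) > threshold                       # True -> marked as a backdoor sample
```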
4 CONTINUOUS IMPLEMENTATION AND THEORETICAL RESULTS
4.1 CONTINUOUS IMPLEMENTATION
Directly solving problem (2) is challenging. The main difficulty comes from the fact that {g_W(x_i)}_{i=1}^n are required to take values in a discrete set, which is not directly compatible with the popular gradient-based optimization framework. To tackle this issue, we first consider an unconstrained version of problem (2) and then propose a regularization term that enforces the selection mechanism to fulfill the constraint (C). In particular, we consider
\[ P_m(W) = \Big(\sum_{j=1}^n \mathbf 1\{y_j \neq \tilde y\}\, g_W(x_j) - m\Big)^2 + \sum_{j=1}^n g_W(x_j)\big(1 - g_W(x_j)\big), \]
where m is an attacker-specified number. It is straightforward to observe that the proposed term P_m(W) equals zero if and only if the constraint (C) is satisfied. In other words, the proposed regularization term leads a selection mechanism to satisfy the constraint (C) if and only if it is exactly minimized. Consequently, we now propose to consider the following continuous and unconstrained minimization problem:
\[ \min_{W, \beta} \;\; \frac{1}{n}\sum_{i=1}^n \ell(f_\beta(x_i), y_i) + \frac{\lambda}{m}\sum_{j=1}^n \mathbf 1\{y_j \neq \tilde y\}\, g_W(x_j)\, \ell(f_\beta(x_j), \tilde y) + \frac{\tau}{n} P_m(W), \tag{3} \]
where λ, τ > 0 are tuning parameters. One can apply classical gradient-based methods to solve this problem; nevertheless, classical optimization methods may run into convergence issues. We include a rigorous description of these issues, and algorithms to address them, in the appendix.
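A rough PyTorch sketch of the relaxed objective in Eq. (3) is given below, again with cross-entropy standing in for ℓ and with `scorer` outputting g_W values in [0, 1]; the hyperparameter names mirror λ, τ, and m, and everything else is an illustrative assumption rather than the paper's code.

```python
# Sketch of the continuous DLP objective of Eq. (3): normal risk + backdoor risk + constraint penalty.
import torch
import torch.nn.functional as F

def dlp_objective(model, scorer, x, y, yt, lam, tau, m):
    n = x.size(0)
    logits = model(x)
    normal_risk = F.cross_entropy(logits, y)                                 # (1/n) sum l(f(x_i), y_i)
    mask = (y != yt).float()                                                 # 1{y_j != y~}
    s = scorer(x)                                                            # g_W(x_j) in [0, 1]
    to_target = F.cross_entropy(logits, torch.full_like(y, yt), reduction="none")
    backdoor_risk = lam / m * (mask * s * to_target).sum()                   # second term of Eq. (3)
    penalty = ((mask * s).sum() - m) ** 2 + (s * (1 - s)).sum()              # P_m(W)
    return normal_risk + backdoor_risk + tau / n * penalty
```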
4.2 THEORETICAL RESULTS
In this section, we show that, under reasonable conditions, the proposed DLP leads to an attack that satisfies the two backdoor goals mentioned in the last section. All proofs are included in the supplement due to the page limit. We build our theoretical results on the classical notion of Rademacher complexity from learning theory. The (empirical) Rademacher complexity of a function class F with respect to a probability distribution P over an input space X, for an i.i.d. sample {x_i}_{i=1}^n of size n, is \(\mathrm{Rad}_n(\mathcal F) := n^{-1} E_\sigma\big[\sup_{f\in\mathcal F}\sum_{i=1}^n \sigma_i f(x_i)\big]\), where the inner expectation is taken over σ = {σ_1, σ_2, …, σ_n}, independent random variables following the Rademacher distribution, i.e., P(σ_i = 1) = P(σ_i = −1) = 1/2.
Next, we present generalization bounds for both the normal and the backdoor tasks. We consider a binary classification problem with label set Y := {1, −1} and backdoor target label ỹ = 1. The test performances are evaluated through \(\tilde R(\beta, W) := E_{P_{X,Y}} \mathbf 1\{Y \neq \tilde y\}\, g_W(X)\, \ell(f_\beta(X), \tilde y)\) and \(R(\beta) = E_{P_{X,Y}} \ell(f_\beta(X), Y)\), respectively. Denote by (W̃, β̃) a backdoor attack obtained by solving problem (3). For ease of notation, we define two new families of functions, namely \(\mathcal H_\beta = \{\ell(f_\beta(X), Y) \mid \beta \in \mathbb R^d\}\) and \(\mathcal G_{W,\beta} = \{\mathbf 1\{Y \neq \tilde y\}\, g_W(X)\, \ell(f_\beta(X), Y) \mid \beta \in \mathbb R^d, W \in \mathbb R^p\}\). Additionally, we will need the following assumption regarding the loss function.
Assumption 1. The loss function ℓ(·, ·) is uniformly upper bounded by B.
Theorem 1. Suppose that Assumption 1 holds. The following hold with probability at least 1 − 2δ:
• Gap on the normal task: \(R(\tilde\beta) - R(\hat\beta) \le \lambda B + 4\,\mathrm{Rad}_n(\mathcal H_\beta) + 2 n^{-1/2} B \sqrt{\log(1/\delta)}\), where \(\hat\beta := \arg\min_\beta n^{-1}\sum_{i=1}^n \ell(f_\beta(x_i), y_i)\) is the normal classifier;
• Gap on the backdoor task: \(\tilde R(\tilde W, \tilde\beta) - \tilde R(W, \bar\beta) \le 4\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) + 2 n^{-1/2} B \sqrt{\log(1/\delta)} + 2m\tau/\lambda + Bm/(n\lambda)\), where (W, β̄) is the optimal backdoor choice, i.e., the minimizer of (1).
The first bound is about the gap in normal test performance between the DLP classifier and the normal classifier. The second bound describes the difference in backdoor test performance between the DLP and the optimal backdoor choice (defined in Section 3.1). The task-priority hyperparameter λ appears in both bounds. A small λ tends to generate a backdoor classifier fβ̃ that resembles the normal classifier fβ̂ more closely. In contrast, an excessively large λ pushes the backdoor classifier fβ̃ to behave similarly to the optimal backdoor classifier fβ̄.
The two upper bounds also depend on the Rademacher complexity of the function classes. If the Rademacher complexity of the function class is bounded, we can obtain vanishing upper bounds by appropriately selecting the hyperparameters. We provide an informal result for linear classifiers in the following. The formal statements and the proof are included in the supplement.
Theorem 2 (Informal). Suppose that fβ is a linear classifier. Then, under specific assumptions and appropriately chosen hyperparameters, the gaps on the normal and backdoor tasks converge to zero with high probability as the sample size n → ∞.
5 EXPERIMENTAL STUDY
This section systematically evaluates the proposed method on synthetic data and benchmark datasets. Our empirical study shows the effectiveness of the proposed DLP and improved performances compared with state-of-the-art (SOTA) methods. Interestingly, we demonstrate that the backdoor samples selected by the DLP are semantically close to those from the target labels, which is aligned with an existing conjecture (Bagdasaryan et al., 2020). Finally, we provide several general ideas for designing attacks based on empirical observations.
We will use two test measurements throughout this section. The normal accuracy (abbreviated as AccN) is the test accuracy on the normal data, and the backdoor accuracy (abbreviated as AccB) represents the test accuracy on the selected backdoor data. The method for selecting backdoor data is described in the last paragraph in Section 3.
5.1 SYNTHETIC DATA
We evaluate the proposed method on synthetic data. For the training data, we generate 500 i.i.d. samples from the normal distribution N([0, 3], [[2, 0], [0, 2]]) with label 0 and 500 i.i.d. samples from the normal distribution N([0, −3], [[2, 0], [0, 2]]) with label 1, respectively. For the test data, we generate 2500 i.i.d. samples for each label class. The classifier fβ is linear, and the loss function ℓ is the logistic loss. Also, the backdoor target ỹ is 1, and the number of backdoor samples is m = 50.
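The synthetic setup above can be reproduced with a few lines of NumPy, assuming an arbitrary random seed; this sketch only generates the data and leaves the logistic-loss classifier to any standard solver.

```python
# Sketch of the two-Gaussian synthetic dataset used in the synthetic-data experiment.
import numpy as np

rng = np.random.default_rng(0)

def make_split(n_per_class):
    x0 = rng.multivariate_normal([0.0, 3.0], 2.0 * np.eye(2), size=n_per_class)    # label 0
    x1 = rng.multivariate_normal([0.0, -3.0], 2.0 * np.eye(2), size=n_per_class)   # label 1
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n_per_class, dtype=int), np.ones(n_per_class, dtype=int)])
    return x, y

x_train, y_train = make_split(500)
x_test, y_test = make_split(2500)
```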
Test Performance We summarize test performances under different λ in Table 2. By setting λ to 0, the objective is to minimize the normal empirical risk only, and hence the resulting classifier is the normal classifier. Similarly, by setting λ to be sufficiently large, e.g., ∞, the resulting classifier is the optimal backdoored classifier. For a wide range of λ, the proposed DLP delivers comparably high accuracy on both normal
and backdoor data compared with the normal and backdoored classifier, respectively.
Selected backdoor sample We plot the normal training data (in orange dots and blue triangles), the selected training backdoor data (in green triangles), the normal classifier (in solid red line), and the backdoor classifier (in gray dash-dot line) with λ = 1 in Fig. 2. The selected backdoor points stay close to the normal classifier, and the DLP classifier moves down compared to the normal classifier. Such an observation can be interpreted in the following way. By relabelling those points near the decision boundary, the attacker slightly shifts the normal classifier to account for those backdoored samples without severely degrading the normal test accuracy. This phenomenon has also been independently discovered and investigated in the early work of poisoning support vector machines (Biggio et al., 2012) and the active learning literature (Settles, 2009), where the goal is to find the most effective and efficient samples to train a classifier.
5.2 REAL-WORLD DATASETS
Tasks We consider the following tasks: (I) 10-class classification problem on the Fashion-MNIST (Xiao et al., 2017) with a LeNet (LeCun et al., 2015), (II) 10-class classification problem on the CIFAR10 (Krizhevsky et al., 2009) data with a Resnet18 (He et al., 2016), and (III) classification problem on GTSRB (Stallkamp et al., 2011). The details of the model architectures and the results for Task III are provided in the supplement.
Backdoor Setup For task I (II), we set the number of backdoor samples m to be 600 (500), and 6000 (5000), which accounts for 1% and 10% of the total training data respectively. For both tasks, the selection mechanism is a two-layer neural network. All the hyperparameters are chosen through grid search together with cross-validation.
Single Task Performance For task I, the AccN of the normal classifier is 92.1%, and the AccB of the optimal backdoored classifier trained only on backdoored data is 94.2%. For task II, the AccN of the normal classifier is 92.1%, and the AccB of the optimal backdoored classifier is 93.0%.
5.2.1 TEST PERFORMANCE
For every backdoor target label, we summarize the test performances of DLP in Table 3 for Task I with m = 600, and for Task II with m = 500. We include the results for other choices of m in the next subsection. Both the normal and backdoor test accuracy of our DLP are comparably high relative to the single-task performance, which reflects the effectiveness of the proposed DLP. In task I (Fashion-MNIST), for backdoor target labels ‘Sandal’, ‘Sneaker,’ and ‘Ankle Boot’ (referred to as ‘Boot’), both the AccN and AccB exceed the test performances of other backdoor labels by a great margin. This observation implies that labels of the shoe category are the most effective for backdooring. Such a result may be because images of ‘Sandal’, ‘Sneaker,’ and ‘Boot’ are semantically similar.
5.2.2 SELECTED BACKDOOR SAMPLES
For task I, we demonstrate the categories of the top 1% selected backdoor training samples in Fig. 3a, and several images of the top-3-category selected training backdoor samples associated with backdoor label ‘Sneaker’ , ‘Trouser’ and ‘Shirt’ in Fig. 3b.
Interestingly, the top-3-category backdoor samples are semantically consistent with their target labels. For example, the images of ‘Sandal’ and ‘Boot’ resemble the images of ‘Sneaker’ more than those of any other category in the Fashion-MNIST dataset. Additionally, images from ‘Dress’ and ‘T-Shirt’ look like those of ‘Shirt’. Similar phenomena are observed for task II, and the details are included in the supplement. Such observations provide concrete evidence to support the conjecture that backdoor images should be similar to their backdoor label category (Bagdasaryan et al., 2020). The similarity is measured in terms of semantic meanings in our case. Based on the discussion, we suggest that, for clean-sample attacks, one should select ‘similar’ images as backdoors.
5.2.3 EFFECTIVENESS UNDER VARYING BACKDOOR SIZES
We demonstrate the merit of our proposed method on effectively selecting any number of backdoor sample in this section. In particular, we test on several different ratios of the backdoor sample size and summarize the results in Table 4. From the threat model in Section 3.1, the total sample size remains the same after poisoning. Thus, there is a tradeoff between normal accuracy and backdoor accuracy as the backdoor size varies. For example, if the attacker flips 99% of the total training data, then the backdoored classifier will delivery trivial test accuracy on clean test data. Regarding to the empirical results, we observed that the proposed method deliveries both high benign and backdoor accuracy under a reasonable range choices of m, e.g., from 1% to 25%.
5.2.4 COMPARISON WITH STATE-OF-THE-ART METHODS & DEFENSES
We first compare the proposed DLP with two state-of-the-art methods, and then test the performances of DLP under certain defenses. As mentioned in Section 3, the threat model of our clean-sample attack is weaker than that of perturbation-based attacks. As a result, the most suitable method for comparison is the SOTA clean-sample attack (“Semantic attack”). For the sake of completeness, we also compare our method with the more powerful PBA (“edge-case attack”). Empirically, we find that the DLP outperforms the SOTA clean-sample attack and is comparable with the more powerful edge-case attack in some cases.
Task I Due to the low-resolution property of the Fashion-MNIST dataset, one may not obtain fine-grained categories of images other than the original ten classes. Hence, we will compare our proposed method with two SOTA methods on a binary classification task. The new task is to predict if a fashion object is a piece of clothing with label 0 (including ‘Top’, ‘Trouser’, ‘Pullover’, ‘Dress’, ‘Coat’, and ‘Shirt’) or an accessory with label 1 (including ‘Sandal’, ‘Sneaker’, ‘Bag’, ‘Boot’). We selected a LeNet classifier whose normal accuracy is 97.2%.
We include the backdoor data preparation process for both “edge-case attack” and “semantic attack” in the appendix. The low-resolution property of the Fashion-MNIST dataset leads the above two SOTA methods to coincide with each other. In the following, we set the backdoor target label to be 0. Then, we tested all the possible categories for backdoor samples and summarized the result in Table. 5. The AccN and AccB of the proposed DLP are 94.1% and 94.7%, respectively. We observe that, for a given AccN of 93%, the proposed DLP obtains the highest AccB against the SOTA methods. Alternatively, for a given AccB of 92%, the AccN of the proposed DLP is slightly lower than the SOTA methods (94.5%). The result is because edge-case attacks modify the training data, which should be more potent than our proposed methods. We follow the same setup above to conduct experiments on Task II and Task III. We observe similar results as in the Fashion-MNIST, and include the details in the supplement.
Defenses Because of the weak threat model of our method, many existing defenses against perturbation-based attacks are inappropriate for defending against our attacks. But defenses against label flipping attacks, e.g., label sanitization (Paudice et al., 2018) can be suitably tailored to defend against our method. We found that those defenses are ineffective against our methods. Details are included in the supplementary material.
6 CONCLUSION
In this work, we proposed a new type of clean-sample backdoor attack known as DLP. The proposed DLP allows the attacker to choose any number of backdoor samples effectively in terms of test performance. The key ingredient in developing the DLP is a multi-task learning formulation, enabling the attacker to perform well on both normal and backdoor tasks. There are several interesting future problems. One direction is to characterize the optimal backdoor samples theoretically. Finally, on the defense side, it is necessary to create new defense mechanisms to defend against the DLP. The supplementary material contains proofs and more experimental studies.
Appendix for DLP: Data-Driven Label-Poisoning Backdoor Attack
We include the proof of Theorem 1 and the formal statement of Theorem 2 (and its proof) in Section A. The pseudo-code of algorithms for implementing DLP and the convergence analysis are presented in Section B. Complementary results of the experimental study are included in Section C. Finally, we also present additional experimental results, including investigations of the effect of different backdoor sample sizes and results on more complex datasets.
A PROOF
A.1 PROOF OF THEOREM 1
We will rely on the following two lemmas.
Lemma 1 (Boucheron et al., 2005). For any β ∈ R^d, we have with probability at least 1 − δ,
\[ |R_n(\beta) - R(\beta)| \le 2\,\mathrm{Rad}_n(\mathcal H_\beta) + B\sqrt{\frac{\log(1/\delta)}{2n}}. \]
Lemma 2 (Boucheron et al., 2005). For any fixed W ∈ R^p, we have with probability at least 1 − δ,
\[ |\tilde R_n(W,\beta) - \tilde R(W,\beta)| \le 2\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) + B\sqrt{\frac{\log(1/\delta)}{2n}} \]
for any β ∈ R^d.
Back to our main results, we first bound the gap on the normal task.
Invoking Lemma 1, with probability at least 1− δ, we have
\begin{align}
R(\tilde\beta) - R(\hat\beta) &\le R_n(\tilde\beta) - R(\hat\beta) + 2\,\mathrm{Rad}_n(\mathcal H_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \notag\\
&= R_n(\tilde\beta) + \tfrac{n\lambda}{m}\tilde R_n(\tilde W, \tilde\beta) + \tfrac{\tau}{n} P_m(\tilde W) - \tfrac{n\lambda}{m}\tilde R_n(\tilde W, \tilde\beta) - \tfrac{\tau}{n} P_m(\tilde W) - R(\hat\beta) + 2\,\mathrm{Rad}_n(\mathcal H_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \notag\\
&\le R_n(\hat\beta) + \tfrac{n\lambda}{m}\tilde R_n(W, \hat\beta) + \tfrac{\tau}{n} P_m(W) - \tfrac{n\lambda}{m}\tilde R_n(\tilde W, \tilde\beta) - \tfrac{\tau}{n} P_m(\tilde W) - R(\hat\beta) + 2\,\mathrm{Rad}_n(\mathcal H_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \tag{4}\\
&\le R_n(\hat\beta) - R(\hat\beta) + \tfrac{n\lambda}{m}\tilde R_n(W, \hat\beta) + 2\,\mathrm{Rad}_n(\mathcal H_\beta) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}, \tag{5}
\end{align}
where Eq. (4) holds by the definition of (W̃, β̃) (a minimizer) and Eq. (5) holds by the definition of W.
Invoking Lemma 1 on R_n(β̂) − R(β̂) in Eq. (5) with a union bound, with probability greater than 1 − 2δ we have
\[ R(\tilde\beta) - R(\hat\beta) \le \frac{n\lambda}{m}\tilde R_n(W, \hat\beta) + 4\,\mathrm{Rad}_n(\mathcal H_\beta) + 2\sqrt{\frac{\log(1/\delta)}{2n}}. \tag{6} \]
From the definition of W, there are only m non-zero terms in R̃_n(W, β̂), and hence
\[ \frac{n\lambda}{m}\tilde R_n(W, \hat\beta) = \frac{n\lambda}{m}\cdot\frac{1}{n}\sum_{i=1}^n \mathbf 1\{y_i \neq \tilde y\}\, g_W(x_i)\,\ell(f_{\hat\beta}(x_i), y_i) \le \frac{n\lambda}{m}\cdot\frac{m}{n}\, B = \lambda B, \tag{7} \]
where B is the uniform upper bound on the loss function.
Combining Eq. (7) and Eq. (6), the following holds with probability at least 1 − 2δ:
\[ R(\tilde\beta) - R(\hat\beta) \le \lambda B + 4\,\mathrm{Rad}_n(\mathcal H_\beta) + 2B\sqrt{\frac{\log(2/\delta)}{n}}. \]
Regarding the gap on the backdoor task, we first rewrite
\[ \tilde R(\tilde W, \tilde\beta) - \tilde R(W, \bar\beta) = \tilde R(\tilde W, \tilde\beta) + \frac{m}{n\lambda} R_n(\tilde\beta) + \frac{m\tau}{n^2\lambda} P_m(\tilde W) - \frac{m\tau}{n^2\lambda} P_m(\tilde W) - \frac{m}{n\lambda} R_n(\tilde\beta) - \tilde R(W, \bar\beta). \tag{8} \]
Invoking Lemma 2, with probability at least 1 − δ the following holds:
\begin{align}
\tilde R(\tilde W, \tilde\beta) - \tilde R(W, \bar\beta) &\le \tilde R_n(\tilde W, \tilde\beta) + \tfrac{m}{n\lambda} R_n(\tilde\beta) + \tfrac{m\tau}{n^2\lambda} P_m(\tilde W) - \tfrac{m\tau}{n^2\lambda} P_m(\tilde W) - \tfrac{m}{n\lambda} R_n(\tilde\beta) - \tilde R(W, \bar\beta) + 2\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \notag\\
&\le \tilde R_n(W, \bar\beta) + \tfrac{m}{n\lambda} R_n(\bar\beta) + \Big|\tfrac{m\tau}{n^2\lambda} P_m(W) - \tfrac{m\tau}{n^2\lambda} P_m(\tilde W)\Big| - \tfrac{m}{n\lambda} R_n(\tilde\beta) - \tilde R(W, \bar\beta) + 2\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}} \tag{9}\\
&\le \tilde R_n(W, \bar\beta) + \tfrac{m}{n\lambda} R_n(\bar\beta) + \tfrac{2m\tau}{\lambda} - \tilde R(W, \bar\beta) + 2\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) + B\sqrt{\tfrac{\log(1/\delta)}{2n}}, \tag{10}
\end{align}
where Eq. (9) holds by the definition of (W̃, β̃) (a minimizer) and Eq. (10) holds because P_m(·) is upper bounded by 2n².
Invoking Lemma 2 on R̃_n(W, β̄) − R̃(W, β̄) in Eq. (10) with a union bound, with probability greater than 1 − 2δ we obtain
\[ \tilde R(\tilde W, \tilde\beta) - \tilde R(W, \bar\beta) \le \frac{2m\tau}{\lambda} + \frac{m}{n\lambda} R_n(\bar\beta) + 4\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) + 2B\sqrt{\frac{\log(1/\delta)}{2n}}. \tag{11} \]
For the term R_n(β̄), we have
\[ \frac{m}{n\lambda} R_n(\bar\beta) = \frac{m}{n\lambda}\cdot\frac{1}{n}\sum_{i=1}^n \ell(f_{\bar\beta}(x_i), y_i) \le \frac{m}{n\lambda}\cdot\frac{Bn}{n} = \frac{Bm}{n\lambda}, \tag{12} \]
where B is the uniform upper bound on the loss function.
Combining Eq. (11) and Eq. (12), the following holds with probability at least 1 − 2δ:
\[ \tilde R(\tilde W, \tilde\beta) - \tilde R(W, \bar\beta) \le \frac{Bm}{n\lambda} + 4\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) + 2B\sqrt{\frac{\log(2/\delta)}{n}} + \frac{2m\tau}{\lambda}. \]
A.2 FORMAL STATEMENTS OF THEOREM 2
We consider linear classifiers of the form fβ(x) = β⊤x for β ∈ R^d, with ∥β∥1 ≤ W1. We assume ∥xi∥∞ ≤ 1 for i = 1, …, n. For the loss function, following (Boucheron et al., 2005), we take ℓ(fβ(X), Y) = ϕ(−fβ(X)Y) where ϕ : R → R+. Some common examples are ϕ(x) = log(1 + e^x) for logistic regression and ϕ(x) = max(0, x) for support vector machines.
Assumption 1. ϕ(·) is uniformly upper bounded by B.
Assumption 2. ϕ(·) is Lipschitz with constant L, i.e., ∥ϕ(x) − ϕ(y)∥ ≤ L∥x − y∥.
Assumption 3. t(Z) = g_W(Z)ϕ(Z) is Lipschitz with constant L_1, i.e., ∥t(x) − t(y)∥ ≤ L_1∥x − y∥.
Theorem 3 (Linear Case). Suppose that Assumptions 1, 2, and 3 hold. The following hold with probability at least 1 − 2δ:
1. Gap on the normal task:
\[ R(\tilde\beta) - R(\hat\beta) \le \lambda B + 4 L W_1 \sqrt{\frac{\log d}{n}} + 2B\sqrt{\frac{\log(2/\delta)}{n}}; \]
2. Gap on the backdoor task:
\[ \tilde R(\tilde W, \tilde\beta) - \tilde R(W, \bar\beta) \le \frac{Bm}{n\lambda} + 4 L_1 W_1 \sqrt{\frac{\log d}{n}} + 2B\sqrt{\frac{\log(2/\delta)}{n}} + \frac{2m\tau}{\lambda}. \]
Corollary 1. Under the same assumptions of Theorem 3, by setting λ = Θ(1/ log n), m = Θ( √ n) and τ = Θ(1/n), two gaps will converge to zero with high probability as n → ∞.
Proof of Theorem 3 and Corollary 1. We will be using the following two lemmas.
Lemma 3 (Boucheron et al., 2005). For any β ∈ R^d, we have with probability at least 1 − δ,
\[ |R_n(\beta) - R(\beta)| \le 2 L W_1 \sqrt{\frac{\log d}{n}} + B\sqrt{\frac{\log(1/\delta)}{2n}}. \]
Lemma 4. For any fixed W ∈ R^p, we have with probability at least 1 − δ,
\[ |\tilde R_n(W,\beta) - \tilde R(W,\beta)| \le 2 L_1 W_1 \sqrt{\frac{2\log d}{n}} + B\sqrt{\frac{\log(1/\delta)}{2n}} \]
for any β ∈ R^d.
The proof of Lemma 4 is attached at the end of proof of the main theorem.
The proof of Theorem 3 directly follows from Theorem 1 by using explicit forms of the Rademacher Complexity in Lemma 3 and Lemma 4.
Regarding the proof of Corollary 1, it is straightforward to check that every term in the two gaps of Theorem 3 vanishes as n → ∞ with the specified hyperparameters.
A.3 PROOF OF LEMMA 4
Proof. We will use the following lemma.
Lemma 5 (Boucheron et al., 2005). Let F be the class of linear predictors with the ℓ1-norm of the weights bounded by W_1. Assume also that ∥x∥_∞ ≤ X_∞ with probability one. Then
\[ \mathrm{Rad}_n(\mathcal F) \le X_\infty W_1 \sqrt{\frac{2\log d}{n}}, \]
where d is the dimension of the data and n is the sample size.
Back to the main proof, recall that by definition,
\[ \tilde R_n(W,\beta) = \frac{1}{n}\sum_{j=1}^{n} \mathbf 1\{y_j \neq 1\}\, g_W(x_j)\,\phi(\tilde y f_\beta(x_j)). \]
Since ϕ is uniformly upper bounded by B and g_W is upper bounded by one, Lemma 2 gives that
\[ |\tilde R_n(W,\beta) - \tilde R(W,\beta)| \le 2\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) + B\sqrt{\frac{\log(1/\delta)}{2n}} \]
holds with probability at least 1 − δ, where \(\mathrm{Rad}_n(\mathcal G_{W,\beta}) = E_\sigma\big[\sup_{\beta\in\mathbb R^d} \tfrac{1}{n}\sum_{j=1}^{n} \sigma_j \mathbf 1\{y_j \neq 1\}\, g_W(x_j)\,\phi(f_\beta(x_j))\big]\) with P(σ_i = 1) = P(σ_i = −1) = 0.5 for i = 1, …, n. Without loss of generality, we assume that there are s samples with ground-truth label 1, indexed n − s + 1, …, n. Thus,
\begin{align}
2\,\mathrm{Rad}_n(\mathcal G_{W,\beta}) &= 2 E_\sigma\Big[\sup_{\beta\in\mathbb R^d} \frac{1}{n}\sum_{j=1}^{n-s} \sigma_j\, g_W(x_j)\,\phi(f_\beta(x_j))\Big] \tag{13}\\
&\le \frac{2L_1}{n}\, E_\sigma\Big[\sup_{\beta\in\mathbb R^d} \sum_{j=1}^{n-s} \sigma_j f_\beta(x_j)\Big] \tag{14}\\
&\le 2 L_1 W_1 \sqrt{\frac{2\log d}{n}}, \tag{15}
\end{align}
where Eq. (14) holds by the Lipschitz composition principle (Boucheron et al., 2005) and Eq. (15) follows from Lemma 5.
B ALGORITHMS
In this section, we present algorithms for implementing the proposed DLP. We adopt state-of-the-art techniques from the MTL literature (Sener & Koltun, 2018; Kaiser et al., 2017; Zhou et al., 2017). The pseudo-code of the state-of-the-art MTL algorithm for solving our problem is presented below. The main idea of the algorithm is as follows. We first run gradient descent on each task in Lines 2-3. Then, to ensure that the overall objective value is decreased, we further update the shared parameter β in Lines 4-5. The rationale for obtaining a particular α that ensures the decrease of the overall objective value can be found in (Sener & Koltun, 2018). Regarding the convergence analysis, under reasonable conditions, Algorithm 1 is shown to find Pareto stationary points or local/global optimal points. We refer the interested readers to the references (Sener & Koltun, 2018; Kaiser et al., 2017) for details.
Algorithm 1
Input: model parameter β^1, selection parameter W^1, hyperparameters λ, τ, and stepsizes {η_t}_{t=1}^{T−1}
1: for t = 1, …, T − 1 do
2:   β^t ← β^t − η_t ∇_β L1(β^t)          // L1(β) := (1/n) Σ_{i=1}^n ℓ(f_β(x_i), y_i)
3:   W^{t+1} ← W^t − η_t ∇_W L2(W^t, β^t)   // L2(W, β) := (τ/n) P_m(W) + (λ/m) Σ_{j=1}^n 1{y_j ≠ ỹ} g_W(x_j) ℓ(f_β(x_j), ỹ)
4:   Obtain α_1^t and α_2^t with Algorithm 2
5:   β^{t+1} ← β^t − η (α_1^t ∇_β L1(β^t) + α_2^t ∇_β L2(W, β))
6: end for
Output: β^T, W^T
C COMPLEMENTARY EXPERIMENTAL RESULTS
We provide detailed experimental results that are omitted in the main text due to the page limit in this section.
Algorithm 2 Weight Solver
Input: initialization α = (α_1, α_2) = (1/2, 1/2), W and β
1: Compute M s.t. M_{1,2} = (∇_β L1(β))^⊤ (∇_β L2(W, β)), M_{2,1} = (∇_β L2(W, β))^⊤ (∇_β L1(β))
2: Compute t̂ = argmin_r Σ_t α^t M_{rt}
3: Compute γ̂ = argmin_γ ((1 − γ)α + γ e_{t̂})^⊤ M ((1 − γ)α + γ e_{t̂})
4: α* = (1 − γ̂)α + γ̂ e_{t̂}
Output: α*
C.1 SELECTED SAMPLES FOR CIFAR10
For task II, we demonstrate the categories of the top 1% selected backdoor training samples in Fig. 4a, and several images of the top-3-category selected backdoor training samples associated with the backdoor labels ‘Cat’, ‘Deer’ and ‘Truck’ in Fig. 4b. Similar to task I in the main text, the top-3-category backdoor samples are semantically consistent with their target labels. For example, the images of ‘Deer’ and ‘Dog’ resemble the images of ‘Horse’ more than those of any other category in the CIFAR10 dataset. Such observations provide concrete evidence to support the conjecture that backdoor images should be similar to their backdoor label category (Bagdasaryan et al., 2020). The similarity is measured in terms of semantic meanings in our case.
C.2 TEST PERFORMANCES ON GTSRB
In this section, we test the proposed method on the GTSRB dataset. There are in total 43 types of traffic signs (labels) in GTSRB, and many of them are of the same type, e.g., “20 speed” and “30 speed”. For the sake of clarity, we only select one of each type for testing.
C.3 COMPARISON WITH STATE-OF-THE-ART ON CIFAR10
For semantic backdoor attacks, following the framework in (Bagdasaryan et al., 2020), we relabel a whole category of the training data with the same semantic meaning. For example, we can relabel images of Top (with ground-truth label 1) with label 0. For edge-case attacks, we follow the ideas in (Wang et al., 2020) to first create a new training dataset by excluding the samples of one whole category. Then, we relabel the samples of the previously excluded sub-category and add them back to the previous training data to form the new training set.
We follow the same setup for Task II in the main text. The two SOTA methods to be compared are listed below. For semantic attacks, the work of (Bagdasaryan et al., 2020) relabeled a whole category of the training data. For edge-case attacks, the authors in (Wang et al., 2020) first relabeled the images of Southwest Airplanes (NOT in CIFAR10) as Truck and then injected the relabeled sample-label pairs into the training data. For completeness, we also test for different backdoor labels as summarized in Table 6. The proposed DLP consistently outperforms the SOTA clean-sample attack for all backdoor target labels. Also, the proposed DLP is comparable with the more powerful edge-case attack in some cases, e.g., a backdoor label of ‘Dog’.
C.4 EXPERIMENTAL RESULTS UNDER DEFENSE MECHANISMS
In this section, we test the proposed method under certain defenses. As mentioned in the main text, because of the weak threat model of (centralized) clean-sample backdoor attacks, many existing defenses against perturbation-based attacks are inappropriate for defending against our attacks.
But defenses against label flipping attacks, e.g., label sanitization (Paudice et al., 2018) and label certification (Rosenfeld et al., 2020), can be suitably tailored to defend against our method. The results are summarized in Fig. 6 and Fig. 7, respectively. It can be concluded from the figures that the proposed method can escape the two defenses. | 1. What is the focus of the paper regarding backdoor attacks?
2. What are the strengths and weaknesses of the proposed method?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the concerns regarding the presentation of the paper, particularly in explaining the proposed method?
5. Can you elaborate on why Assumption 1 is practical?
6. How does the proposed clean-sample threat model differ from existing methods in terms of requirements?
7. What are the limitations of the experimental results, and what baselines should be included for better comparison?
8. What are the potential transferability and computational burdens of the backdoor samples between models?
9. Should the paper discuss feature collision attacks that share similarities with the current approach?
10. Could the reviewer provide additional suggestions for improving the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a new backdoor attack. Under the proposed threat model, the attacker aims to use semantically similar images to the target class to activate the backdoor. For instance, green cars can be used as semantically similar images to frogs in CIFAR-10. As such, one may re-label those green car images into frogs to create a poisoned data set. The paper argues that under this regime, known as clean-sample backdoor attacks, the attacker requires a weaker set of requirements and can only activate the backdoor without modification of the underlying test samples. This contrasts perturbation-based backdoor attacks that need to attach triggers during test time. To find semantically similar images for a particular target class, the paper uses a score function over the entire set of data that assigns a value between 0 and 1 to any specific input. This score function is supposed to indicate which samples (that come from a non-target class) have similar features to the target class so that they can be used as backdoors. This function is trained in conjunction with the target neural network and is used to select a handful of samples as the backdoor data. Theoretically, the paper shows that as the number of training samples grows, the generalization bounds between the optimal solutions for benign and backdoor empirical risks approach zero. Experimental results over Fashion-MNIST, CIFAR-10, and GTSRB are provided.
Strengths And Weaknesses
Strengths:
The proposed method seems interesting. In particular, it is intriguing to see that one can find semantically similar images to the target class and use those samples as the backdoor activation.
The theoretical analysis seems relevant to the paper.
The experimental results, especially Figures 3 and 4, beautifully depict the overall idea of the paper.
Weaknesses:
A major concern with the current version is the presentation of the paper, especially the proposed method. Many aspects of the proposed method are left to the reader's interpretation, and the paper does not discuss them properly. These include:
In Eq. (1), for the first time the reader faces the objective function for training the scoring mechanism g_W(x_i). However, the paper does not properly discuss the objective function's meaning. In particular, what does Eq. (1) imply beyond mathematical terms? Please explain this in accessible language to the readers. Additionally, the symbol m is used in this equation for the first time. However, its definition is left toward the end of Eq. (2) on the next page.
In the definition of P_m(W), it takes the reader a while to figure out the equivalency of the minimizer of P_m(W) with the constraints in Eq. (2). The paper should mention that P_m(W) is always non-negative, and why is that the case? Discuss how the second term \(\sum_{j=1}^{n} g_W(x_j)(1 - g_W(x_j))\) is always non-negative, and remind the reader that you assumed \(0 \le g_W(x_j) \le 1\).
A similar pattern can be seen during the theoretical results in Sec. 4.2. While the theory is mathematically presented, it would be better to accompany it with further discussions to let the readers grasp what those theorems mean in simplistic language. Also, could you please elaborate on why Assumption 1 is practical?
Another critical issue with the current work is its motivations. In Sec. 3.1. it is mentioned that the proposed clean-sample threat model is weaker in terms of its requirements. However, the current attack requires being present during training, which makes it easier for the attacker. The previous backdoor poisonings could be used to poison the data only without requiring any control over the training mechanism. As such, the requirements of the current attack are not weaker than existing methods. Besides, there is a great body of literature on feature collision attacks that share many characteristics with the current approach, e.g., see [1,2]. However, none of them are mentioned and discussed in the paper.
Finally, the experimental results need more investigation and baselines. The current method is indeed inherently different from perturbation-based backdoor attacks, but comparing the proposed method against them would help the readers understand the bigger picture. More importantly, a comparison on existing baselines for the detection of backdoor samples could show how much the selected backdoor samples in the paper are successful in circumventing the existing defense mechanisms. Also, investigating the transferability of the backdoor samples between models is another important avenue. In other words, what happens if one generates a poisoned dataset for a model f_1(·) but trains a classifier f_2(·) with that data? Could the backdoors be transferred between models? Additionally, what are the proposed method's computational burdens compared to vanilla training? Finally, a detailed investigation of the effects of different hyper-parameters on the model's performance over tasks I-III is also missing.
Minor comment: The remark mentioned in footnote 1 is inaccurate. One could poison the training data with perturbation-based triggers and maintain the training data size. For instance, see [3].
[1] Shafahi, Ali, et al. "Poison frogs! targeted clean-label poisoning attacks on neural networks," NeurIPS, 2018.
[2] Saha, Aniruddha, et al. "Hidden trigger backdoor attacks," AAAI, 2020.
[3] Turner, Alexander, et al. "Label-consistent backdoor attacks," arXiv preprint arXiv:1912.02771 (2019).
Clarity, Quality, Novelty And Reproducibility
Detailed comments are given in the Strength And Weaknesses. Below is a summary:
Clarity: the paper needs some work to explain the proposed approach in an accessible language. For further info, please see the first part of the weaknesses.
Quality: as mentioned above, the paper requires more straightforward language, better motivation, and further experimentation. These issues place the paper in a moderate quality.
Novelty And Reproducibility: The paper presents an interesting attack in terms of novelty. However, different aspects of the current work need to be investigated better. Furthermore, in terms of reproducibility, the paper neither provides the code nor a detailed set of hyper-parameters for the results to be re-created. |
ICLR | Title
What they do when in doubt: a study of inductive biases in seq2seq learners
Abstract
Sequence-to-sequence (seq2seq) learners are widely used, but we still have only limited knowledge about what inductive biases shape the way they generalize. We address that by investigating how popular seq2seq learners generalize in tasks that have high ambiguity in the training data. We use four new tasks to study learners’ preferences for memorization, arithmetic, hierarchical, and compositional reasoning. Further, we connect to Solomonoff’s theory of induction and propose to use description length as a principled and sensitive measure of inductive biases. In our experimental study, we find that LSTM-based learners can learn to perform counting, addition, and multiplication by a constant from a single training example. Furthermore, Transformer and LSTM-based learners show a bias toward the hierarchical induction over the linear one, while CNN-based learners prefer the opposite. The latter also show a bias toward a compositional generalization over memorization. Finally, across all our experiments, description length proved to be a sensitive measure of inductive biases.
1 INTRODUCTION
Sequence-to-sequence (seq2seq) learners (Sutskever et al., 2014) demonstrated remarkable performance in machine translation, story generation, and open-domain dialog (Sutskever et al., 2014; Fan et al., 2018; Adiwardana et al., 2020). Yet, these models have been criticized for requiring a tremendous amount of data and being unable to generalize systematically (Dupoux, 2018; Loula et al., 2018; Lake & Baroni, 2017; Bastings et al., 2018). In contrast, humans rely on their inductive biases to generalize from a limited amount of data (Chomsky, 1965; Lake et al., 2019). Due to the centrality of humans’ biases in language learning, several works have studied inductive biases of seq2seq models and connected their poor generalization to the lack of the “right” biases (Lake & Baroni, 2017; Lake et al., 2019).
In this work, we focus on studying inductive biases of seq2seq models. We start from an observation that, generally, multiple explanations can be consistent with a limited training set, each leading to different predictions on unseen data. A learner might prefer one type of explanations over another in a systematic way, as a result of its inductive biases (Ritter et al., 2017; Feinman & Lake, 2018).
To illustrate the setup we work in, consider a quiz-like question: if f(3) maps to 6, what does f(4) map to? The “training” example is consistent with the following answers: 6 (f(x) ≡ 6); 7 (f(x) = x + 3); 8 (f(x) = 2 · x); any number z, since we always can construct a function such that f(3) = 6 and f(4) = z. By analyzing the learner’s output on this new input, we can infer its biases.
This example demonstrates how biases of learners are studied through the lenses of the poverty of the stimulus principle (Chomsky, 1965; 1980): if nothing in the training data indicates that a learner should generalize in a certain way, but it does nonetheless, then this is due to the biases of the learner. Inspired by the work of Zhang et al. (2019) in the image domain, we take this principle to the extreme and study biases of seq2seq learners in the regime of very few training examples, often as little as one. Under this setup, we propose four new synthetic tasks that probe seq2seq learners’ preferences to memorization-, arithmetic-, hierarchical- and compositional-based “reasoning”.
∗Equal contribution.
Next, we connect to the ideas of Solomonoff’s theory of induction (Solomonoff, 1964) and Minimal Description Length (Rissanen, 1978; Grunwald, 2004) and propose to use description length, under a learner, as a principled measure of its inductive biases.
Our experimental study1 shows that the standard seq2seq learners have strikingly different inductive biases. We find that LSTM-based learners can learn non-trivial counting-, multiplication-, and addition-based rules from as little as one example. CNN-based seq2seq learners would prefer linear over hierarchical generalizations, while LSTM-based ones and Transformers would do just the opposite. When investigating the compositional reasoning, description length proved to be a sensitive measure. Equipped with it, we found that CNN-, and, to a lesser degree, LSTM-based learners prefer compositional generalization over memorization when provided with enough composite examples. In turn, Transformers show a strong bias toward memorization.
2 SEARCHING FOR INDUCTIVE BIASES
To formalize the way we look for inductive biases of a learner M, we consider a training dataset of input/output pairs, T = {x_i, y_i}_{i=1}^n, and a hold-out set of inputs, H = {x_i}_{i=n+1}^k. W.l.o.g., we assume that there are two candidate “rules” that explain the training data, but do not coincide on the hold-out data: C1(x_i) = C2(x_i) = y_i, 1 ≤ i ≤ n and ∃i : C1(x_i) ≠ C2(x_i), n + 1 ≤ i ≤ k. To compare preferences of a learner M toward those two rules, we fit the learner on the training data T and then compare its predictions on the hold-out data H to the outputs of the rules. We refer to this approach as “intuitive”. Usually, the measures of similarity between the outputs are task-specific: McCoy et al. (2020) used accuracy of the first term, Zhang et al. (2019) used correlation and MSE, and Lake & Baroni (2017) used accuracy calculated on the entire output sequence.
We too start with an accuracy-based measure. We define the fraction of perfect agreement (FPA) between a learner M and a candidate generalization rule C as the fraction of seeds that generalize perfectly in agreement with that rule on the hold-out set H. The larger the FPA of M is w.r.t. C, the more biased M is toward C. However, FPA does not account for imperfect generalization, nor does it allow a direct comparison between two candidate rules when both are dominated by a third candidate rule. Hence, below we propose a principled approach based on the description length.
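As an illustration, the FPA computation can be sketched in a few lines; the function and variable names below are ours and are not taken from the released code.

```python
# Illustrative sketch of FPA (not from the authors' released code).
def fraction_of_perfect_agreement(predictions_per_seed, rule_outputs):
    """predictions_per_seed: one list of hold-out predictions per random seed;
    rule_outputs: the candidate rule's outputs on the same hold-out inputs."""
    perfect = sum(
        1
        for preds in predictions_per_seed
        if all(p == r for p, r in zip(preds, rule_outputs))
    )
    return perfect / len(predictions_per_seed)
```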
Description Length and Inductive Biases At the core of the theory of induction (Solomonoff, 1964) is the question of continuation of a finite string that is very similar to our setup. Indeed, we can easily re-formulate our motivating example as a string continuation problem: “3→ 6; 4→”. The solution proposed by Solomonoff (1964) is to select the continuation that admits “the simplest explanation” of the entire string, i.e. that is produced by programs of the shortest length (description length).
Our intuition is that when a continuation is “simple” for a learner, then this learner is biased toward it. We consider a learner M to be biased toward C1 over C2 if the training set and its extension according to C1 have a shorter description length (for M) compared to that of C2. Denoting the description length of a dataset D under the learner M as L_M(D), we hypothesise that if L_M({C1(x_i)}_{i=1}^k) < L_M({C2(x_i)}_{i=1}^k), then M is biased toward C1.
Calculating Description Length To find the description length of data under a fixed learner, we use the online (prequential) code (Rissanen, 1978; Grunwald, 2004; Blier & Ollivier, 2018).
The problem of calculating L_M(D), D = {x_i, y_i}_{i=1}^k is considered as a problem of transferring outputs y_i one-by-one, in a compressed form, between two parties, Alice (sender) and Bob (receiver). Alice has the entire dataset {x_i, y_i}, while Bob only has inputs {x_i}. Before the transmission starts, both parties agreed on the initialization of the model M, order of the inputs {x}, random seeds, and the details of the learning procedure. Outputs {y_i} are sequences of tokens from a vocabulary V. W.l.o.g. we fix some order over {x}. We assume that, given x, the learner M produces a probability distribution over the space of the outputs y, p_M(y|x).
1Code used in the experiments can be found at https://github.com/facebookresearch/FIND.
The very first output y_1 can be sent by using not more than c = |y_1| · log |V| nats, using a naïve encoding.2 After that, both Alice and Bob update their learners using the example (x_1, y_1), available to both of them, and get identical instances of M_1. Further transfer is done iteratively under the invariant that both Alice and Bob start every step t with exactly the same learners M_{t−1} and finish with identical M_t. At step t, Alice would use M_{t−1} to encode the next output y_t. This can be done using (−log p_{M_{t−1}}(y_t|x_t)) nats (MacKay, 2003). Since Bob has exactly the same model, he can decode the message to obtain y_t and use the new pair (x_t, y_t) to update his model and get M_t. Alice also updates her model, and proceeds to sending the next y_{t+1} (if any), encoding it with the help of M_t. The cumulative number of nats transmitted is:
L_M(D) = − ∑_{t=2}^{k} log p_{M_{t−1}}(y_t|x_t) + c.   (1)
The obtained code length of Eq. 1 depends on the order in which the outputs y are transmitted and the procedure we use to update M. To account for that, we average out the data order by training with multiple random seeds. Further, for larger datasets, full re-training after adding a new example is impractical and, in such cases, examples can be transmitted in blocks.
If we measure the description length of the training data T shuffled with the hold-out data H , both datasets would have symmetric roles. However, there is a certain asymmetry in the extrapolation problem: we are looking for an extrapolation from T , not vice-versa. To break this symmetry, we always transmit outputs for the entire training data as the first block.
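A minimal sketch of this block-wise prequential computation is given below; the fit/log_prob interface is an assumption of the sketch rather than a specific library API, and the learner-independent constant c is omitted (the paper subtracts it from all measurements).

```python
def prequential_description_length(make_learner, train_block, candidate_blocks):
    """Online (prequential) code length in nats, following Eq. 1.
    make_learner() must return a freshly initialized learner exposing
    fit(pairs) and log_prob(y, x) methods (assumed interface for this sketch)."""
    transmitted = list(train_block)       # training data is always the first block
    total_nats = 0.0
    for block in candidate_blocks:        # candidate-rule outputs, sent in blocks
        learner = make_learner()          # re-train from the same initialization
        learner.fit(transmitted)
        for x, y in block:
            total_nats += -learner.log_prob(y, x)  # nats needed to encode y given x
        transmitted.extend(block)         # both parties now share this block
    return total_nats
```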
While L_M(D) is seemingly different from the “intuitive” measures introduced before, we can illustrate their connection as follows. Consider a case where we first transmit the training outputs as the first block and all of the hold-out data outputs under C, C(H), as the second block. Then the description length is equal to the cross-entropy of the trained learner on the hold-out data, recovering a process akin to the “intuitive” measuring of inductive biases. With smaller blocks, the description length also captures whether a learner is capable of finding regularity in the data quickly, with few data points; hence it also encodes the speed-of-learning ideas for measuring inductive biases (Chaabouni et al., 2019).
Finally, the description length has three more attractive properties when measuring inductive biases: (a) it is domain-independent, i.e. can be applied for instance in the image domain, (b) it allows comparisons across models that account for model complexity, and (c) it enables direct comparison between two candidate rules (as we will show in the Section 5).
3 TASKS
We describe four tasks that we use to study inductive biases of seq2seq learners. We select those tasks to cover different aspects of learners’ behavior. Each task has a highly ambiguous training set which is consistent with an infinite number of generalizations. We pre-select several candidate rules highlighting biases that are useful for language processing and known to exist in humans, or are otherwise reasonable. Our experiments show that these rules cover many cases of the learners’ behavior.
The first two tasks study biases in arithmetic reasoning: Count-or-Memorization quantifies learners’ preference for counting vs. a simple memorization and Add-or-Multiply further probes the learners’ preferences in arithmetic operations. We believe these tasks are interesting, as counting is needed in some NLP problems like processing linearized parse trees (Weiss et al., 2018). The third task, Hierarchical-or-Linear, contrasts hierarchical and linear inductive reasoning. The hierarchical reasoning bias is believed to be fundamental in learning some syntactical rules in human acquisition of syntax (Chomsky, 1965; 1980). Finally, with the Composition-or-Memorization task, we investigate biases for systematic compositionality, which are central for human capabilities in language. Figure 1 illustrates these four tasks.
2As we are interested in comparing candidate generalization rules, the value of the additive constant c is not important, as it is learner- and candidate-independent. In experiments, we subtract it from all measurements.
By a^k we denote a sequence that contains token a repeated k times. For training, we represent sequences in a standard way: the tokens are one-hot-encoded separately, and we append a special end-of-sequence token to each sequence. Input and output vocabularies are disjoint.
Count-or-Memorization: In this task, we contrast learners’ preferences for counting vs. memorization. We train models to fit a single training example with input a^l and output b^l (i.e., to perform the mapping a^l → b^l) and test them on a^m with m ∈ [l−10, l+10]. If a learner learns the constant function, outputting b^l independently of its inputs, then it follows the mem strategy. On the other hand, if it generalizes to the a^m → b^m mapping, then the learner is biased toward the count strategy.
Add-or-Multiply: This task is akin to the motivating example in Section 1. The single training example represents a mapping of an input string a^l to an output string b^{2l}. As test inputs, we generate a^m for m in the interval [l − 3, l + 3]. We consider the learned rule to be consistent with mul if, for all m, the input/output pairs are consistent with a^m → b^{2m}. Similarly, if they are consistent with a^m → b^{m+l}, we say that the learner follows the addition rule, add. Finally, the learner can learn a constant mapping a^m → b^{2l} for any m. Again, we call this rule mem.
Hierarchical-or-Linear: For a fixed depth d, we train learners on four training examples x^d y x^d → y where x, y ∈ {a, b}.3 Each training example has a nested structure, where d defines its depth. A learner with a hierarchical bias (hierar) would output the middle symbol. We also consider the linear rule (linear) in which the learner outputs the (d+1)-th symbol of its input. To probe learners’ biases, we test them on inputs with different depths m ∈ [d − 2, d + 2]. Note that to examine the linear rule (i.e. whether the learner outputs the (d+1)-th symbol of any test input of depth m), we need m ≥ d/2. Similar to the previous tasks, there is no vocabulary sharing between a model’s inputs and outputs (input and output tokens a and b are different).
Composition-or-Memorization: We take inspiration from SCAN (Lake & Baroni, 2017), a benchmark used for studying systematic generalization of seq2seq learners.4 The input vocabulary has N symbols a_i that are one-to-one mapped into N output symbols b_i (i.e., a_i → b_i). In addition, there is a modifier token thrice: when thrice precedes an input symbol a_i, the corresponding output is repeated three times: thrice a_i → b_i b_i b_i. We train a learner on all non-compositional examples (a_i → b_i) and M (M < N) compositional examples (thrice a_i → b_i b_i b_i). At test time, we feed the learner with the remaining compositional examples (thrice a_i, i > M). If the learner generalizes to the mapping thrice a_i → b_i b_i b_i for i > M, we consider it to be biased toward a compositional reasoning (comp). As an alternative generalization,
3 This mapping then consists of four combinations: a^d b a^d → b; a^d a a^d → a; b^d a b^d → a and b^d b b^d → b. 4 In Appendix, we report a study of inductive biases on the SCAN data.
we consider a mapping where all inputs containing a_i are mapped into b_i: thrice a_i → b_i (i > M). We call this generalization memorization (mem).
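The training and test data for these tasks can be generated in a few lines. The sketch below is illustrative (helper names are ours); for brevity it reuses the letters a and b on both sides, whereas the paper keeps input and output vocabularies disjoint.

```python
def count_or_memorization(l, window=10):
    train = [(["a"] * l, ["b"] * l)]                       # single example a^l -> b^l
    test_inputs = [["a"] * m for m in range(l - window, l + window + 1)]
    return train, test_inputs

def add_or_multiply(l, window=3):
    train = [(["a"] * l, ["b"] * (2 * l))]                 # single example a^l -> b^{2l}
    test_inputs = [["a"] * m for m in range(l - window, l + window + 1)]
    return train, test_inputs

def hierarchical_or_linear(d, window=2):
    # four training examples x^d y x^d -> y, with x, y in {a, b}
    train = [([x] * d + [y] + [x] * d, [y]) for x in "ab" for y in "ab"]
    test_inputs = [[x] * m + [y] + [x] * m
                   for m in range(d - window, d + window + 1)
                   for x in "ab" for y in "ab"]
    return train, test_inputs
```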
4 METHODOLOGY
4.1 SEQUENCE-TO-SEQUENCE LEARNERS
We experiment with three standard seq2seq models: LSTM-based seq2seq (LSTM-s2s) (Sutskever et al., 2014), CNN-based seq2seq (CNN-s2s) (Gehring et al., 2017), and Transformer (Vaswani et al., 2017). All share a similar Encoder/Decoder architecture (Sutskever et al., 2014).
LSTM-s2s Both Encoder and Decoder are implemented as LSTM cells (Hochreiter & Schmidhuber, 1997). Encoder encodes its inputs incrementally from left to right. We experiment with architectures without (LSTM-s2s no att.) and with (LSTM-s2s att.) an attention mechanism (Bahdanau et al., 2014). For the first three tasks, both Encoder and Decoder are single-layer LSTM cells with hidden size of 512 and embedding of dimension 16.
CNN-s2s Encoder and Decoder are convolutional networks (LeCun et al., 1990), followed by GLU non-linearities (Dauphin et al., 2017) and an attention layer. To represent positions of input tokens, CNN-s2s uses learned positional embeddings. Encoder and Decoder networks have one layer with 512 filters and a kernel width of 3. We set the embedding size to 16.
Transformer Encoder and Decoder are implemented as a sequence of (self-)attention and feedforward layers. We use sinusoidal position embeddings. Both Encoder and Decoder contain one transformer layer. The attention modules have 8 heads, feed-forward layers have dimension of 512 and the embedding is of dimension 16.
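For reference, the architecture settings above (used for the first three tasks) can be summarized as follows; this is a plain shorthand, not the exact fairseq arguments.

```python
ARCHITECTURES = {
    "lstm_s2s":    {"layers": 1, "hidden": 512, "embed": 16,
                    "attention": (False, True)},   # two variants: without and with attention
    "cnn_s2s":     {"layers": 1, "filters": 512, "kernel_width": 3, "embed": 16},
    "transformer": {"layers": 1, "heads": 8, "ffn": 512, "embed": 16},
}
```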
In Appendix we report experiments where we vary hyperparameters of the learners.
4.2 TRAINING AND EVALUATION
For all tasks, we follow the same training procedure. We train with the Adam optimizer (Kingma & Ba, 2014) for 3000 epochs. The learning rate starts at 10^−5 and increases for the first 1000 warm-up updates until reaching 10^−3. We include all available examples in a single batch. We use teacher forcing (Goodfellow et al., 2016) and set the dropout probability to 0.5. For each learner, we perform training and evaluation 100 times, changing random seeds. At generation, to calculate FPA, we select the next token greedily. We use the model implementations from fairseq (Ott et al., 2019).
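The warm-up schedule described above can be sketched as follows; the paper does not state the post-warm-up behavior, so the rate is simply held constant here, and the exact fairseq scheduler flags may differ.

```python
def learning_rate(update, warmup_updates=1000, lr_init=1e-5, lr_peak=1e-3):
    # Linear warm-up from 1e-5 to 1e-3 over the first 1000 updates,
    # then held constant (an assumption of this sketch).
    if update >= warmup_updates:
        return lr_peak
    return lr_init + (lr_peak - lr_init) * update / warmup_updates
```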
As discussed in Section 2, when calculating L, we use the training examples as the first transmitted block at t = 1. In Count-or-Memorization and Add-or-Multiply this block contains one example, in Hierarchical-or-Linear it has 4 examples, and in Composition-or-Memorization it has N + M examples. Next, we transmit examples obtained from the candidate rules in a randomized order, by blocks of size 1, 1, 1, and 4 for Count-or-Memorization, Add-or-Multiply, Composition-or-Memorization, and Hierarchical-or-Linear respectively. At each step, the learner is re-trained from the same initialization, using the procedure and hyper-parameters as discussed above.
Our training setup is typical for seq2seq training. It does not include any additional pressure towards description length minimization. Description length is solely used as a measure of inductive biases.
5 EXPERIMENTS
Count-or-Memorization We investigate here learners’ biases toward the count and mem rules. We provide a single example a^l → b^l as the training set, varying l ∈ {10, 20, 30, 40}. We report the learners’ performances in Table 1a. We observe that, independently of the length of the training example l, CNN-s2s and Transformer learners inferred perfectly the mem rule with FPA-mem > 0.90 (i.e. more than 90% of the random seeds output b^l for any given input a^m).
However, LSTM-based learners demonstrate a more complex behavior. With l = 10, both learners (with and without attention) exhibit a preference for mem. Indeed, while these learners rarely generalize perfectly to any of the hypotheses (0.0 FPA (no att.), 0.2/0.0 FPA for mem/count (att.)), they have a significantly lower L-mem. As l increases, LSTM-based learners become more biased
toward count. Surprisingly, for l ≥ 30, most learner instances show sufficiently strong inductive biases to infer perfectly the non-trivial count hypothesis. With l = 40, 99% of random seeds of LSTM-s2s att. and all (100%) of LSTM-s2s no att. seeds generalized perfectly to count.
Further, we see that while L shows similar trends, it has a higher sensitivity. For example, while both LSTM-based learners have a similar FPA with l = 40, L demonstrates that LSTM-s2s no att. has a stronger count bias.
Add-or-Multiply In this task, we examine learners’ generalization after training on the single example a^l → b^{2l}. We vary l ∈ {5, 10, 15, 20}. In Table 1b, we report FPA and L for the three generalization hypotheses, add, mul, and mem. We observe, similarly to the previous task, that CNN-s2s and Transformer learners always converge perfectly to memorization.
In contrast, LSTM-based learners show non-trivial generalizations. Examining first LSTM-s2s att., when l=5, we note that mem has a high FPA and an L considerably lower than others. This is consistent with the learner’s behavior in the Count-or-Memorization task. As we increase l, more interesting behavior emerges. First, L-mem decreases as l increases. Second, mul-type preference increases with l. Finally, L-add presents a U-shaped function of l. That is, for the medium example length l, the majority of learners switch to approximating the add rule (for l = 10). However, when l grows further, a considerable fraction of these learners start to follow a mul-type rule. Strikingly, 98% of LSTM-s2s att. seeds generalized perfectly to the non-trivial mul rule. As for LSTM-s2s no att., we do not observe a strong bias to infer any of the rules when l=5. However, when increasing l, the LSTM-s2s no att. behaves similarly to LSTM-s2s att.: at first it has a preference for add (FPA-add=0.95, for l=10) then for mul (e.g. FPA-mul=0.94, for l=20).
Hierarchical-or-Linear We look now at learners’ preference for either hierar or linear generalizations. The architectures we use were only able to consistently learn the training examples with the depth d not higher than 4. Hence, in this experiment, we set d to 4.
We report in Table 1c the learners’ FPA and L. We observe that CNN-s2s exhibits a strikingly different bias compared to all other learners, with a perfect agreement with the linear rule. In contrast, Transformer learners show a clear preference for hierar with a high FPA (0.69) and a low L (1.21). Surprisingly, this preference increases with the embedding size, and Transformers with embedding size ≥ 64 admit an FPA-hierar of 1.00 (see Appendix for more details). LSTM-s2s att. learners demonstrate a similar preference for hierar, with an FPA of 0.30 and an L-hierar considerably lower than L-linear. Finally, while only 5% of LSTM-s2s no att. instances generalized to perfect hierar (and none to linear), L confirms their preference for the hierar hypothesis.
Composition-or-Memorization In this task, we set the number of primitives N = 40 and vary the number of compositional examples M ∈ {6, 24, 36} seen during training. Results are reported in Table 1d. First, we observe that FPA is only informative for CNN-s2s when trained with a large M. Indeed, for M = 6, CNN-s2s does not infer any of the candidate rules. However, according to description length L, we note a significant preference for mem over comp. The more compositional examples CNN-based learners see during training, the more biased they become toward comp. The remaining learners have zero FPA for both candidate rules. However, according to description length, LSTM-based learners have preferences similar to CNN-s2s, although weaker. That is, they show a preference for mem for low M, which declines in favor of comp as M increases. In contrast, Transformers show a strong bias for mem with all tested M.
Overall, across all the above experiments, we see that seq2seq learners demonstrate strikingly different biases. In many cases, these biases lead to non-trivial generalizations when facing ambiguity in the training data. This spans tasks that probe for memorization, arithmetic, hierarchical, and compositional reasoning. We found that a single example is sufficient for LSTM-based learners to learn counting, addition, and multiplication. Moreover, within the same task, they can switch from one explanation to another, depending on the training example length, with Add-or-Multiply being the task where this switch happens twice. In contrast, CNN-s2s and Transformers show a strong bias toward memorization. Furthermore, all learners except for CNN-s2s demonstrate a strong bias toward the hierarchical behavior. In the task of compositional generalization, CNN-s2s shows a strong bias toward compositional reasoning that appears after a few compositional training examples. On the other hand, Transformers show a preference for memorization over compositional generalization.
We see that the conclusions derived from comparing the description length of the candidate rules are in agreement with the results under accuracy-based metrics, but provide a more nuanced picture.
Robustness to hyper-parameters We observe that learners’ biases depend, in many cases, on the length/number of the input examples. In Appendix, we examine the impact of other hyper-parameters. In particular, we study the impact of (1) learners’ architecture size, by varying the number of layers, hidden and embedding sizes, and (2) dropout probability. Our results show that in some cases a learner’s architecture size can influence the strength of inductive biases, but rarely modifies them: among the 136 tested settings, we observe only 3 cases of switching preferences. We also found, in line with (Arpit et al., 2017), that large dropout probabilities can prevent mem-type generalization. Finally, in Appendix we show that a variant of Transformer learners, namely the joint source-target self-attention learner (He et al., 2018; Fonollosa et al., 2019), displays the same preferences as the standard Transformer learners. This variant resembles the “decoder-only” architecture used in language modeling (Radford et al., 2019; Brown et al., 2020). This result demonstrates that our tasks and bias measures could be applied for studying inductive biases of language model architectures.
6 RELATED WORK
Dessì & Baroni (2019) found that, unlike LSTM-s2s learners, CNN-s2s can perform compositional generalization on SCAN. Our experiments indicate that this only happens when enough compositional examples are provided in the training. Moreover, in such a case, attention-enabled LSTM-s2s also start to prefer compositional generalization over memorization.
McCoy et al. (2020) studied inductive biases of recurrent neural architectures in two synthetic tasks, English question formation and tense inflection. They found that only tree-based architectures show robust preference for hierarchical reasoning, in contrast to LSTM-s2s learners that generalized linearly. Our experiments on the hyperparameter robustness, reported in Appendix, indicate that the preferences over linear/hierarchical reasoning are strongly affected by the dropout probability, with learners shifting to linear behavior with low probabilities. As McCoy et al. (2020) experimented with a low dropout probability of 0.1, we believe this explains the misalignment of the conclusions.
Overall, our study shows that inductive biases are more complicated than suggested in these prior works, and a more careful analysis is crucial. We believe that our tightly controlled setup, with very few confounds, is a good addition to those studies.
Another line of research theoretically investigates learners’ capabilities, that is, the classes of hypotheses that a learner can discover (Siegelmann & Sontag, 1992; Suzgun et al., 2019; Merrill et al., 2020). For example, Weiss et al. (2018) demonstrated that LSTM cells can count. In turn, we demonstrate that LSTM-s2s learners are not only capable of, but also biased toward, arithmetic behavior.
7 DISCUSSION AND CONCLUSION
In this work, we studied inductive biases of standard seq2seq learners, Transformer-, LSTM-, and CNN-based. To do so, we introduced four new tasks, which allowed us to cover an interesting spectrum of behaviors useful for language learning. In particular, we considered arithmetic, hierarchical, and compositional “reasoning”. Next, we connected the problem of finding and measuring inductive biases to Solomonoff’s theory of induction and proposed to use a dataset’s description length under a learner as a tool for sensitive measurement of inductive biases.
In our experiments, we found that the seq2seq learners have strikingly different inductive biases and some of them generalize non-trivially when facing ambiguity. For instance, a single training example is sufficient for LSTM-based learners to learn perfectly how to count, to add and to multiply by a constant. Transformers and, to a lesser degree, LSTM-s2s demonstrated preferences for the hierarchical bias, a bias that has been argued to govern children’s acquisition of syntax. Interestingly, such biases arose with no explicit wiring for them. Our results thus support Elman et al. (1998)’s theory, which states that humans’ inductive biases can arise from low-level architectural constraints in the brain with no need for an explicit encoding of a linguistic structure. However, how the brain, or, more generally, a learner is wired to admit a specific inductive bias is still an important open question.
Across our experiments, we also observed that description length is consistent with “intuitive” measurements of inductive biases, and, at the same time, it turned out to be more sensitive. This also indicates that, in the presence of ambiguity in the training data, a learner is more likely to follow the alternative with the shorter description length (i.e. the simplest one) when applied on unseen data, showing consistency with the prescriptions of the theory of induction (Solomonoff, 1964). A similar simplicity preference is argued to play a role in human language acquisition (Perfors et al., 2011).
Our work provides simple tools to investigate learners’ biases. We first show that FPA is an intuitive measure to study biases when provided with simple tasks. Second, we present description length as a robust measure to fairly compare learners’ biases. This metric considers learners’ size and their ease of learning as opposed to accuracy-based metrics. Besides, it is a model- and task-agnostic measure that succeeds in unveiling learners’ biases even when presented with more complex tasks with spurious correlations.
Our findings can guide architecture selection in low-data regimes, where inductive biases might have a higher influence on a model’s generalization performance. Large sparse datasets can also benefit from predictable behavior in few-shot scenarios akin to what we consider.
Finally, our results demonstrate that relatively large deep learning models can generalize non-trivially from as little as one example – as long as the task is aligned with their inductive biases. We believe this should reinforce interest in future work on injecting useful inductive biases into our learners and, we hope, our findings and setup can provide a fertile ground for such work.
ACKNOWLEDGEMENTS
The authors are grateful to Marco Baroni, Emmanuel Dupoux, Emmanuel Chemla and participants of the EViL seminar for their feedback on our work.
A CAN SEQ2SEQ LEARNERS MULTIPLY BY 3?
In our experiments on the Multiply-or-Add task, we saw that LSTM-s2s learners are able to learn to multiply by 2 from a single example. A natural further question is whether these learners can learn to multiply by larger numbers. Here we only provide a preliminary study, in the hope of inspiring more focused studies.
We build a task that is similar to Multiply-or-Add, but centered around multiplication by 3 instead of 2. The single training example represents a mapping of an input string a^l to an output string b^{3l}. As test inputs, we use a^m with m coming from the interval [l − 3, l + 3].
Since 3x can be represented as a combination of addition and multiplication in several ways (3x = x + 2x = 2x + x), we consider 4 different candidate rules. As before, by mem we denote the constant mapping a^m → b^{3l}. mul1 represents the mapping a^m → b^{2l+m}. mul2 corresponds to a^m → b^{l+2m} and mul3 denotes a^m → b^{3m}. The “explanation” mul1 is akin to the add rule in the Multiply-or-Add task. We use the same hyperparameters and training procedure described in Section 4.
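The four candidate rules differ only in the output length they predict for a test input a^m; the helper below (ours, for illustration) makes that explicit.

```python
def candidate_output_length(rule, l, m):
    # Output length predicted by each candidate rule for input a^m,
    # given the single training example a^l -> b^{3l}.
    return {
        "mem":  3 * l,        # constant mapping: a^m -> b^{3l}
        "mul1": 2 * l + m,    # a^m -> b^{2l+m}
        "mul2": l + 2 * m,    # a^m -> b^{l+2m}
        "mul3": 3 * m,        # a^m -> b^{3m}
    }[rule]
```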
We report the results in Table 2. Like the results observed in the Multiply-or-Add task, both CNN-s2s and Transformer learners show a strong preference for the mem rule, while LSTM-based learners switch their generalization according to the length of the training example l. Indeed, for CNN-s2s and Transformer, we note an FPA-mem > 0.97 independently of l (with L-mem significantly lower than others). LSTM-s2s att. learners start inferring the mem rule for l = 5 (FPA=0.64, L=2.44), then switch to a comparable preference for mul2 and mul3 when l = 10, and finally show a significant bias toward the mul3 hypothesis for l ∈ {15, 20} (e.g. FPA-mul3=0.76 for l = 15). LSTM-s2s no att. learners are also subject to a similar switch of preference. That is, for l = 5, these learners have a significant bias toward mul1 (FPA=0.49). Strikingly, when l = 10, 92% of LSTM-s2s no att. learners perfectly inferred the mul2 rule after one training example. Finally, we observe yet another switch to approximate mul3 for l ∈ {15, 20}.
Overall, while CNN-s2s and Transformer learners show a significant and robust bias toward mem, LSTM-based learners generalize differently depending on the training input length. In particular, our results suggest that these learners avoid adding very large integers by switching to the multiplicative explanation in those cases. Answering our initial question, we see that LSTM-based learners can learn to multiply by 3 from a single example.
5Only 21% of learners succeeded in learning the training example in this setting. 6Only 51% of learners succeeded in learning the training example in this setting.
B ROBUSTNESS TO CHANGES IN ARCHITECTURE
In this section, we examine how changing different hyper-parameters affects learners’ preferences for memorization, arithmetic, hierarchical and compositional reasoning. In particular, we vary the number of layers, hidden and embedding sizes of the different learners and test their generalization on Count-or-Memorization, Add-or-Multiply, Hierarchical-or-Linear, and Composition-or-Memorization tasks.
In all the experiments, we fix l = 40 and l = 20 for Count-or-Memorization and Add-or-Multiply respectively, d = 4 for Hierarchical-or-Linear and M = 36 for Composition-or-Memorization. Finally, we keep the same training and evaluation procedure as detailed in Section 4.2 of the main text. However, we use 20 different random seeds instead of 100.
B.1 NUMBER OF HIDDEN LAYERS (NLayer)
We experiment with the standard seq2seq learners described in the main paper and vary the number of layers NLayer ∈ {1, 3, 10}. Results are reported in Table 3.
First, when looking at the interplay between mem and count (Table 3a), we observe that, independently of NLayer, more than 97% of CNN-s2s and Transformer learners inferred perfectly the mem rule (i.e. output b^40 for any given input a^m). Further, while the preference for count decreases as NLayer increases, LSTM-s2s att. learners display in all cases a significant bias toward count, with a large FPA and an L significantly lower than L-mem. However, we note a decline of the bias toward count when considering LSTM-s2s no att. learners. For NLayer = 1, 100% of the seeds generalize to perfect count, versus 6% for NLayer = 10. Note that this lower preference for count is followed by an increase of preference for mem. However, there is no significant switch of preferences according to L.
Second, we consider the Add-or-Multiply task, where we examine the three generalization hypotheses add, mul and mem. Results are reported in Table 3b. Similarly to the previous task, Transformer and CNN-s2s learners are robust to the change in the number of layers. They perfectly follow the mem rule with FPA-mem=1.00. However, LSTM-based learners show a more complex behavior: while single-layer LSTM-s2s att. and no att. demonstrate a considerable bias for mul (FPA-mul>0.94), this bias fades in favor of add and mem for larger architectures.
Third, in Table 3c, we observe that larger learners are slightly less biased toward hierar and linear. However, we do not observe any switch of preferences. That is, across different Nlayers, CNN-s2s learners prefer the linear rule, whereas Transformers and, to a lesser degree, LSTM-based learners show a significant preference for the hierar rule.
Finally, we do not observe an impact of NLayer on the Composition-or-Memorization task (see Table 3d), demonstrating the robustness of biases w.r.t. learners’ size.
In sum, we note little impact of NLayer on learners’ generalizations. Indeed, LSTM-based learners can learn to perform counting from a single training example, even when experimenting with 10-layer architectures. For most tested NLayer , they favor mul and add over mem. In contrast, Transformer and CNN-s2s perform systematic memorization of the single training example. Furthermore, independently of NLayer , Transformer and LSTM-based learners show a bias toward the hierar hypothesis over the linear one, while CNN-based learners do the opposite. Finally, the compositional experiments further support how inductive biases are barely influenced by the change of the number of layers.
B.2 HIDDEN SIZE (SHidden)
We experimented in the main text with standard seq2seq learners when hidden size SHidden = 512. In this section, we look at the effect of SHidden varying it in {128, 512, 1024}. We report learners performances in Table 4.
First, Table 4a demonstrates how minor an effect hidden size has on learners’ counting/memorization performance. Indeed, for any given SHidden, between 84% and 100% of LSTM-based learners learn perfect counting after only one training example. Similarly, and with even lower variation, more than 91% of Transformer and CNN-s2s learners memorize the single training example, outputting b^40 for any given a^m.
The same observation can be made when studying the interplay between the hierar and linear biases (Table 4c) and the comp and mem biases (Table 4d). Concretely, Tables 4c and 4d show that learners’ generalizations are stable across SHidden values, with the exception of LSTM-based learners that lose significance for some tested SHidden values. Yet, there is no switch of preference.
Finally, as demonstrated in Table 4b, all Transformer and CNN-s2s learners perform perfect memorization when tested on the Add-or-Multiply task independently of their hidden sizes. Both LSTM-based learners are significantly biased toward mul for SHidden ∈ {512, 1024}. However, when experimenting with smaller
SHidden (=128), we detect a switch of preference for LSTM-s2s no att. learners. The latter start approximating an add-type rule (with significantly lower L). Lastly, we do not observe any significant difference between add and mul for LSTM-s2s att. when SHidden = 128.
Taken together, learners’ biases are quite robust to SHidden variations. We however note a switch of preference from mul to add for LSTM-s2s no att. learners when decreasing SHidden. Furthermore, we see a loss of significant preference in five distinct | 1. What is the novel contribution of the paper regarding inductive bias testing?
2. What are the strengths and weaknesses of the proposed datasets and tasks?
3. How does the reviewer assess the significance and impact of the presented results?
4. Are there any concerns or questions about the experimental design and methodology?
5. Is there a lack of motivation or relevance to recent works in the field?
6. Can the results be applied to real-world scenarios, and how would they benefit practitioners or researchers? | Review | Review
The paper introduces a series of new datasets and tasks and investigates the inductive bias of seq2seq models. For each dataset, (at least) two hidden hypotheses could explain the data. The tasks investigated are count-vs-memorization, add-or-multiply, hierarchical-or-linear, composition-or-memorization. The datasets consist of one sample with varying length (number of input/output pairs), which is denoted as description length. The models are evaluated on accuracy and a log-loss. An LSTM, CNN, and Transformer are all trained on these datasets. Multiple seeds are used for significance testing. The results suggest that the LSTM is better at counting when provided with a longer sequence, while the CNN and Transformer memorize the data but are better at handling hierarchical data. What this paper excels at is a thorough description of their experimental section and their approach to designing datasets specifically for testing inductive bias, which I have not previously seen and must thus assume is a novel contribution.
However, I lean to reject this paper for the following reasons
The paper tries to fit into the emerging field of formal language datasets for evaluating the capacity of deep learning methods. However, they do not build on any of the recent papers in the field. A new dataset, especially a synthetic one, should be well motivated by shortcomings of previous datasets and tasks in the field. I find the motivation and related works section lacking in that sense.
We already know that LSTMs can count https://arxiv.org/abs/1906.03648 and that transformers cannot https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00306
It is not clear to me why these results are important? Who will benefit from this analysis? Why are the current AnBnCn and DYCK languages that formal language people work with insufficient?
LSTMs do not have the capacity to perform multiplication. I don’t know why your results suggest otherwise. You would need to incorporate special units that can handle multiplications in the LSTM, such as https://arxiv.org/abs/2001.05016
Update
First I'd like to thank the authors for their detailed rebuttal. I have upgraded my recommendation from 3 to 4. As mentioned in my review, I believe this approach is interesting. However, as pointed out by Reviewer 2, the experimental section lacks completeness. I think this experimental section would be suitable for a workshop, but not a conference. I am excited to hear you are considering using this method as an inspiration for real problems. I'd like to see the paper resubmitted when you have obtained such results.
ICLR | Title
What they do when in doubt: a study of inductive biases in seq2seq learners
Abstract
Sequence-to-sequence (seq2seq) learners are widely used, but we still have only limited knowledge about what inductive biases shape the way they generalize. We address that by investigating how popular seq2seq learners generalize in tasks that have high ambiguity in the training data. We use four new tasks to study learners’ preferences for memorization, arithmetic, hierarchical, and compositional reasoning. Further, we connect to Solomonoff’s theory of induction and propose to use description length as a principled and sensitive measure of inductive biases. In our experimental study, we find that LSTM-based learners can learn to perform counting, addition, and multiplication by a constant from a single training example. Furthermore, Transformer and LSTM-based learners show a bias toward the hierarchical induction over the linear one, while CNN-based learners prefer the opposite. The latter also show a bias toward a compositional generalization over memorization. Finally, across all our experiments, description length proved to be a sensitive measure of inductive biases.
1 INTRODUCTION
Sequence-to-sequence (seq2seq) learners (Sutskever et al., 2014) demonstrated remarkable performance in machine translation, story generation, and open-domain dialog (Sutskever et al., 2014; Fan et al., 2018; Adiwardana et al., 2020). Yet, these models have been criticized for requiring a tremendous amount of data and being unable to generalize systematically (Dupoux, 2018; Loula et al., 2018; Lake & Baroni, 2017; Bastings et al., 2018). In contrast, humans rely on their inductive biases to generalize from a limited amount of data (Chomsky, 1965; Lake et al., 2019). Due to the centrality of humans’ biases in language learning, several works have studied inductive biases of seq2seq models and connected their poor generalization to the lack of the “right” biases (Lake & Baroni, 2017; Lake et al., 2019).
In this work, we focus on studying inductive biases of seq2seq models. We start from an observation that, generally, multiple explanations can be consistent with a limited training set, each leading to different predictions on unseen data. A learner might prefer one type of explanations over another in a systematic way, as a result of its inductive biases (Ritter et al., 2017; Feinman & Lake, 2018).
To illustrate the setup we work in, consider a quiz-like question: if f(3) maps to 6, what does f(4) map to? The “training” example is consistent with the following answers: 6 (f(x) ≡ 6); 7 (f(x) = x + 3); 8 (f(x) = 2 · x); any number z, since we always can construct a function such that f(3) = 6 and f(4) = z. By analyzing the learner’s output on this new input, we can infer its biases.
This example demonstrates how biases of learners are studied through the lenses of the poverty of the stimulus principle (Chomsky, 1965; 1980): if nothing in the training data indicates that a learner should generalize in a certain way, but it does nonetheless, then this is due to the biases of the learner. Inspired by the work of Zhang et al. (2019) in the image domain, we take this principle to the extreme and study biases of seq2seq learners in the regime of very few training examples, often as little as one. Under this setup, we propose four new synthetic tasks that probe seq2seq learners’ preferences to memorization-, arithmetic-, hierarchical- and compositional-based “reasoning”.
∗Equal contribution.
Next, we connect to the ideas of Solomonoff’s theory of induction (Solomonoff, 1964) and Minimal Description Length (Rissanen, 1978; Grunwald, 2004) and propose to use description length, under a learner, as a principled measure of its inductive biases.
Our experimental study1 shows that the standard seq2seq learners have strikingly different inductive biases. We find that LSTM-based learners can learn non-trivial counting-, multiplication-, and addition-based rules from as little as one example. CNN-based seq2seq learners would prefer linear over hierarchical generalizations, while LSTM-based ones and Transformers would do just the opposite. When investigating the compositional reasoning, description length proved to be a sensitive measure. Equipped with it, we found that CNN-, and, to a lesser degree, LSTM-based learners prefer compositional generalization over memorization when provided with enough composite examples. In turn, Transformers show a strong bias toward memorization.
2 SEARCHING FOR INDUCTIVE BIASES
To formalize the way we look for inductive biases of a learner M, we consider a training dataset of input/output pairs, T = {x_i, y_i}_{i=1}^{n}, and a hold-out set of inputs, H = {x_i}_{i=n+1}^{k}. W.l.o.g., we assume that there are two candidate “rules” that explain the training data, but do not coincide on the hold-out data: C_1(x_i) = C_2(x_i) = y_i for 1 ≤ i ≤ n, and ∃i: C_1(x_i) ≠ C_2(x_i) for n + 1 ≤ i ≤ k. To compare preferences of a learner M toward those two rules, we fit the learner on the training data T and then compare its predictions on the hold-out data H to the outputs of the rules. We refer to this approach as “intuitive”. Usually, the measures of similarity between the outputs are task-specific: McCoy et al. (2020) used accuracy of the first term, Zhang et al. (2019) used correlation and MSE, and Lake & Baroni (2017) used accuracy calculated on the entire output sequence.
We too start with an accuracy-based measure. We define the fraction of perfect agreement (FPA) between a learner M and a candidate generalization rule C as the fraction of seeds that generalize perfectly in agreement with that rule on the hold-out set H. The larger the FPA of M is w.r.t. C, the more biased M is toward C. However, FPA does not account for imperfect generalization, nor does it allow a direct comparison between two candidate rules when both are dominated by a third candidate rule. Hence, below we propose a principled approach based on the description length.
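To make the FPA computation concrete, the following is a minimal sketch (not the authors' released code); `predict_fns` (one trained seed per entry) and `rule_fn` are hypothetical callables that map an input sequence to an output sequence.

```python
def fraction_of_perfect_agreement(predict_fns, rule_fn, holdout_inputs):
    """Fraction of seeds whose predictions match the candidate rule on every hold-out input."""
    agreeing = 0
    for predict in predict_fns:
        # A seed counts only if it agrees with the rule on *all* hold-out inputs.
        if all(predict(x) == rule_fn(x) for x in holdout_inputs):
            agreeing += 1
    return agreeing / len(predict_fns)
```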
Description Length and Inductive Biases At the core of the theory of induction (Solomonoff, 1964) is the question of continuation of a finite string that is very similar to our setup. Indeed, we can easily re-formulate our motivating example as a string continuation problem: “3→ 6; 4→”. The solution proposed by Solomonoff (1964) is to select the continuation that admits “the simplest explanation” of the entire string, i.e. that is produced by programs of the shortest length (description length).
Our intuition is that when a continuation is “simple” for a learner, then this learner is biased toward it. We consider a learner M to be biased toward C_1 over C_2 if the training set and its extension according to C_1 has a shorter description length (for M) compared to that of C_2. Denoting the description length of a dataset D under the learner M as L_M(D), we hypothesise that if L_M({C_1(x_i)}_{i=1}^{k}) < L_M({C_2(x_i)}_{i=1}^{k}), then M is biased toward C_1.

Calculating Description Length To find the description length of data under a fixed learner, we use the online (prequential) code (Rissanen, 1978; Grunwald, 2004; Blier & Ollivier, 2018).
The problem of calculating L_M(D), D = {x_i, y_i}_{i=1}^{k}, is considered as a problem of transferring the outputs y_i one-by-one, in a compressed form, between two parties, Alice (sender) and Bob (receiver). Alice has the entire dataset {x_i, y_i}, while Bob only has the inputs {x_i}. Before the transmission starts, both parties agree on the initialization of the model M, the order of the inputs {x}, random seeds, and the details of the learning procedure. Outputs {y_i} are sequences of tokens from a vocabulary V. W.l.o.g. we fix some order over {x}. We assume that, given x, the learner M produces a probability distribution over the space of the outputs y, p_M(y|x).
1Code used in the experiments can be found at https://github.com/facebookresearch/FIND.
The very first output y_1 can be sent by using not more than c = |y_1| · log |V| nats, using a naïve encoding.2 After that, both Alice and Bob update their learners using the example (x_1, y_1), available to both of them, and get identical instances of M_1. Further transfer is done iteratively under the invariant that both Alice and Bob start every step t with exactly the same learner M_{t−1} and finish with an identical M_t. At step t, Alice uses M_{t−1} to encode the next output y_t. This can be done using −log p_{M_{t−1}}(y_t|x_t) nats (MacKay, 2003). Since Bob has exactly the same model, he can decode the message to obtain y_t and use the new pair (x_t, y_t) to update his model and get M_t. Alice also updates her model, and proceeds to send the next y_{t+1} (if any), encoding it with the help of M_t. The cumulative number of nats transmitted is:
L_M(D) = − ∑_{t=2}^{k} log p_{M_{t−1}}(y_t | x_t) + c.    (1)
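As a small worked illustration of Eq. (1), with made-up per-step probabilities and the constant c ignored, the code length is just the accumulated negative log-likelihood, in nats, of each next output under the model trained on everything sent so far:

```python
import math

# Hypothetical probabilities assigned by M_{t-1} to the true outputs y_t at steps t = 2, 3, 4.
step_probs = [0.25, 0.5, 0.9]
code_length_nats = -sum(math.log(p) for p in step_probs)  # ~2.18 nats
```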
The obtained code length of Eq. 1 depends on the order in which the outputs y are transmitted and on the procedure we use to update M. To account for that, we average out the data order by training with multiple random seeds. Further, for larger datasets, full re-training after adding a new example is impractical and, in such cases, examples can be transmitted in blocks.
If we measure the description length of the training data T shuffled with the hold-out data H, both datasets would have symmetric roles. However, there is a certain asymmetry in the extrapolation problem: we are looking for an extrapolation from T, not vice-versa. To break this symmetry, we always transmit outputs for the entire training data as the first block.
While L_M(D) is seemingly different from the “intuitive” measures introduced before, we can illustrate their connection as follows. Consider a case where we first transmit the training outputs as the first block and all of the hold-out data outputs under C, C(H), as the second block. Then the description length is equal to the cross-entropy of the trained learner on the hold-out data, recovering a process akin to the “intuitive” measuring of inductive biases. With smaller blocks, the description length also captures whether a learner is capable of finding regularity in the data fast, with few data points; hence it also encodes the speed-of-learning ideas for measuring inductive biases (Chaabouni et al., 2019).
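A minimal sketch of how such a prequential code length could be computed is given below; this is not the authors' implementation, and `model_factory`, `train_fn`, and `nll_fn` are hypothetical callables standing in for the re-initialization, training, and per-block negative log-likelihood of a concrete seq2seq learner.

```python
def prequential_description_length(model_factory, train_fn, nll_fn,
                                    train_block, candidate_blocks):
    """Online (prequential) code length, in nats, of the candidate blocks.

    train_block: the training set T, always transmitted first (its fixed cost is
        ignored here, since it is identical for every candidate rule).
    candidate_blocks: hold-out (input, output) pairs labeled by a candidate rule C,
        split into the blocks in which they are "transmitted".
    """
    seen = list(train_block)
    total_nats = 0.0
    for block in candidate_blocks:
        model = model_factory()             # re-initialize from the agreed starting point
        train_fn(model, seen)               # fit on everything transmitted so far
        total_nats += nll_fn(model, block)  # cost of encoding the next block
        seen.extend(block)
    return total_nats
```

Averaging this quantity over random seeds (and hence data orders) and comparing it across candidate rules reproduces the comparison described above.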
Finally, the description length has three more attractive properties when measuring inductive biases: (a) it is domain-independent, i.e. it can be applied, for instance, in the image domain; (b) it allows comparisons across models that account for model complexity; and (c) it enables a direct comparison between two candidate rules (as we will show in Section 5).
3 TASKS
We describe four tasks that we use to study inductive biases of seq2seq learners. We select these tasks to cover different aspects of learners' behavior. Each task has a highly ambiguous training set which is consistent with an infinite number of generalizations. We pre-select several candidate rules highlighting biases that are useful for language processing and known to exist in humans, or are otherwise reasonable. Our experiments show that these rules cover many cases of the learners' behavior.
The first two tasks study biases in arithmetic reasoning: Count-or-Memorization quantifies learners’ preference for counting vs. a simple memorization and Add-or-Multiply further probes the learners’ preferences in arithmetic operations. We believe these tasks are interesting, as counting is needed in some NLP problems like processing linearized parse trees (Weiss et al., 2018). The third task, Hierarchical-or-Linear, contrasts hierarchical and linear inductive reasoning. The hierarchical reasoning bias is believed to be fundamental in learning some syntactical rules in human acquisition of syntax (Chomsky, 1965; 1980). Finally, with the Composition-or-Memorization task, we investigate biases for systematic compositionality, which are central for human capabilities in language. Figure 1 illustrates these four tasks.
2As we are interested in comparing candidate generalization rules, the value of the additive constant c is not important, as it is learner- and candidate-independent. In experiments, we subtract it from all measurements.
[Figure 1: an illustration of the four tasks.]
By a^k we denote a sequence that contains the token a repeated k times. For training, we represent sequences in a standard way: the tokens are one-hot-encoded separately, and we append a special end-of-sequence token to each sequence. Input and output vocabularies are disjoint.
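As a rough sketch of this representation (the token indices below, from which one-hot vectors would be built, are illustrative and not the authors' exact preprocessing):

```python
def encode(sequence, vocab, eos="<eos>"):
    """Map a token sequence to integer ids, appending an end-of-sequence token."""
    tokens = list(sequence) + [eos]
    for token in tokens:
        vocab.setdefault(token, len(vocab))
    return [vocab[t] for t in tokens]

input_vocab, output_vocab = {}, {}         # kept disjoint by construction
src_ids = encode(["a"] * 5, input_vocab)   # a^5 -> [0, 0, 0, 0, 0, 1]
tgt_ids = encode(["b"] * 5, output_vocab)  # b^5 -> [0, 0, 0, 0, 0, 1]
```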
Count-or-Memorization: In this task, we contrast learners' preferences for counting vs. memorization. We train models to fit a single training example with input a^l and output b^l (i.e., to perform the mapping a^l → b^l) and test them on a^m with m ∈ [l−10, l+10]. If a learner learns the constant function, outputting b^l independently of its inputs, then it follows the mem strategy. On the other hand, if it generalizes to the a^m → b^m mapping, then the learner is biased toward the count strategy.

Add-or-Multiply: This task is akin to the motivating example in Section 1. The single training example represents a mapping of an input string a^l to an output string b^{2l}. As test inputs, we generate a^m for m in the interval [l − 3, l + 3]. We consider the learned rule to be consistent with mul if, for all m, the input/output pairs are consistent with a^m → b^{2m}. Similarly, if they are consistent with a^m → b^{m+l}, we say that the learner follows the addition rule, add. Finally, the learner can learn a constant mapping a^m → b^{2l} for any m. Again, we call this rule mem.

Hierarchical-or-Linear: For a fixed depth d, we train learners on four training examples x^d y x^d → y where x, y ∈ {a, b}.3 Each training example has a nested structure, where d defines its depth. A learner with a hierarchical bias (hierar) would output the middle symbol. We also consider the linear rule (linear), in which the learner outputs the (d+1)th symbol of its input. To probe learners' biases, we test them on inputs with different depths m ∈ [d − 2, d + 2]. Note that to examine the linear rule (i.e., whether the learner outputs the (d+1)th symbol of any test input of depth m), we need m ≥ d/2. Similar to the previous tasks, there is no vocabulary sharing between a model's inputs and outputs (input and output tokens a and b are different).
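To make the first two tasks concrete, here is a hypothetical data-generation sketch (not the authors' code) in which each candidate rule is encoded as a function of the test-input length m:

```python
def count_or_memorization(l, window=10):
    train = [(["a"] * l, ["b"] * l)]            # single example a^l -> b^l
    rules = {"count": lambda m: ["b"] * m,      # a^m -> b^m
             "mem":   lambda m: ["b"] * l}      # a^m -> b^l, regardless of m
    test_lengths = range(l - window, l + window + 1)
    return train, test_lengths, rules

def add_or_multiply(l, window=3):
    train = [(["a"] * l, ["b"] * (2 * l))]      # single example a^l -> b^{2l}
    rules = {"mul": lambda m: ["b"] * (2 * m),  # a^m -> b^{2m}
             "add": lambda m: ["b"] * (m + l),  # a^m -> b^{m+l}
             "mem": lambda m: ["b"] * (2 * l)}  # a^m -> b^{2l}
    test_lengths = range(l - window, l + window + 1)
    return train, test_lengths, rules
```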
Composition-or-Memorization: We take inspiration from SCAN (Lake & Baroni, 2017), a benchmark used for studying systematic generalization of seq2seq learners.4 The input vocabulary has N symbols a_i that are one-to-one mapped into N output symbols b_i (i.e., a_i → b_i). In addition, there is a modifier token thrice: when thrice precedes an input symbol a_i, the corresponding output is repeated three times: thrice a_i → b_i b_i b_i. We train a learner on all non-compositional examples (a_i → b_i) and M (M < N) compositional examples (thrice a_i → b_i b_i b_i). At test time, we feed the learner with the remaining compositional examples (thrice a_i, i > M). If the learner generalizes to the mapping thrice a_i → b_i b_i b_i for i > M, we consider it to be biased toward compositional reasoning (comp). As an alternative generalization,
3This mapping then consists of four combinations: a^d b a^d → b; a^d a a^d → a; b^d a b^d → a; and b^d b b^d → b. 4In the Appendix, we report a study of inductive biases on the SCAN data.
we consider a mapping where all inputs containing a_i are mapped into b_i: thrice a_i → b_i (i > M). We call this generalization memorization (mem).
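A hypothetical generator for this task (the symbol names a0, b0, … are illustrative) could look as follows:

```python
def composition_or_memorization(n=40, m=6):
    train = [([f"a{i}"], [f"b{i}"]) for i in range(n)]                 # a_i -> b_i
    train += [(["thrice", f"a{i}"], [f"b{i}"] * 3) for i in range(m)]  # compositional examples
    test_inputs = [["thrice", f"a{i}"] for i in range(m, n)]
    rules = {
        "comp": lambda inp: [inp[1].replace("a", "b")] * 3,  # thrice a_i -> b_i b_i b_i
        "mem":  lambda inp: [inp[1].replace("a", "b")],      # thrice a_i -> b_i
    }
    return train, test_inputs, rules
```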
4 METHODOLOGY
4.1 SEQUENCE-TO-SEQUENCE LEARNERS
We experiment with three standard seq2seq models: LSTM-based seq2seq (LSTM-s2s) (Sutskever et al., 2014), CNN-based seq2seq (CNN-s2s) (Gehring et al., 2017), and Transformer (Vaswani et al., 2017). All share a similar Encoder/Decoder architecture (Sutskever et al., 2014).
LSTM-s2s Both Encoder and Decoder are implemented as LSTM cells (Hochreiter & Schmidhuber, 1997). Encoder encodes its inputs incrementally from left to right. We experiment with architectures without (LSTM-s2s no att.) and with (LSTM-s2s att.) an attention mechanism (Bahdanau et al., 2014). For the first three tasks, both Encoder and Decoder are single-layer LSTM cells with hidden size of 512 and embedding of dimension 16.
CNN-s2s Encoder and Decoder are convolutional networks (LeCun et al., 1990), followed by GLU non-linearities (Dauphin et al., 2017) and an attention layer. To represent positions of input tokens, CNN-s2s uses learned positional embeddings. Encoder and Decoder networks have one layer with 512 filters and a kernel width of 3. We set the embedding size to 16.
Transformer Encoder and Decoder are implemented as a sequence of (self-)attention and feedforward layers. We use sinusoidal position embeddings. Both Encoder and Decoder contain one transformer layer. The attention modules have 8 heads, feed-forward layers have dimension of 512 and the embedding is of dimension 16.
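For orientation, a loose PyTorch approximation of the Transformer learner's stated size is sketched below (one encoder and one decoder layer, 8 heads, feed-forward dimension 512, embedding dimension 16); the authors use the fairseq implementations, positional encodings are omitted here, and the vocabulary sizes are hypothetical.

```python
import torch.nn as nn

emb_dim = 16
src_vocab_size, tgt_vocab_size = 12, 12   # hypothetical vocabulary sizes

transformer = nn.Transformer(
    d_model=emb_dim, nhead=8,
    num_encoder_layers=1, num_decoder_layers=1,
    dim_feedforward=512, dropout=0.5, batch_first=True,
)
src_embed = nn.Embedding(src_vocab_size, emb_dim)
tgt_embed = nn.Embedding(tgt_vocab_size, emb_dim)
output_proj = nn.Linear(emb_dim, tgt_vocab_size)  # decoder states -> output logits
```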
In the Appendix, we report experiments where we vary the hyperparameters of the learners.
4.2 TRAINING AND EVALUATION
For all tasks, we follow the same training procedure. We train with the Adam optimizer (Kingma & Ba, 2014) for 3000 epochs. The learning rate starts at 10^{-5} and increases over the first 1000 warm-up updates until reaching 10^{-3}. We include all available examples in a single batch. We use teacher forcing (Goodfellow et al., 2016) and set the dropout probability to 0.5. For each learner, we perform training and evaluation 100 times, changing random seeds. At generation, to calculate FPA, we select the next token greedily. We use the model implementations from fairseq (Ott et al., 2019).
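A rough PyTorch sketch of this optimization schedule is shown below, assuming a linear warm-up (the warm-up shape is not specified above), with `model`, `batch`, and `loss_fn` assumed to be defined elsewhere and teacher forcing and dropout handled inside `loss_fn`. This is not the authors' fairseq configuration.

```python
import torch

def warmup_lr(step, warmup=1000, lr_start=1e-5, lr_end=1e-3):
    # Linear warm-up from lr_start to lr_end over the first `warmup` updates.
    if step >= warmup:
        return lr_end
    return lr_start + (lr_end - lr_start) * step / warmup

def train(model, batch, loss_fn, epochs=3000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    for step in range(epochs):                # one update per epoch: a single full batch
        for group in opt.param_groups:
            group["lr"] = warmup_lr(step)
        opt.zero_grad()
        loss = loss_fn(model, batch)
        loss.backward()
        opt.step()
```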
As discussed in Section 2, when calculating L, we use the training examples as the first transmitted block at t = 1. In Count-or-Memorization and Add-or-Multiply this block contains one example, in Hierarchical-or-Linear it has 4 examples, and in Composition-or-Memorization it has N + M examples. Next, we transmit examples obtained from the candidate rules in a randomized order, by blocks of size 1, 1, 1, and 4 for Count-or-Memorization, Add-or-Multiply, Composition-or-Memorization, and Hierarchical-or-Linear, respectively. At each step, the learner is re-trained from the same initialization, using the procedure and hyper-parameters discussed above.
Our training setup is typical for seq2seq training. It does not include any additional pressure towards description length minimization. Description length is solely used as a measure of inductive biases.
5 EXPERIMENTS
Count-or-Memorization We investigate here learners' biases toward the count and mem rules. We provide a single example a^l → b^l as the training set, varying l ∈ {10, 20, 30, 40}. We report the learners' performances in Table 1a. We observe that, independently of the length of the training example l, CNN-s2s and Transformer learners inferred perfectly the mem rule with FPA-mem > 0.90 (i.e. more than 90% of the random seeds output b^l for any given input a^m).
However, LSTM-based learners demonstrate a more complex behavior. With l = 10, both learners (with and without attention) exhibit a preference for mem. Indeed, while these learners rarely generalize perfectly to any of the hypotheses (0.0 FPA (no att.), 0.2/0.0 FPA for mem/count (att.)), they have a significantly lower L-mem. As l increases, LSTM-based learners become more biased toward count. Surprisingly, for l ≥ 30, most learner instances show sufficiently strong inductive biases to infer perfectly the non-trivial count hypothesis. With l = 40, 99% of random seeds of LSTM-s2s att. and all (100%) of LSTM-s2s no att. seeds generalized perfectly to count.
Further, we see that while L shows similar trends, it has a higher sensitivity. For example, while both LSTM-based learners have a similar FPA with l = 40, L demonstrates that LSTM-s2s no att. has a stronger count bias.
Add-or-Multiply In this task, we examine learners' generalization after training on the single example a^l → b^{2l}. We vary l ∈ {5, 10, 15, 20}. In Table 1b, we report FPA and L for the three generalization hypotheses, add, mul, and mem. We observe, similarly to the previous task, that CNN-s2s and Transformer learners always converge perfectly to memorization.
In contrast, LSTM-based learners show non-trivial generalizations. Examining first LSTM-s2s att., when l=5, we note that mem has a high FPA and an L considerably lower than others. This is consistent with the learner’s behavior in the Count-or-Memorization task. As we increase l, more interesting behavior emerges. First, L-mem decreases as l increases. Second, mul-type preference increases with l. Finally, L-add presents a U-shaped function of l. That is, for the medium example length l, the majority of learners switch to approximating the add rule (for l = 10). However, when l grows further, a considerable fraction of these learners start to follow a mul-type rule. Strikingly, 98% of LSTM-s2s att. seeds generalized perfectly to the non-trivial mul rule. As for LSTM-s2s no att., we do not observe a strong bias to infer any of the rules when l=5. However, when increasing l, the LSTM-s2s no att. behaves similarly to LSTM-s2s att.: at first it has a preference for add (FPA-add=0.95, for l=10) then for mul (e.g. FPA-mul=0.94, for l=20).
Hierarchical-or-Linear We now look at learners' preference for either hierar or linear generalizations. The architectures we use were only able to consistently learn the training examples with a depth d no higher than 4. Hence, in this experiment, we set d to 4.
We report in Table 1c the learners' FPA and L. We observe that CNN-s2s exhibits a strikingly different bias compared to all other learners, with a perfect agreement with the linear rule. In contrast, Transformer learners show a clear preference for hierar, with a high FPA (0.69) and a low L (1.21). Surprisingly, this preference increases with the embedding size, and Transformers with embedding size ≥ 64 admit an FPA-hierar of 1.00 (see Appendix for more details). LSTM-s2s att. learners demonstrate a similar preference for hierar, with an FPA of 0.30 and an L-hierar considerably lower than L-linear. Finally, while only 5% of LSTM-s2s no att. instances generalized to perfect hierar (and none to linear), L confirms their preference for the hierar hypothesis.
Composition-or-Memorization In this task, we set the number of primitives N = 40 and vary the number of compositional examples M ∈ {6, 24, 36} seen during training. Results are reported in Table 1d. First, we observe that FPA is only informative for CNN-s2s when trained with a large M. Indeed, for M = 6, CNN-s2s does not infer any of the candidate rules. However, according to the description length L, we note a significant preference for mem over comp. The more compositional examples CNN-based learners see during training, the more biased they become toward comp. The remaining learners have zero FPA for both candidate rules. However, according to description length, LSTM-based learners have preferences similar to CNN-s2s, although weaker. That is, they show a preference for mem for low M that declines in favor of comp as M increases. In contrast, Transformers show a strong bias for mem for all tested M.
Overall, across all the above experiments, we see that seq2seq learners demonstrate strikingly different biases. In many cases, these biases lead to non-trivial generalizations when facing ambiguity in the training data. This spans tasks that probe for memorization, arithmetic, hierarchical, and compositional reasoning. We found that a single example is sufficient for LSTM-based learners to learn counting, addition, and multiplication. Moreover, within the same task, they can switch from one explanation to another, depending on the training example length, with Addition-or-Multiplication being the task where this switch happens twice. In contrast, CNN-s2s and Transformers show a strong bias toward memorization. Furthermore, all learners except for CNN-s2s demonstrate a strong bias toward the hierarchical behavior. In the task of compositional generalization, CNN-s2s shows a strong bias toward compositional reasoning that appears after a few compositional training examples. On the other hand, Transformers show a preference for memorization over compositional generalization.
We see that the conclusions derived from comparing the description length of the candidate rules are in agreement with the results under accuracy-based metrics, but provide a more nuanced picture.
Robustness to hyper-parameters We observe that learners' biases depend, in many cases, on the length/number of the input examples. In the Appendix, we examine the impact of other hyper-parameters. In particular, we study the impact of (1) the learners' architecture size, by varying the number of layers, hidden and embedding sizes, and (2) the dropout probability. Our results show that in some cases a learner's architecture size can influence the strength of its inductive biases, but rarely modifies them: among the 136 tested settings, we observe only 3 cases of switched preferences. We also found, in line with (Arpit et al., 2017), that large dropout probabilities can prevent mem-type generalization. Finally, in the Appendix we show that a variant of Transformer learners, namely the joint source-target self-attention learner (He et al., 2018; Fonollosa et al., 2019), displays the same preferences as the standard Transformer learners. This variant resembles the “decoder-only” architecture used in language modeling (Radford et al., 2019; Brown et al., 2020). This result demonstrates that our tasks and bias measures could be applied for studying inductive biases of language model architectures.
6 RELATED WORK
Dessì & Baroni (2019) found that, unlike LSTM-s2s learners, CNN-s2s can perform compositional generalization on SCAN. Our experiments indicate that this only happens when enough compositional examples are provided in the training. Moreover, in such a case, attention-enabled LSTM-s2s also start to prefer compositional generalization over memorization.
McCoy et al. (2020) studied inductive biases of recurrent neural architectures in two synthetic tasks, English question formation and tense inflection. They found that only tree-based architectures show robust preference for hierarchical reasoning, in contrast to LSTM-s2s learners that generalized linearly. Our experiments on the hyperparameter robustness, reported in Appendix, indicate that the preferences over linear/hierarchical reasoning are strongly affected by the dropout probability, with learners shifting to linear behavior with low probabilities. As McCoy et al. (2020) experimented with a low dropout probability of 0.1, we believe this explains the misalignment of the conclusions.
Overall, our study shows that inductive biases are more complicated than they seemed in these prior works, and a more careful analysis is crucial. We believe that our tightly controlled setup with very few confounds is a good addition to those studies.
Another line of research investigates learners' capabilities theoretically, that is, the classes of hypotheses that a learner can discover (Siegelmann & Sontag, 1992; Suzgun et al., 2019; Merrill et al., 2020). For example, Weiss et al. (2018) demonstrated that LSTM cells can count. In turn, we demonstrate that LSTM-s2s learners are not only capable of arithmetic behavior but also biased toward it.
7 DISCUSSION AND CONCLUSION
In this work, we studied inductive biases of standard seq2seq learners, Transformer-, LSTM-, and CNN-based. To do so, we introduced four new tasks, which allowed us to cover an interesting spectrum of behaviors useful for language learning. In particular, we considered arithmetic, hierarchical, and compositional “reasoning”. Next, we connected the problem of finding and measuring inductive biases to Solomonoff’s theory of induction and proposed to use a dataset’s description length under a learner as a tool for sensitive measurement of inductive biases.
In our experiments, we found that the seq2seq learners have strikingly different inductive biases and some of them generalize non-trivially when facing ambiguity. For instance, a single training example is sufficient for LSTM-based learners to learn perfectly how to count, to add, and to multiply by a constant. Transformers and, to a lesser degree, LSTM-s2s demonstrated preferences for hierarchical reasoning, a bias that has been argued to govern children's acquisition of syntax. Interestingly, such biases arose with no explicit wiring for them. Our results thus support Elman et al. (1998)'s theory, which states that humans' inductive biases can arise from low-level architectural constraints in the brain, with no need for an explicit encoding of linguistic structure. However, how the brain, or, more generally, a learner is wired to admit a specific inductive bias is still an important open question.
Across our experiments, we also observed that description length is consistent with “intuitive” measurements of inductive biases, and, at the same time, it turned out to be more sensitive. This also indicates that, in the presence of ambiguity in the training data, a learner is more likely to follow the alternative with the shorter description length (i.e. the simplest one) when applied on unseen data, showing consistency with the prescriptions of the theory of induction (Solomonoff, 1964). A similar simplicity preference is argued to play a role in human language acquisition (Perfors et al., 2011).
Our work provides simple tools to investigate learners’ biases. We first show that FPA is an intuitive measure to study biases when provided with simple tasks. Second, we present description length as a robust measure to fairly compare learners’ biases. This metric considers learners’ size and their ease of learning as opposed to accuracy-based metrics. Besides, it is a model- and task-agnostic measure that succeeds in unveiling learners’ biases even when presented with more complex tasks with spurious correlations.
Our findings can guide architecture selection in low-data regimes, where inductive biases might have a higher influence on a model's generalization performance. Large sparse datasets can also benefit from predictable behavior in few-shot scenarios akin to what we consider.
Finally, our results demonstrate that relatively large deep learning models can generalize non-trivially from as little as one example – as long as the task is aligned with their inductive biases. We believe this should reinforce interest in future work on injecting useful inductive biases into our learners and, we hope, our findings and setup can provide a fertile ground for such work.
ACKNOWLEDGEMENTS
The authors are grateful to Marco Baroni, Emmanuel Dupoux, Emmanuel Chemla and participants of the EViL seminar for their feedback on our work.
A CAN SEQ2SEQ LEARNERS MULTIPLY BY 3?
In our experiments on the Add-or-Multiply task, we saw that LSTM-s2s learners are able to learn to multiply by 2 from a single example. A natural further question is whether these learners can learn to multiply by larger numbers. Here we only provide a preliminary study, in the hope of inspiring more focused studies.
We build a task that is similar to Add-or-Multiply, but centered around multiplication by 3 instead of 2. The single training example represents a mapping of an input string a^l to an output string b^{3l}. As test inputs, we use a^m with m coming from the interval [l − 3, l + 3].
Since 3x can be represented as a combination of addition and multiplication in several ways (3x = x + 2x = 2x + x), we consider 4 different candidate rules. As before, by mem we denote the constant mapping a^m → b^{3l}. mul1 represents the mapping a^m → b^{2l+m}, mul2 corresponds to a^m → b^{l+2m}, and mul3 denotes a^m → b^{3m}. The “explanation” mul1 is akin to the add rule in the Add-or-Multiply task. We use the same hyperparameters and training procedure described in Section 4.
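These candidate rules are easy to state programmatically; a hypothetical encoding, with outputs expressed as functions of the test-input length m, could be:

```python
def multiply_by_three_rules(l):
    return {
        "mem":  lambda m: ["b"] * (3 * l),      # constant output b^{3l}
        "mul1": lambda m: ["b"] * (2 * l + m),  # a^m -> b^{2l+m}
        "mul2": lambda m: ["b"] * (l + 2 * m),  # a^m -> b^{l+2m}
        "mul3": lambda m: ["b"] * (3 * m),      # a^m -> b^{3m}
    }
```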
We report the results in Table 2. As in the Add-or-Multiply task, both CNN-s2s and Transformer learners show a strong preference for the mem rule, while LSTM-based learners switch their generalization according to the length of the training example l. Indeed, for CNN-s2s and Transformer, we note an FPA-mem > 0.97 independently of l (with L-mem significantly lower than the others). LSTM-s2s att. learners start by inferring the mem rule for l = 5 (FPA=0.64, L=2.44), then switch to a comparable preference for mul2 and mul3 when l = 10, and finally show a significant bias toward the mul3 hypothesis for l ∈ {15, 20} (e.g. FPA-mul3=0.76 for l = 15). LSTM-s2s no att. learners are subject to a similar switch of preference. That is, for l = 5, these learners have a significant bias toward mul1 (FPA=0.49). Strikingly, when l = 10, 92% of LSTM-s2s no att. learners inferred perfectly the mul2 rule after one training example. Finally, we observe another switch, to approximating mul3, for l ∈ {15, 20}.
Overall, while CNN-s2s and Transformer learners show a significant and robust bias toward mem, LSTM-based learners generalize differently depending on the training input length. In particular, our results suggest that these learners avoid adding very large integers by switching to the multiplicative explanation in those cases. Answering our initial question, we see that LSTM-based learners can learn to multiply by 3 from a single example.
5Only 21% of learners succeeded in learning the training example in this setting. 6Only 51% of learners succeeded in learning the training example in this setting.
B ROBUSTNESS TO CHANGES IN ARCHITECTURE
In this section, we examine how changing different hyper-parameters affects learners’ preferences for memorization, arithmetic, hierarchical and compositional reasoning. In particular, we vary the number of layers, hidden and embedding sizes of the different learners and test their generalization on Count-or-Memorization, Add-or-Multiply, Hierarchical-or-Linear, and Composition-or-Memorization tasks.
In all the experiments, we fix l = 40 and l = 20 for Count-or-Memorization and Add-or-Multiply respectively, d = 4 for Hierarchical-or-Linear and M = 36 for Composition-or-Memorization. Finally, we keep the same training and evaluation procedure as detailed in Section 4.2 of the main text. However, we use 20 different random seeds instead of 100.
B.1 NUMBER OF HIDDEN LAYERS (NLayer)
We experiment with the standard seq2seq learners described in the main paper and vary the number of layers NLayer ∈ {1, 3, 10}. Results are reported in Table 3.
First, when looking at the interplay between mem and count (Table 3a), we observe that, independently of NLayer, more than 97% of CNN-s2s and Transformer learners inferred perfectly the mem rule (i.e. output b^{40} for any given input a^m). Further, while the preference for count decreases as NLayer increases, LSTM-s2s att. learners display in all cases a significant bias toward count, with a large FPA and an L significantly lower than L-mem. However, we note a decline of the bias toward count when considering LSTM-s2s no att. learners. For NLayer = 1, 100% of the seeds generalize to perfect count, versus 6% for NLayer = 10. Note that this lower preference for count is accompanied by an increase of preference for mem. However, there is no significant switch of preferences according to L.
Second, we consider the Add-or-Multiply task, where we examine the three generalization hypotheses add, mul, and mem. Results are reported in Table 3b. Similarly to the previous task, Transformer and CNN-s2s learners are robust to the change in the number of layers: they perfectly follow the mem rule with FPA-mem=1.00. However, LSTM-based learners show a more complex behavior: while single-layer LSTM-s2s att. and no att. learners demonstrate a considerable bias for mul (FPA-mul>0.94), this bias fades in favor of add and mem for larger architectures.
Third, in Table 3c, we observe that larger learners are slightly less biased toward hierar and linear. However, we do not observe any switch of preferences. That is, across different NLayer values, CNN-s2s learners prefer the linear rule, whereas Transformers and, to a lesser degree, LSTM-based learners show a significant preference for the hierar rule.
Finally, we do not observe an impact of NLayer on the Composition-or-Memorization task (see Table 3d), demonstrating the robustness of biases w.r.t. learners’ size.
In sum, we note little impact of NLayer on learners' generalizations. Indeed, LSTM-based learners can learn to perform counting from a single training example, even when experimenting with 10-layer architectures. For most tested NLayer values, they favor mul and add over mem. In contrast, Transformer and CNN-s2s learners perform systematic memorization of the single training example. Furthermore, independently of NLayer, Transformer and LSTM-based learners show a bias toward the hierar hypothesis over the linear one, while CNN-based learners do the opposite. Finally, the compositional experiments further confirm that inductive biases are barely influenced by the change in the number of layers.
B.2 HIDDEN SIZE (SHidden)
In the main text, we experimented with standard seq2seq learners with a hidden size SHidden = 512. In this section, we look at the effect of SHidden, varying it over {128, 512, 1024}. We report the learners' performances in Table 4.
First, Table 4a demonstrates how little effect the hidden size has on learners' counting/memorization performance. Indeed, for any given SHidden, between 84% and 100% of LSTM-based learners learn perfect counting after only one training example. Similarly, and with even lower variation, more than 91% of Transformer and CNN-s2s learners memorize the single training example, outputting b^{40} for any given a^m.
The same observation can be made when studying the interplay between the hierar and linear biases (Table 4c) and the comp and mem biases (Table 4d). Concretely, Tables 4c and 4d show that learners' generalizations are stable across SHidden values, with the exception of LSTM-based learners, which lose significance for some tested SHidden values. Yet, there is no switch of preference.
Finally, as demonstrated in Table 4b, all Transformer and CNN-s2s learners perform perfect memorization when tested on the Add-or-Multiply task, independently of their hidden sizes. Both LSTM-based learners are significantly biased toward mul for SHidden ∈ {512, 1024}. However, when experimenting with a smaller SHidden (=128), we detect a switch of preference for LSTM-s2s no att. learners: they start approximating an add-type rule (with a significantly lower L). Lastly, we do not distinguish any significant difference between add and mul for LSTM-s2s att. when SHidden = 128.
Taken together, learners’ biases are quite robust to SHidden variations. We however note a switch of preference from mul to add for LSTM-s2s no att. learners when decreasing SHidden. Furthermore, we see a loss of significant preference in five distinct | 1. What is the focus of the paper regarding neural network architectures?
2. What are the strengths and weaknesses of the proposed approach in evaluating inductive biases?
3. How does the paper contribute to understanding the complexity of inductive biases?
4. What are some suggestions for improving the paper's impact regarding practical insights, fair comparisons, or identifying challenges?
5. How does the rebuttal address the concerns raised by Reviewer 5 regarding contextualization in related research? | Review | Review
The topic of this paper, to investigate the inductive biases of different neural network architectures, is super interesting. They do this by considering the extreme case where there is only one training example and define a set of biases based on the type of solutions the model converges to when all of the options are equally optimal based on the given training example.
The authors claim that minimum description length is a sensitive measure of inductive biases. As far as I understood, the idea is that a solution (rule) can be simple for one learner, while more complicated for the other one, based on their architectural differences. And by measuring the minimum description length of data generated by a certain rule under a specific learner, we can tell how simple the rule is for the learner. I am a bit puzzled by the way this claim is phrased (I don't have anything against the main argument though). Isn't minimum description length itself an inductive bias that is probably applied on all these models? The way I see this is that a model that achieves lower description length is finding a simpler solution, and the simplicity of the solution can be characterised by the different solution preferences that are defined in this paper (e.g. memorization vs counting). I would really appreciate if the authors provide a clear distinction between different sources of the inductive biases and their interactions in their experiments.
A nice point about the paper is that in their experiments they evaluate the sensitivity of the preferences of different architectures to their hyper-parameters, and it seems these preferences are consistent for most cases, but not surprisingly they observe that hyper-parameters like the dropout rate can affect the generalisation behaviour of the models. It would be much nicer if there were a bit more connection in the paper about how these biases in the extreme case of only one training example cascade and affect the performance of the models when trained on larger scale datasets. For example, in a simple and still abstract case of gradually increasing the number of training examples, how does the behaviour of these models change? Or in case of the point about few shot learning that is mentioned in the conclusion, I think it would be nice to have some experiments to show that the models will have these inductive biases and preferences even after being pre-trained on large datasets.
In the end I agree with the authors that "overall, our study shows that inductive biases are more complicated than it seemed in these prior works and a more careful analysis is crucial."
And while I think quantifying inductive biases is a super interesting, challenging and important problem to address, this paper needs a bit more work to be more impactful in terms of either providing us with practical insights about the inductive biases of different architectures, or providing us with benchmarks or tools to be able to fairly compare the inductive biases of different models, or just more clearly identifying the main challenges for doing this in a beneficial way (e.g., because of the complicated interactions between different sources of inductive biases and how this all changes when the amount of data and the capacity of the models increase/decrease).
Post Rebuttal: I like this paper and would vote for it to get accepted on the merits of: (1) Their finding about how MDL can be an indicator of inductive biases of the models; (2) introducing an experimental framework for studying inductive biases of the models. I'd also like to appreciate the authors' efforts to address reviewers' concerns in the rebuttal. I agree with reviewer #5 that the paper can be better contextualised in the related research area, but I think the paper is improved from this aspect a little bit during rebuttal and this is something that in general can potentially be fixed for the camera ready version. |
ICLR | Title
What they do when in doubt: a study of inductive biases in seq2seq learners
Abstract
Sequence-to-sequence (seq2seq) learners are widely used, but we still have only limited knowledge about what inductive biases shape the way they generalize. We address that by investigating how popular seq2seq learners generalize in tasks that have high ambiguity in the training data. We use four new tasks to study learners’ preferences for memorization, arithmetic, hierarchical, and compositional reasoning. Further, we connect to Solomonoff’s theory of induction and propose to use description length as a principled and sensitive measure of inductive biases. In our experimental study, we find that LSTM-based learners can learn to perform counting, addition, and multiplication by a constant from a single training example. Furthermore, Transformer and LSTM-based learners show a bias toward the hierarchical induction over the linear one, while CNN-based learners prefer the opposite. The latter also show a bias toward a compositional generalization over memorization. Finally, across all our experiments, description length proved to be a sensitive measure of inductive biases.
1 INTRODUCTION
Sequence-to-sequence (seq2seq) learners (Sutskever et al., 2014) demonstrated remarkable performance in machine translation, story generation, and open-domain dialog (Sutskever et al., 2014; Fan et al., 2018; Adiwardana et al., 2020). Yet, these models have been criticized for requiring a tremendous amount of data and being unable to generalize systematically (Dupoux, 2018; Loula et al., 2018; Lake & Baroni, 2017; Bastings et al., 2018). In contrast, humans rely on their inductive biases to generalize from a limited amount of data (Chomsky, 1965; Lake et al., 2019). Due to the centrality of humans’ biases in language learning, several works have studied inductive biases of seq2seq models and connected their poor generalization to the lack of the “right” biases (Lake & Baroni, 2017; Lake et al., 2019).
In this work, we focus on studying inductive biases of seq2seq models. We start from an observation that, generally, multiple explanations can be consistent with a limited training set, each leading to different predictions on unseen data. A learner might prefer one type of explanations over another in a systematic way, as a result of its inductive biases (Ritter et al., 2017; Feinman & Lake, 2018).
To illustrate the setup we work in, consider a quiz-like question: if f(3) maps to 6, what does f(4) map to? The “training” example is consistent with the following answers: 6 (f(x) ≡ 6); 7 (f(x) = x + 3); 8 (f(x) = 2 · x); any number z, since we always can construct a function such that f(3) = 6 and f(4) = z. By analyzing the learner’s output on this new input, we can infer its biases.
This example demonstrates how biases of learners are studied through the lenses of the poverty of the stimulus principle (Chomsky, 1965; 1980): if nothing in the training data indicates that a learner should generalize in a certain way, but it does nonetheless, then this is due to the biases of the learner. Inspired by the work of Zhang et al. (2019) in the image domain, we take this principle to the extreme and study biases of seq2seq learners in the regime of very few training examples, often as little as one. Under this setup, we propose four new synthetic tasks that probe seq2seq learners’ preferences to memorization-, arithmetic-, hierarchical- and compositional-based “reasoning”.
Next, we connect to the ideas of Solomonoff’s theory of induction (Solomonoff, 1964) and Minimal Description Length (Rissanen, 1978; Grunwald, 2004) and propose to use description length, under a learner, as a principled measure of its inductive biases.
Our experimental study1 shows that the standard seq2seq learners have strikingly different inductive biases. We find that LSTM-based learners can learn non-trivial counting-, multiplication-, and addition-based rules from as little as one example. CNN-based seq2seq learners would prefer linear over hierarchical generalizations, while LSTM-based ones and Transformers would do just the opposite. When investigating the compositional reasoning, description length proved to be a sensitive measure. Equipped with it, we found that CNN-, and, to a lesser degree, LSTM-based learners prefer compositional generalization over memorization when provided with enough composite examples. In turn, Transformers show a strong bias toward memorization.
2 SEARCHING FOR INDUCTIVE BIASES
To formalize the way we look for inductive biases of a learner M, we consider a training dataset of input/output pairs, T = {xi, yi}, 1 ≤ i ≤ n, and a hold-out set of inputs, H = {xi}, n + 1 ≤ i ≤ k. W.l.o.g., we assume that there are two candidate “rules” that explain the training data but do not coincide on the hold-out data: C1(xi) = C2(xi) = yi for 1 ≤ i ≤ n, and ∃i, n + 1 ≤ i ≤ k, such that C1(xi) ≠ C2(xi). To compare the preferences of a learner M toward those two rules, we fit the learner on the training data T and then compare its predictions on the hold-out data H to the outputs of the rules. We refer to this approach as “intuitive”. Usually, the measures of similarity between the outputs are task-specific: McCoy et al. (2020) used accuracy of the first term, Zhang et al. (2019) used correlation and MSE, and Lake & Baroni (2017) used accuracy calculated on the entire output sequence.
We too start with an accuracy-based measure. We define the fraction of perfect agreement (FPA) between a learner M and a candidate generalization rule C as the fraction of seeds that generalize perfectly in agreement with that rule on the hold-out set H. The larger the FPA of M w.r.t. C, the more biased M is toward C. However, FPA does not account for imperfect generalization, nor does it allow a direct comparison between two candidate rules when both are dominated by a third candidate rule. Hence, below we propose a principled approach based on description length.
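To make the FPA computation concrete, here is a minimal sketch; the function and variable names are ours for illustration and do not come from the released code, which may organize this differently.

```python
from typing import Callable, Dict, List, Sequence

def fraction_of_perfect_agreement(
    predictions_per_seed: List[Dict[str, Sequence[str]]],  # one {input: predicted tokens} map per random seed
    rule: Callable[[str], Sequence[str]],                   # candidate generalization rule C
    holdout_inputs: List[str],                              # hold-out set H
) -> float:
    """Fraction of seeds whose predictions match rule C on *every* hold-out input."""
    agreeing = sum(
        all(list(preds[x]) == list(rule(x)) for x in holdout_inputs)
        for preds in predictions_per_seed
    )
    return agreeing / len(predictions_per_seed)
```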
Description Length and Inductive Biases At the core of the theory of induction (Solomonoff, 1964) is the question of continuation of a finite string that is very similar to our setup. Indeed, we can easily re-formulate our motivating example as a string continuation problem: “3→ 6; 4→”. The solution proposed by Solomonoff (1964) is to select the continuation that admits “the simplest explanation” of the entire string, i.e. that is produced by programs of the shortest length (description length).
Our intuition is that when a continuation is “simple” for a learner, then this learner is biased toward it. We consider a learner M to be biased toward C1 over C2 if the training set and its extension according to C1 have a shorter description length (for M) compared to that of C2. Denoting the description length of a dataset D under the learner M as LM(D), we hypothesise that if LM({C1(xi) : 1 ≤ i ≤ k}) < LM({C2(xi) : 1 ≤ i ≤ k}), then M is biased toward C1.
Calculating Description Length To find the description length of data under a fixed learner, we use the online (prequential) code (Rissanen, 1978; Grunwald, 2004; Blier & Ollivier, 2018).
The problem of calculating LM(D), D = {xi, yi}, 1 ≤ i ≤ k, is treated as a problem of transferring outputs yi one-by-one, in a compressed form, between two parties, Alice (sender) and Bob (receiver). Alice has the entire dataset {xi, yi}, while Bob only has the inputs {xi}. Before the transmission starts, both parties agree on the initialization of the model M, the order of the inputs {x}, random seeds, and the details of the learning procedure. Outputs {yi} are sequences of tokens from a vocabulary V. W.l.o.g. we fix some order over {x}. We assume that, given x, the learner M produces a probability distribution over the space of the outputs y, pM(y|x).
1Code used in the experiments can be found at https://github.com/facebookresearch/FIND.
The very first output y1 can be sent using not more than c = |y1| · log |V| nats, with a naïve encoding.2 After that, both Alice and Bob update their learners using the example (x1, y1), available to both of them, and get identical instances of M1. Further transfer is done iteratively under the invariant that both Alice and Bob start every step t with exactly the same learner Mt−1 and finish with an identical Mt. At step t, Alice would use Mt−1 to encode the next output yt. This can be done using (− log pMt−1(yt|xt)) nats (MacKay, 2003). Since Bob has exactly the same model, he can decode the message to obtain yt and use the new pair (xt, yt) to update his model and get Mt. Alice also updates her model, and proceeds to send the next yt+1 (if any), encoding it with the help of Mt. The cumulative number of nats transmitted is:
LM(D) = −∑_{t=2}^{k} log pMt−1(yt | xt) + c.    (1)
The obtained code length of Eq. 1 depends on the order in which the outputs y are transmitted and the procedure we use to update M. To account for that, we average out the data order by training with multiple random seeds. Further, for larger datasets, full re-training after adding a new example is impractical and, in such cases, examples can be transmitted in blocks.
If we measure the description length of the training data T shuffled with the hold-out data H , both datasets would have symmetric roles. However, there is a certain asymmetry in the extrapolation problem: we are looking for an extrapolation from T , not vice-versa. To break this symmetry, we always transmit outputs for the entire training data as the first block.
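A minimal sketch of computing this prequential description length might look as follows; `train_from_scratch` and `sequence_log_prob` are placeholders for the learner-specific training and scoring routines (they are not functions from the released code), and, following footnote 2, the cost c of the first block is dropped.

```python
from typing import Callable, List, Sequence, Tuple

Example = Tuple[Sequence[str], Sequence[str]]  # (input tokens, output tokens)

def prequential_description_length(
    train_block: List[Example],             # transmitted first; its cost c is learner-independent and dropped
    candidate_blocks: List[List[Example]],  # hold-out pairs labelled by a candidate rule, split into blocks
    train_from_scratch: Callable[[List[Example]], object],                        # re-train from a fixed initialization
    sequence_log_prob: Callable[[object, Sequence[str], Sequence[str]], float],   # log p_M(y | x), natural log
) -> float:
    """Accumulate -log p(y_t | x_t) in nats, block by block, as in Eq. 1."""
    sent: List[Example] = list(train_block)
    total_nats = 0.0
    for block in candidate_blocks:
        model = train_from_scratch(sent)  # both parties can reproduce this model exactly
        for x, y in block:
            total_nats -= sequence_log_prob(model, x, y)
        sent.extend(block)                # the block is now available to both parties
    return total_nats
```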
While LM(D) is seemingly different from the “intuitive” measures introduced before, we can illustrate their connection as follows. Consider a case where we first transmit the training outputs as the first block and all of the hold-out data outputs under C, C(H), as the second block. Then the description length is equal to cross-entropy of the trained learner on the hold-out data, recovering a process akin to the “intuitive” measuring of inductive biases. With smaller blocks, the description length also catches whether a learner is capable of finding regularity in the data fast, with few data points; hence it also encodes the speed-of-learning ideas for measuring inductive biases (Chaabouni et al., 2019).
Finally, the description length has three more attractive properties when measuring inductive biases: (a) it is domain-independent, i.e. it can be applied, for instance, in the image domain; (b) it allows comparisons across models that account for model complexity; and (c) it enables a direct comparison between two candidate rules (as we will show in Section 5).
3 TASKS
We describe four tasks that we use to study inductive biases of seq2seq learners. We select these tasks to cover different aspects of learners’ behavior. Each task has a highly ambiguous training set which is consistent with an infinite number of generalizations. We pre-select several candidate rules highlighting biases that are useful for language processing and known to exist in humans, or are otherwise reasonable. Our experiments show that these rules cover many cases of the learners’ behavior.
The first two tasks study biases in arithmetic reasoning: Count-or-Memorization quantifies learners’ preference for counting vs. a simple memorization and Add-or-Multiply further probes the learners’ preferences in arithmetic operations. We believe these tasks are interesting, as counting is needed in some NLP problems like processing linearized parse trees (Weiss et al., 2018). The third task, Hierarchical-or-Linear, contrasts hierarchical and linear inductive reasoning. The hierarchical reasoning bias is believed to be fundamental in learning some syntactical rules in human acquisition of syntax (Chomsky, 1965; 1980). Finally, with the Composition-or-Memorization task, we investigate biases for systematic compositionality, which are central for human capabilities in language. Figure 1 illustrates these four tasks.
2As we are interested in comparing candidate generalization rules, the value of the additive constant c is not important, as it is learner- and candidate-independent. In experiments, we subtract it from all measurements.
By ak we denote a sequence that contains token a repeated k times. For training, we represent sequences in a standard way: the tokens are one-hot-encoded separately, and we append a special end-of-sequence token to each sequence. Input and output vocabularies are disjoint.
Count-or-Memorization: In this task, we contrast learners’ preferences for counting vs. memorization. We train models to fit a single training example with input al and output bl (i.e., to perform the mapping al → bl) and test them on am with m ∈ [l−10, l+10]. If a learner learns the constant function, outputting bl independently of its inputs, then it follows the mem strategy. On the other hand, if it generalizes to the am → bm mapping, then the learner is biased toward the count strategy.
Add-or-Multiply: This task is akin to the motivating example in Section 1. The single training example represents a mapping of an input string al to an output string b2l. As test inputs, we generate am for m in the interval [l − 3, l + 3]. We consider the learned rule to be consistent with mul if, for all m, the input/output pairs are consistent with am → b2m. Similarly, if they are consistent with am → bm+l, we say that the learner follows the addition rule, add. Finally, the learner can learn a constant mapping am → b2l for any m. Again, we call this rule mem.
Hierarchical-or-Linear: For a fixed depth d, we train learners on four training examples xdyxd → y where x, y ∈ {a, b}.3 Each training example has a nested structure, where d defines its depth. A learner with a hierarchical bias (hierar) would output the middle symbol. We also consider the linear rule (linear), in which the learner outputs the (d+1)th symbol of its input. To probe learners’ biases, we test them on inputs with different depths m ∈ [d− 2, d + 2]. Note that to examine the linear rule (i.e., whether the learner outputs the (d+1)th symbol of any test input of depth m), we need m ≥ d/2. As in the previous tasks, there is no vocabulary sharing between a model’s inputs and outputs (input and output tokens a and b are different).
Composition-or-Memorization: We take inspiration from SCAN (Lake & Baroni, 2017), a benchmark used for studying systematic generalization of seq2seq learners.4 The input vocabulary has N symbols ai that are one-to-one mapped into N output symbols bi (i.e., ai → bi). In addition, there is a modifier token thrice: when thrice precedes an input symbol ai, the corresponding output is repeated three times: thrice ai → bibibi. We train a learner on all non-compositional examples (ai → bi) and M (M < N) compositional examples (thrice ai → bibibi). At test time, we feed the learner the remaining compositional examples (thrice ai, i > M). If the learner generalizes to the mapping thrice ai → bibibi for i > M, we consider it to be biased toward compositional reasoning (comp). As an alternative generalization, we consider a mapping where all inputs containing ai are mapped into bi: thrice ai → bi (i > M). We call this generalization memorization (mem).
3This mapping then consists of four combinations: adbad → b; adaad → a; bdabd → a; and bdbbd → b.
4In the Appendix, we report a study of inductive biases on the SCAN data.
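To make the first two tasks concrete, the single training example and the candidate-rule outputs can be generated along the following lines; this is a schematic reconstruction for illustration, not the data-generation code released with the paper, and it uses plain strings in place of token sequences.

```python
def count_or_memorization(l: int, radius: int = 10):
    """Training pair a^l -> b^l and, for each test input a^m, the outputs
    expected under the count and mem candidate rules."""
    train = ("a" * l, "b" * l)
    tests = {
        "a" * m: {"count": "b" * m, "mem": "b" * l}
        for m in range(l - radius, l + radius + 1)
    }
    return train, tests

def add_or_multiply(l: int, radius: int = 3):
    """Training pair a^l -> b^(2l) and test outputs under add, mul, and mem."""
    train = ("a" * l, "b" * (2 * l))
    tests = {
        "a" * m: {"add": "b" * (m + l), "mul": "b" * (2 * m), "mem": "b" * (2 * l)}
        for m in range(l - radius, l + radius + 1)
    }
    return train, tests

# For example, with l = 3 the test input "aaaa" maps to "bbbb" under count
# and to "bbb" under mem in Count-or-Memorization.
```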
4 METHODOLOGY
4.1 SEQUENCE-TO-SEQUENCE LEARNERS
We experiment with three standard seq2seq models: LSTM-based seq2seq (LSTM-s2s) (Sutskever et al., 2014), CNN-based seq2seq (CNN-s2s) (Gehring et al., 2017), and Transformer (Vaswani et al., 2017). All share a similar Encoder/Decoder architecture (Sutskever et al., 2014).
LSTM-s2s Both Encoder and Decoder are implemented as LSTM cells (Hochreiter & Schmidhuber, 1997). Encoder encodes its inputs incrementally from left to right. We experiment with architectures without (LSTM-s2s no att.) and with (LSTM-s2s att.) an attention mechanism (Bahdanau et al., 2014). For the first three tasks, both Encoder and Decoder are single-layer LSTM cells with hidden size of 512 and embedding of dimension 16.
CNN-s2s Encoder and Decoder are convolutional networks (LeCun et al., 1990), followed by GLU non-linearities (Dauphin et al., 2017) and an attention layer. To represent positions of input tokens, CNN-s2s uses learned positional embeddings. Encoder and Decoder networks have one layer with 512 filters and a kernel width of 3. We set the embedding size to 16.
Transformer Encoder and Decoder are implemented as a sequence of (self-)attention and feedforward layers. We use sinusoidal position embeddings. Both Encoder and Decoder contain one transformer layer. The attention modules have 8 heads, feed-forward layers have dimension of 512 and the embedding is of dimension 16.
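As a rough illustration of the scale of these learners, a bare-bones PyTorch version of the LSTM-s2s learner without attention (single-layer encoder/decoder, hidden size 512, embedding size 16) could look as follows. The experiments in the paper use the fairseq implementations, which add attention variants, generation, and other details omitted in this sketch.

```python
import torch
import torch.nn as nn

class TinyLSTMSeq2Seq(nn.Module):
    """Schematic LSTM-s2s (no attention): 1-layer encoder/decoder,
    hidden size 512, embedding size 16, as quoted above."""
    def __init__(self, src_vocab: int, tgt_vocab: int,
                 emb_dim: int = 16, hidden: int = 512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, num_layers=1, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src: torch.Tensor, tgt_in: torch.Tensor) -> torch.Tensor:
        _, state = self.encoder(self.src_emb(src))               # encode left to right
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), state)   # teacher forcing on shifted targets
        return self.out(dec_out)                                 # logits over the output vocabulary
```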
In Appendix we report experiments where we vary hyperparameters of the learners.
4.2 TRAINING AND EVALUATION
For all tasks, we follow the same training procedure. We train with the Adam optimizer (Kingma & Ba, 2014) for 3000 epochs. The learning rate starts at 10^-5 and increases over the first 1000 warm-up updates until it reaches 10^-3. We include all available examples in a single batch. We use teacher forcing (Goodfellow et al., 2016) and set the dropout probability to 0.5. For each learner, we perform training and evaluation 100 times, changing random seeds. At generation, to calculate FPA, we select the next token greedily. We use the model implementations from fairseq (Ott et al., 2019).
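The optimizer and warm-up schedule above can be approximated with standard PyTorch utilities; the exact fairseq scheduler used in the experiments may differ in details, so the following is only a sketch.

```python
import torch

def build_optimizer(model: torch.nn.Module, warmup_updates: int = 1000,
                    lr_start: float = 1e-5, lr_peak: float = 1e-3):
    """Adam with a learning rate growing linearly from lr_start to lr_peak
    over the first `warmup_updates` steps, then held at lr_peak."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr_peak)

    def warmup(step: int) -> float:
        if step >= warmup_updates:
            return 1.0
        # return a multiplicative factor of the peak learning rate
        return (lr_start + (step / warmup_updates) * (lr_peak - lr_start)) / lr_peak

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup)
    # call scheduler.step() once per update so the warm-up is counted in updates, not epochs
    return optimizer, scheduler
```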
As discussed in Section 2, when calculating L, we use the training examples as the first transmitted block at t = 1. In Count-or-Memorization and Add-or-Multiply this block contains one example, in Hierarchical-or-Linear it has 4 examples, and in Composition-or-Memorization it has N + M examples. Next, we transmit examples obtained from the candidate rules in a randomized order, by blocks of size 1, 1, 1, and 4 for Count-or-Memorization, Add-or-Multiply, Composition-or-Memorization, and Hierarchical-or-Linear respectively. At each step, the learner is re-trained from the same initialization, using the procedure and hyper-parameters as discussed above.
Our training setup is typical for seq2seq training. It does not include any additional pressure towards description length minimization. Description length is solely used as a measure of inductive biases.
5 EXPERIMENTS
Count-or-Memorization We investigate here learners’ biases toward count and mem rules. We provide a single example al → bl as the training set, varying l ∈ {10, 20, 30, 40}. We report the learners’ performances in Table 1a. We observe that, independently of the length of the training example l, CNN-s2s and Transformer learners inferred perfectly the mem rule with FPA-mem > 0.90 (i.e. more than 90% of the random seeds output bl for any given input am).
However, LSTM-based learners demonstrate a more complex behavior. With l = 10, both learners (with and without attention) exhibit a preference for mem. Indeed, while these learners rarely generalize perfectly to any of the hypotheses (0.0 FPA (no att.), 0.2/0.0 FPA for mem/count (att.)), they have a significantly lower L-mem. As l increases, LSTM-based learners become more biased
toward count. Surprisingly, for l ≥ 30, most learner instances show sufficiently strong inductive biases to infer perfectly the non-trivial count hypothesis. With l = 40, 99% of random seeds of LSTM-s2s att. and all (100%) of LSTM-s2s no att. seeds generalized perfectly to count.
Further, we see that if L shows similar trends, it has a higher sensitivity. For example, while both LSTM-based learners have a similar FPA with l = 40, L demonstrates that LSTM-s2s no att. has a stronger count bias.
Add-or-Multiply In this task, we examine learners’ generalization after training on the single example al → b2l. We vary l ∈ {5, 10, 15, 20}. In Table 1b, we report FPA and L for the three generalization hypotheses, add, mul, and mem. We observe, similarly to the previous task, that CNN-s2s and Transformer learners always converge perfectly to memorization.
In contrast, LSTM-based learners show non-trivial generalizations. Examining first LSTM-s2s att., when l=5, we note that mem has a high FPA and an L considerably lower than others. This is consistent with the learner’s behavior in the Count-or-Memorization task. As we increase l, more interesting behavior emerges. First, L-mem decreases as l increases. Second, mul-type preference increases with l. Finally, L-add presents a U-shaped function of l. That is, for the medium example length l, the majority of learners switch to approximating the add rule (for l = 10). However, when l grows further, a considerable fraction of these learners start to follow a mul-type rule. Strikingly, 98% of LSTM-s2s att. seeds generalized perfectly to the non-trivial mul rule. As for LSTM-s2s no att., we do not observe a strong bias to infer any of the rules when l=5. However, when increasing l, the LSTM-s2s no att. behaves similarly to LSTM-s2s att.: at first it has a preference for add (FPA-add=0.95, for l=10) then for mul (e.g. FPA-mul=0.94, for l=20).
Hierarchical-or-Linear We now look at learners’ preference for either hierar or linear generalizations. The architectures we use were only able to consistently learn the training examples with depth d no higher than 4. Hence, in this experiment, we set d to 4.
We report in Table 1c the learners’ FPA and L. We observe that CNN-s2s exhibits a strikingly different bias compared to all other learners, with a perfect agreement with the linear rule. In contrast, Transformer learners show a clear preference for hierar with a high FPA (0.69) and a low L (1.21). Surprisingly, this preference increases with the embedding size, and Transformers with embedding size ≥ 64 admit an FPA-hierar of 1.00 (see Appendix for more details). LSTM-s2s att. learners demonstrate a similar preference for hierar, with an FPA of 0.30 and an L-hierar considerably lower than L-linear. Finally, while only 5% of LSTM-s2s no att. instances generalized to perfect hierar (and none to linear), L confirms their preference for the hierar hypothesis.
Composition-or-Memorization In this task, we set the number of primitives N = 40 and vary the number of compositional examples M ∈ {6, 24, 36} seen during training. Results are reported in Table 1d. First, we observe that FPA is only informative for CNN-s2s when trained with a large M. Indeed, for M = 6, CNN-s2s does not infer any of the candidate rules. However, according to description length L, we note a significant preference for mem over comp. The more compositional examples CNN-based learners see during training, the more biased they become toward comp. The remaining learners have zero FPA for both candidate rules. However, according to description length, LSTM-based learners have preferences similar to CNN-s2s, although weaker. That is, they show a preference for mem for low M, which declines in favor of comp as M increases. In contrast, Transformers show a strong bias for mem for all tested M.
Overall, across all the above experiments, we see that seq2seq learners demonstrate strikingly different biases. In many cases, these biases lead to non-trivial generalizations when facing ambiguity in the training data. This spans tasks that probe for memorization, arithmetic, hierarchical, and compositional reasoning. We found that a single example is sufficient for LSTM-based learners to learn counting, addition, and multiplication. Moreover, within the same task, they can switch from one explanation to another, depending on the training example length, with Addition-or-Multiplication being the task where this switch happens twice. In contrast, CNN-s2s and Transformers show a strong bias toward memorization. Furthermore, all learners except for CNN-s2s demonstrate a strong bias toward the hierarchical behavior. In the task of compositional generalization, CNN-s2s shows a strong bias toward compositional reasoning that appears after a few compositional training examples. On the other hand, Transformers show a preference for memorization over compositional generalization.
We see that the conclusions derived from comparing the description length of the candidate rules are in agreement with the results under accuracy-based metrics, but provide a more nuanced picture.
Robustness to hyper-parameters We observe that learners’ biases depend, in many cases, on the length/number of the input examples. In the Appendix, we examine the impact of other hyper-parameters. In particular, we study the impact of (1) learners’ architecture size, by varying the number of layers, hidden and embedding sizes, and (2) dropout probability. Our results show that in some cases a learner’s architecture size can influence the strength of its inductive biases, but rarely modifies them: among the 136 tested settings, we observe only 3 cases of switching the preferences. We also found, in line with (Arpit et al., 2017), that large dropout probabilities can prevent mem-type generalization. Finally, in the Appendix we show that a variant of Transformer learners, namely the joint source-target self-attention learner (He et al., 2018; Fonollosa et al., 2019), displays the same preferences as the standard Transformer learners. This variant resembles the “decoder-only” architecture used in language modeling (Radford et al., 2019; Brown et al., 2020). This result demonstrates that our tasks and bias measures could be applied to studying inductive biases of language model architectures.
6 RELATED WORK
Dessì & Baroni (2019) found that, unlike LSTM-s2s learners, CNN-s2s can perform compositional generalization on SCAN. Our experiments indicate that this only happens when enough compositional examples are provided in the training. Moreover, in such a case, attention-enabled LSTM-s2s also start to prefer compositional generalization over memorization.
McCoy et al. (2020) studied inductive biases of recurrent neural architectures in two synthetic tasks, English question formation and tense inflection. They found that only tree-based architectures show robust preference for hierarchical reasoning, in contrast to LSTM-s2s learners that generalized linearly. Our experiments on the hyperparameter robustness, reported in Appendix, indicate that the preferences over linear/hierarchical reasoning are strongly affected by the dropout probability, with learners shifting to linear behavior with low probabilities. As McCoy et al. (2020) experimented with a low dropout probability of 0.1, we believe this explains the misalignment of the conclusions.
Overall, our study shows that inductive biases are more complicated than they seemed in these prior works, and a more careful analysis is crucial. We believe that our extremely controlled setup with very few confounds is a good addition to those studies.
Another line of research investigates learners’ capabilities theoretically, that is, the classes of hypotheses that a learner can discover (Siegelmann & Sontag, 1992; Suzgun et al., 2019; Merrill et al., 2020). For example, Weiss et al. (2018) demonstrated that LSTM cells can count. In turn, we demonstrate that LSTM-s2s learners are not only capable of, but also biased toward, arithmetic behavior.
7 DISCUSSION AND CONCLUSION
In this work, we studied inductive biases of standard seq2seq learners, Transformer-, LSTM-, and CNN-based. To do so, we introduced four new tasks, which allowed us to cover an interesting spectrum of behaviors useful for language learning. In particular, we considered arithmetic, hierarchical, and compositional “reasoning”. Next, we connected the problem of finding and measuring inductive biases to Solomonoff’s theory of induction and proposed to use a dataset’s description length under a learner as a tool for sensitive measurement of inductive biases.
In our experiments, we found that the seq2seq learners have strikingly different inductive biases and some of them generalize non-trivially when facing ambiguity. For instance, a single training example is sufficient for LSTM-based learners to learn perfectly how to count, to add, and to multiply by a constant. Transformers and, to a lesser degree, LSTM-s2s demonstrated preferences for the hierarchical bias, a bias that has been argued to govern children’s acquisition of syntax. Interestingly, such biases arose with no explicit wiring for them. Our results thus support Elman et al. (1998)’s theory, which states that humans’ inductive biases can arise from low-level architectural constraints in the brain with no need for an explicit encoding of linguistic structure. However, how the brain or, more generally, a learner is wired to admit a specific inductive bias is still an important open question.
Across our experiments, we also observed that description length is consistent with “intuitive” measurements of inductive biases, and, at the same time, it turned out to be more sensitive. This also indicates that, in the presence of ambiguity in the training data, a learner is more likely to follow the alternative with the shorter description length (i.e. the simplest one) when applied on unseen data, showing consistency with the prescriptions of the theory of induction (Solomonoff, 1964). A similar simplicity preference is argued to play a role in human language acquisition (Perfors et al., 2011).
Our work provides simple tools to investigate learners’ biases. We first show that FPA is an intuitive measure to study biases when provided with simple tasks. Second, we present description length as a robust measure to fairly compare learners’ biases. This metric considers learners’ size and their ease of learning as opposed to accuracy-based metrics. Besides, it is a model- and task-agnostic measure that succeeds in unveiling learners’ biases even when presented with more complex tasks with spurious correlations.
Our findings can guide architecture selection in low-data regimes, where inductive biases might have a higher influence on a model’s generalization performance. Large sparse datasets can also benefit from predictable behavior in few-shot scenarios akin to what we consider.
Finally, our results demonstrate that relatively large deep learning models can generalize non-trivially from as little as one example – as long as the task is aligned with their inductive biases. We believe this should reinforce interest in future work on injecting useful inductive biases in our learners and, we hope, our findings and setup can provide a fertile ground for such work.
ACKNOWLEDGEMENTS
The authors are grateful to Marco Baroni, Emmanuel Dupoux, Emmanuel Chemla and participants of the EViL seminar for their feedback on our work.
A CAN SEQ2SEQ LEARNERS MULTIPLY BY 3?
In our experiments on the Multiply-or-Add task, we saw that LSTM-s2s learners are able to learn to multiply by 2 from a single example. A natural further question is whether these learners can learn to multiply by larger numbers. Here we only provide a preliminary study, in the hope of inspiring more focused studies.
We build a task that is similar to Multiply-or-Add, but centered around multiplication by 3 instead of 2. The single training example represents a mapping of an input string al to an output string b3l. As test inputs, we use am with m coming from an interval [l − 3, l + 3].
Since 3x can be represented as a combination of addition and multiplication in several ways (3x = x + 2x = 2x + x), we consider 4 different candidate rules. As before, by mem we denote the constant mapping from am → b3l. mul1 represents the mapping am → b2l+m. mul2 corresponds to am → bl+2m and mul3 denotes am → b3m. The “explanation” mul1 is akin to the add rule in the Multiply-or-Add task. We use the same hyperparameters and training procedure described in Section 4.
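Written out explicitly, the four candidate explanations predict the following outputs for a test input am, given the single training example al → b3l; the helper below is purely illustrative.

```python
def multiply_by_three_candidates(l: int, m: int) -> dict:
    """Predicted output for the input a^m under each candidate rule."""
    return {
        "mem":  "b" * (3 * l),      # constant mapping: ignore m, repeat the training output
        "mul1": "b" * (2 * l + m),  # 3x decomposed as 2l + m (akin to add in Multiply-or-Add)
        "mul2": "b" * (l + 2 * m),  # 3x decomposed as l + 2m
        "mul3": "b" * (3 * m),      # pure multiplication by 3
    }
```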
We report the results in Table 2. Like the results observed in the Multiply-or-Add task, both CNN-s2s and Transformer learners show a strong preference for the mem rule while LSTM-based learners switch their generalization according to the length of the training example l. Indeed, for CNN-s2s and Transformer, we note an FPA-mem>0.97 independently of l (with L-mem significantly lower than others). LSTM-s2s att. learners start inferring the mem rule for l = 5 (FPA=0.64, L=2.44), then switch to comparable preference for mul2 and mul3 when l = 10, and finally show a significant bias toward the mul3 hypothesis for l ∈ {15, 20} (e.g. FPA-mul3=0.76 for l = 15). LSTM-s2s no att. learners are also subject to a similar switch of preference. That is, for l = 5, these learners have a significant bias toward mul1 (FPA=0.49). Strikingly, when l = 10, 92% LSTM-s2s no att. learners inferred perfectly the mul2 rule after one training example. Finally, we observe again another switch to approximate mul3 for l ∈ {15, 20}.
Overall, while CNN-s2s and Transformer learners show a significant and robust bias toward mem, LSTM-based learners generalize differently depending on the training input length. In particular, our results suggest that these learners avoid adding very large integers by switching to the multiplicative explanation in those cases. Answering our initial question, we see that LSTM-based learners can learn to multiply by 3 from a single example.
5Only 21% of learners succeeded in learning the training example in this setting. 6Only 51% of learners succeeded in learning the training example in this setting.
B ROBUSTNESS TO CHANGES IN ARCHITECTURE
In this section, we examine how changing different hyper-parameters affects learners’ preferences for memorization, arithmetic, hierarchical and compositional reasoning. In particular, we vary the number of layers, hidden and embedding sizes of the different learners and test their generalization on Count-or-Memorization, Add-or-Multiply, Hierarchical-or-Linear, and Composition-or-Memorization tasks.
In all the experiments, we fix l = 40 and l = 20 for Count-or-Memorization and Add-or-Multiply respectively, d = 4 for Hierarchical-or-Linear and M = 36 for Composition-or-Memorization. Finally, we keep the same training and evaluation procedure as detailed in Section 4.2 of the main text. However, we use 20 different random seeds instead of 100.
B.1 NUMBER OF HIDDEN LAYERS (NLayer)
We experiment with the standard seq2seq learners described in the main paper and vary the number of layers NLayer ∈ {1, 3, 10}. Results are reported in Table 3.
First, when looking at the interplay between mem and count (Table 3a), we observe that, independently of NLayer, more than 97% of CNN-s2s and Transformer learners inferred perfectly the mem rule (i.e. output b40 for any given input am). Further, although the preference for count decreases with the increase of NLayer, LSTM-s2s att. learners display in all cases a significant bias toward count, with a large FPA and an L-count significantly lower than L-mem. However, we note a decline of the bias toward count when considering LSTM-s2s no att. learners. For NLayer = 1, 100% of the seeds generalize to perfect count, versus 6% for NLayer = 10. Note that this lower preference for count is accompanied by an increase in preference for mem. However, there is no significant switch of preferences according to L.
Second, we consider the Add-or-Multiply task, where we examine the three generalization hypotheses add, mul and mem. Results are reported in Table 3b. Similarly to the previous task, Transformer and CNN-s2s learners are robust to the change in the number of layers. They perfectly follow the mem rule with FPA-mem=1.00. However, LSTM-based learners show a more complex behavior: while single-layer LSTM-s2s att. and no att. learners demonstrate a considerable bias for mul (FPA-mul>0.94), this bias fades in favor of add and mem for larger architectures.
Third, in Table 3c, we observe that larger learners are slightly less biased toward hierar and linear. However, we do not observe any switch of preferences. That is, across different Nlayers, CNN-s2s learners prefer the linear rule, whereas Transformers and, to a lesser degree, LSTM-based learners show a significant preference for the hierar rule.
Finally, we do not observe an impact of NLayer on the Composition-or-Memorization task (see Table 3d), demonstrating the robustness of biases w.r.t. learners’ size.
In sum, we note little impact of NLayer on learners’ generalizations. Indeed, LSTM-based learners can learn to perform counting from a single training example, even when experimenting with 10-layer architectures. For most tested NLayer , they favor mul and add over mem. In contrast, Transformer and CNN-s2s perform systematic memorization of the single training example. Furthermore, independently of NLayer , Transformer and LSTM-based learners show a bias toward the hierar hypothesis over the linear one, while CNN-based learners do the opposite. Finally, the compositional experiments further support how inductive biases are barely influenced by the change of the number of layers.
B.2 HIDDEN SIZE (SHidden)
We experimented in the main text with standard seq2seq learners when hidden size SHidden = 512. In this section, we look at the effect of SHidden varying it in {128, 512, 1024}. We report learners performances in Table 4.
First, Table 4a demonstrates the minor effect hidden size has on learners’ counting/memorization performance. Indeed, for any given SHidden, between 84% and 100% of LSTM-based learners learn perfect counting after only one training example. Similarly, and with even lower variation, more than 91% of Transformer and CNN-s2s learners memorize the single training example, outputting b40 for any given am.
The same observation can be made when studying the interplay between the hierar and linear biases (Table 4c) and the comp and mem biases (Table 4d). Concretely, Tables 4c and 4d show that learners’ generalizations are stable across SHidden values, with the exception of LSTM-based learners that lose significance for some tested SHidden values. Yet, there is no switch of preference.
Finally, as demonstrated in Table 4b, all Transformer and CNN-s2s learners perform perfect memorization when tested on the Add-or-Multiply task independently of their hidden sizes. Both LSTM-based learners are significantly biased toward mul for SHidden ∈ {512, 1024}. However, when experimenting with smaller
SHidden (=128), we detect a switch of preference for LSTM-s2s no att. learners. The latter start approximating add-type rule (with significantly lower L). Lastly, we do not distinguish any significant difference between add and mul for LSTM-s2s att. when SHidden = 128.
Taken together, learners’ biases are quite robust to SHidden variations. We however note a switch of preference from mul to add for LSTM-s2s no att. learners when decreasing SHidden. Furthermore, we see a loss of significant preference in five distinct | 1. What are the main contributions and findings of the paper regarding inductive biases in seq2seq architectures?
2. What are the strengths of the proposed metric in evaluating memorization and generalization capabilities of models?
3. Can you provide insights into the surprising observation that SGD performs better than adaptive variants in some cases?
4. How does the tendency of large dropouts to hurt memorization impact the performance of transformer models in various tasks?
5. Are there any specific values or combinations of dropout and other hyperparameters that could lead to a counting bias in transformer models? | Review | Review
The paper studies inductive biases that are encoded in three main seq2seq architectures: LSTMs, CNNs, and Transformers. Using one existing metric (fraction of perfect agreement) and one proposed metric based on description length, and four dichotomy-like tasks, the authors show that, among other things, CNNs and Transformers are strongly biased to memorize training data, while LSTMs behave more nuancedly depending on input length and dropout. Unlike the first metric, the proposed metric takes values in a wider and more fine-grained range; the results are correlated for both of them. I also appreciate the attention to hyperparameter tuning and the investigation of hyperparameter effects in the experiments. In general, the manuscript is well written and, apart from a few minor questions, can be accepted in its present form.
Questions:
SGD was found to often generalize better than its adaptive variants, like Adam (e.g. Wilson et al. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems. 2017), yet in your experiments there seems to be an opposite effect of changing the optimizer (Appendix C). Could you comment on why this is the case?
Regarding the tendency of large dropouts to hurt memorization: to what extent does this help the peer bias in a task? It seems that hindering memorization causes a complementary increase in count or add-mul ability (Table 6). Is there a value for dropout (or a combination with other hyperparameters) at which Transformers would start showing a counting bias?
Minor:
please use alphabetic literature sorting
UPDATE: I thank the authors for their detailed replies and running additional experiments. This resolves my questions and I'd keep my recommendation to accept the paper. |
ICLR | Title
What they do when in doubt: a study of inductive biases in seq2seq learners
Abstract
Sequence-to-sequence (seq2seq) learners are widely used, but we still have only limited knowledge about what inductive biases shape the way they generalize. We address that by investigating how popular seq2seq learners generalize in tasks that have high ambiguity in the training data. We use four new tasks to study learners’ preferences for memorization, arithmetic, hierarchical, and compositional reasoning. Further, we connect to Solomonoff’s theory of induction and propose to use description length as a principled and sensitive measure of inductive biases. In our experimental study, we find that LSTM-based learners can learn to perform counting, addition, and multiplication by a constant from a single training example. Furthermore, Transformer and LSTM-based learners show a bias toward the hierarchical induction over the linear one, while CNN-based learners prefer the opposite. The latter also show a bias toward a compositional generalization over memorization. Finally, across all our experiments, description length proved to be a sensitive measure of inductive biases.
N/A
Sequence-to-sequence (seq2seq) learners are widely used, but we still have only limited knowledge about what inductive biases shape the way they generalize. We address that by investigating how popular seq2seq learners generalize in tasks that have high ambiguity in the training data. We use four new tasks to study learners’ preferences for memorization, arithmetic, hierarchical, and compositional reasoning. Further, we connect to Solomonoff’s theory of induction and propose to use description length as a principled and sensitive measure of inductive biases. In our experimental study, we find that LSTM-based learners can learn to perform counting, addition, and multiplication by a constant from a single training example. Furthermore, Transformer and LSTM-based learners show a bias toward the hierarchical induction over the linear one, while CNN-based learners prefer the opposite. The latter also show a bias toward a compositional generalization over memorization. Finally, across all our experiments, description length proved to be a sensitive measure of inductive biases.
1 INTRODUCTION
Sequence-to-sequence (seq2seq) learners (Sutskever et al., 2014) demonstrated remarkable performance in machine translation, story generation, and open-domain dialog (Sutskever et al., 2014; Fan et al., 2018; Adiwardana et al., 2020). Yet, these models have been criticized for requiring a tremendous amount of data and being unable to generalize systematically (Dupoux, 2018; Loula et al., 2018; Lake & Baroni, 2017; Bastings et al., 2018). In contrast, humans rely on their inductive biases to generalize from a limited amount of data (Chomsky, 1965; Lake et al., 2019). Due to the centrality of humans’ biases in language learning, several works have studied inductive biases of seq2seq models and connected their poor generalization to the lack of the “right” biases (Lake & Baroni, 2017; Lake et al., 2019).
In this work, we focus on studying inductive biases of seq2seq models. We start from an observation that, generally, multiple explanations can be consistent with a limited training set, each leading to different predictions on unseen data. A learner might prefer one type of explanations over another in a systematic way, as a result of its inductive biases (Ritter et al., 2017; Feinman & Lake, 2018).
To illustrate the setup we work in, consider a quiz-like question: if f(3) maps to 6, what does f(4) map to? The “training” example is consistent with the following answers: 6 (f(x) ≡ 6); 7 (f(x) = x + 3); 8 (f(x) = 2 · x); any number z, since we always can construct a function such that f(3) = 6 and f(4) = z. By analyzing the learner’s output on this new input, we can infer its biases.
This example demonstrates how biases of learners are studied through the lenses of the poverty of the stimulus principle (Chomsky, 1965; 1980): if nothing in the training data indicates that a learner should generalize in a certain way, but it does nonetheless, then this is due to the biases of the learner. Inspired by the work of Zhang et al. (2019) in the image domain, we take this principle to the extreme and study biases of seq2seq learners in the regime of very few training examples, often as little as one. Under this setup, we propose four new synthetic tasks that probe seq2seq learners’ preferences to memorization-, arithmetic-, hierarchical- and compositional-based “reasoning”.
∗Equal contribution.
Next, we connect to the ideas of Solomonoff’s theory of induction (Solomonoff, 1964) and Minimum Description Length (Rissanen, 1978; Grunwald, 2004) and propose to use description length, under a learner, as a principled measure of its inductive biases.
Our experimental study1 shows that the standard seq2seq learners have strikingly different inductive biases. We find that LSTM-based learners can learn non-trivial counting-, multiplication-, and addition-based rules from as little as one example. CNN-based seq2seq learners would prefer linear over hierarchical generalizations, while LSTM-based ones and Transformers would do just the opposite. When investigating the compositional reasoning, description length proved to be a sensitive measure. Equipped with it, we found that CNN-, and, to a lesser degree, LSTM-based learners prefer compositional generalization over memorization when provided with enough composite examples. In turn, Transformers show a strong bias toward memorization.
2 SEARCHING FOR INDUCTIVE BIASES
To formalize the way we look for inductive biases of a learner M, we consider a training dataset of input/output pairs, T = {xi, yi}, 1 ≤ i ≤ n, and a hold-out set of inputs, H = {xi}, n + 1 ≤ i ≤ k. W.l.o.g., we assume that there are two candidate “rules” that explain the training data but do not coincide on the hold-out data: C1(xi) = C2(xi) = yi for 1 ≤ i ≤ n, and ∃i, n + 1 ≤ i ≤ k, such that C1(xi) ≠ C2(xi). To compare the preferences of a learner M toward those two rules, we fit the learner on the training data T and then compare its predictions on the hold-out data H to the outputs of the rules. We refer to this approach as “intuitive”. Usually, the measures of similarity between the outputs are task-specific: McCoy et al. (2020) used accuracy of the first term, Zhang et al. (2019) used correlation and MSE, and Lake & Baroni (2017) used accuracy calculated on the entire output sequence.
We too start with an accuracy-based measure. We define the fraction of perfect agreement (FPA) between a learner M and a candidate generalization rule C as the fraction of seeds that generalize perfectly in agreement with that rule on the hold-out set H. The larger the FPA of M w.r.t. C, the more biased M is toward C. However, FPA does not account for imperfect generalization, nor does it allow a direct comparison between two candidate rules when both are dominated by a third candidate rule. Hence, below we propose a principled approach based on description length.
Description Length and Inductive Biases At the core of the theory of induction (Solomonoff, 1964) is the question of continuation of a finite string that is very similar to our setup. Indeed, we can easily re-formulate our motivating example as a string continuation problem: “3→ 6; 4→”. The solution proposed by Solomonoff (1964) is to select the continuation that admits “the simplest explanation” of the entire string, i.e. that is produced by programs of the shortest length (description length).
Our intuition is that when a continuation is “simple” for a learner, then this learner is biased toward it. We consider a learner M to be biased toward C1 over C2 if the training set and its extension according to C1 have a shorter description length (for M) compared to that of C2. Denoting the description length of a dataset D under the learner M as LM(D), we hypothesise that if LM({C1(xi) : 1 ≤ i ≤ k}) < LM({C2(xi) : 1 ≤ i ≤ k}), then M is biased toward C1.
Calculating Description Length To find the description length of data under a fixed learner, we use the online (prequential) code (Rissanen, 1978; Grunwald, 2004; Blier & Ollivier, 2018).
The problem of calculating LM(D), D = {xi, yi}_{i=1}^k, is considered as a problem of transferring outputs yi one-by-one, in a compressed form, between two parties, Alice (sender) and Bob (receiver). Alice has the entire dataset {xi, yi}, while Bob only has inputs {xi}. Before the transmission starts, both parties agree on the initialization of the model M, the order of the inputs {x}, random seeds, and the details of the learning procedure. Outputs {yi} are sequences of tokens from a vocabulary V. W.l.o.g. we fix some order over {x}. We assume that, given x, the learner M produces a probability distribution over the space of the outputs y, pM(y|x).
1Code used in the experiments can be found at https://github.com/facebookresearch/FIND.
The very first output y1 can be sent by using not more than c = |y1| · log |V| nats, using a naïve encoding.2 After that, both Alice and Bob update their learners using the example (x1, y1), available to both of them, and get identical instances of M1. Further transfer is done iteratively under the invariant that both Alice and Bob start every step t with exactly the same learners Mt−1 and finish with identical Mt. At step t, Alice would use Mt−1 to encode the next output yt. This can be done using (−log pMt−1(yt|xt)) nats (MacKay, 2003). Since Bob has exactly the same model, he can decode the message to obtain yt and use the new pair (xt, yt) to update his model and get Mt. Alice also updates her model, and proceeds to sending the next yt+1 (if any), encoding it with the help of Mt. The cumulative number of nats transmitted is:
LM(D) = − ∑_{t=2}^{k} log pMt−1(yt|xt) + c. (1)
The obtained code length of Eq. 1 depends on the order in which y are transmitted and the procedure we use to update M. To account for that, we average out the data order by training with multiple random seeds. Further, for larger datasets, full re-training after adding a new example is impractical and, in such cases, examples can be transmitted in blocks.
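As a minimal sketch of the prequential computation behind Eq. 1, the function below iterates over blocks of examples, re-training from scratch before scoring each new block; train_from_scratch and negative_log_likelihood are hypothetical stand-ins for whatever training and scoring routines a given learner uses, not the exact implementation.

def description_length(blocks, train_from_scratch, negative_log_likelihood):
    # blocks: list of (inputs, outputs) blocks; the first block is the training set T.
    total_nats = 0.0
    transmitted = []
    for t, (inputs, outputs) in enumerate(blocks):
        if t > 0:
            # Both parties hold the learner fit on everything transmitted so far,
            # so the next block costs -log p(outputs | inputs) nats under it.
            model = train_from_scratch(transmitted)
            total_nats += negative_log_likelihood(model, inputs, outputs)
        # The first block is sent with the naive encoding; its cost c is
        # learner-independent and is subtracted in the experiments.
        transmitted.append((inputs, outputs))
    return total_nats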
If we measure the description length of the training data T shuffled with the hold-out data H , both datasets would have symmetric roles. However, there is a certain asymmetry in the extrapolation problem: we are looking for an extrapolation from T , not vice-versa. To break this symmetry, we always transmit outputs for the entire training data as the first block.
While LM(D) is seemingly different from the “intuitive” measures introduced before, we can illustrate their connection as follows. Consider a case where we first transmit the training outputs as the first block and all of the hold-out data outputs under C, C(H), as the second block. Then the description length is equal to cross-entropy of the trained learner on the hold-out data, recovering a process akin to the “intuitive” measuring of inductive biases. With smaller blocks, the description length also catches whether a learner is capable of finding regularity in the data fast, with few data points; hence it also encodes the speed-of-learning ideas for measuring inductive biases (Chaabouni et al., 2019).
Finally, the description length has three more attractive properties when measuring inductive biases: (a) it is domain-independent, i.e. can be applied for instance in the image domain, (b) it allows comparisons across models that account for model complexity, and (c) it enables direct comparison between two candidate rules (as we will show in Section 5).
3 TASKS
We describe four tasks that we use to study inductive biases of seq2seq learners. We select those tasks to cover different aspects of learners’ behavior. Each task has a highly ambiguous training set which is consistent with an infinite number of generalizations. We pre-select several candidate rules highlighting biases that are useful for language processing and known to exist in humans, or are otherwise reasonable. Our experiments show that these rules cover many cases of the learners’ behavior.
The first two tasks study biases in arithmetic reasoning: Count-or-Memorization quantifies learners’ preference for counting vs. a simple memorization and Add-or-Multiply further probes the learners’ preferences in arithmetic operations. We believe these tasks are interesting, as counting is needed in some NLP problems like processing linearized parse trees (Weiss et al., 2018). The third task, Hierarchical-or-Linear, contrasts hierarchical and linear inductive reasoning. The hierarchical reasoning bias is believed to be fundamental in learning some syntactical rules in human acquisition of syntax (Chomsky, 1965; 1980). Finally, with the Composition-or-Memorization task, we investigate biases for systematic compositionality, which are central for human capabilities in language. Figure 1 illustrates these four tasks.
2As we are interested in comparing candidate generalization rules, the value of the additive constant c is not important, as it is learner- and candidate-independent. In experiments, we subtract it from all measurements.
Figure 1: Illustration of the four tasks: Count-or-Memorization, Add-or-Multiply, Hierarchical-or-Linear, and Composition-or-Memorization.
By a^k we denote a sequence that contains token a repeated k times. For training, we represent sequences in a standard way: the tokens are one-hot-encoded separately, and we append a special end-of-sequence token to each sequence. Input and output vocabularies are disjoint.
Count-or-Memorization: In this task, we contrast learners’ preferences for counting vs. memorization. We train models to fit a single training example with input a^l and output b^l (i.e., to perform the mapping a^l → b^l) and test it on a^m with m ∈ [l−10, l+10]. If a learner learns the constant function, outputting b^l independently of its inputs, then it follows the mem strategy. On the other hand, if it generalizes to the a^m → b^m mapping, then the learner is biased toward the count strategy.
Add-or-Multiply: This task is akin to the motivating example in Section 1. The single training example represents a mapping of an input string a^l to an output string b^{2l}. As test inputs, we generate a^m for m in the interval [l − 3, l + 3]. We consider the learned rule to be consistent with mul if for all m, the input/output pairs are consistent with a^m → b^{2m}. Similarly, if they are consistent with a^m → b^{m+l}, we say that the learner follows the addition rule, add. Finally, the learner can learn a constant mapping a^m → b^{2l} for any m. Again, we call this rule mem.
Hierarchical-or-Linear: For a fixed depth d, we train learners on four training examples x^d y x^d → y where x, y ∈ {a, b}.3 Each training example has a nested structure, where d defines its depth. A learner with a hierarchical bias (hierar) would output the middle symbol. We also consider the linear rule (linear) in which the learner outputs the (d+1)th symbol of its input. To probe learners’ biases, we test them on inputs with different depths m ∈ [d−2, d+2]. Note that to examine the linear rule (i.e. if the learner outputs the (d+1)th symbol of any test input of depth m), we need m ≥ d/2. Similar to the previous tasks, there is no vocabulary sharing between a model’s inputs and outputs (input and output tokens a and b are different).
Composition-or-Memorization: We take inspiration from SCAN (Lake & Baroni, 2017), a benchmark used for studying systematic generalization of seq2seq learners.4 The input vocabulary has N symbols a_i that are one-to-one mapped into N output symbols b_i (i.e., a_i → b_i). In addition, there is a modifier token thrice: when thrice precedes an input symbol a_i, the corresponding output is repeated three times: thrice a_i → b_i b_i b_i. We train a learner on all non-compositional examples (a_i → b_i) and M (M < N) compositional examples (thrice a_i → b_i b_i b_i). At test time, we feed the learner with the remaining compositional examples (thrice a_i, i > M). If the learner generalizes to the mapping thrice a_i → b_i b_i b_i for i > M, we consider it to be biased toward compositional reasoning (comp). As an alternative generalization, we consider a mapping where all inputs containing a_i are mapped into b_i: thrice a_i → b_i (i > M). We call this generalization memorization (mem).
3This mapping then consists of four combinations: a^d b a^d → b; a^d a a^d → a; b^d a b^d → a; and b^d b b^d → b. 4In Appendix, we report a study of inductive biases on the SCAN data.
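To make the data format concrete, the sketch below generates training and hold-out strings for two of the tasks; token identities, the use of plain Python strings, and the helper names are illustrative assumptions (in the experiments, input and output tokens come from disjoint vocabularies and are one-hot encoded with an end-of-sequence token).

def count_or_memorization(l, window=10):
    train = [("a" * l, "b" * l)]                                   # a^l -> b^l
    hold_out = ["a" * m for m in range(l - window, l + window + 1)]
    count = ["b" * m for m in range(l - window, l + window + 1)]   # a^m -> b^m
    mem = ["b" * l for _ in hold_out]                              # a^m -> b^l
    return train, hold_out, count, mem

def hierarchical_or_linear(d, window=2):
    train = [(x * d + y + x * d, y) for x in "ab" for y in "ab"]   # x^d y x^d -> y
    hold_out = [x * m + y + x * m
                for m in range(d - window, d + window + 1)
                for x in "ab" for y in "ab"]
    hierar = [s[len(s) // 2] for s in hold_out]                    # middle symbol
    linear = [s[d] for s in hold_out]                              # (d+1)th symbol
    return train, hold_out, hierar, linear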
4 METHODOLOGY
4.1 SEQUENCE-TO-SEQUENCE LEARNERS
We experiment with three standard seq2seq models: LSTM-based seq2seq (LSTM-s2s) (Sutskever et al., 2014), CNN-based seq2seq (CNN-s2s) (Gehring et al., 2017), and Transformer (Vaswani et al., 2017). All share a similar Encoder/Decoder architecture (Sutskever et al., 2014).
LSTM-s2s Both Encoder and Decoder are implemented as LSTM cells (Hochreiter & Schmidhuber, 1997). Encoder encodes its inputs incrementally from left to right. We experiment with architectures without (LSTM-s2s no att.) and with (LSTM-s2s att.) an attention mechanism (Bahdanau et al., 2014). For the first three tasks, both Encoder and Decoder are single-layer LSTM cells with hidden size of 512 and embedding of dimension 16.
CNN-s2s Encoder and Decoder are convolutional networks (LeCun et al., 1990), followed by GLU non-linearities (Dauphin et al., 2017) and an attention layer. To represent positions of input tokens, CNN-s2s uses learned positional embeddings. Encoder and Decoder networks have one layer with 512 filters and a kernel width of 3. We set the embedding size to 16.
Transformer Encoder and Decoder are implemented as a sequence of (self-)attention and feedforward layers. We use sinusoidal position embeddings. Both Encoder and Decoder contain one transformer layer. The attention modules have 8 heads, feed-forward layers have dimension of 512 and the embedding is of dimension 16.
In Appendix we report experiments where we vary hyperparameters of the learners.
4.2 TRAINING AND EVALUATION
For all tasks, we follow the same training procedure. We train with Adam optimizer (Kingma & Ba, 2014) for 3000 epochs. The learning rate starts at 10−5 and increases for the first 1000 warm-up updates till reaching 10−3. We include all available examples in a single batch. We use teacher forcing (Goodfellow et al., 2016) and set the dropout probability to 0.5. For each learner, we perform training and evaluation 100 times, changing random seeds. At generation, to calculate FPA, we select the next token greedily. We use the model implementations from fairseq (Ott et al., 2019).
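For concreteness, the warm-up schedule described above can be sketched as follows; the linear ramp and the constant rate after warm-up are our own reading of the description, not a reproduction of the exact fairseq scheduler options.

def learning_rate(update, warmup_updates=1000, lr_init=1e-5, lr_peak=1e-3):
    # Linear warm-up from 1e-5 to 1e-3 over the first 1000 updates; afterwards
    # we simply hold the peak rate (an assumption).
    if update < warmup_updates:
        return lr_init + (lr_peak - lr_init) * update / warmup_updates
    return lr_peak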
As discussed in Section 2, when calculating L, we use the training examples as the first transmitted block at t = 1. In Count-or-Memorization and Add-or-Multiply this block contains one example, in Hierarchical-or-Linear it has 4 examples, and in Composition-or-Memorization it has N + M examples. Next, we transmit examples obtained from the candidate rules in a randomized order, by blocks of size 1, 1, 1, and 4 for Count-or-Memorization, Add-or-Multiply, Composition-orMemorization, and Hierarchical-or-Linear respectively. At each step, the learner is re-trained from the same initialization, using the procedure and hyper-parameters as discussed above.
Our training setup is typical for seq2seq training. It does not include any additional pressure towards description length minimization. Description length is solely used as a measure of inductive biases.
5 EXPERIMENTS
Count-or-Memorization We investigate here learners’ biases toward count and mem rules. We provide a single example a^l → b^l as the training set, varying l ∈ {10, 20, 30, 40}. We report the learners’ performances in Table 1a. We observe that, independently of the length of the training example l, CNN-s2s and Transformer learners inferred perfectly the mem rule with FPA-mem > 0.90 (i.e. more than 90% of the random seeds output b^l for any given input a^m).
However, LSTM-based learners demonstrate a more complex behavior. With l = 10, both learners (with and without attention) exhibit a preference for mem. Indeed, while these learners rarely generalize perfectly to any of the hypotheses (0.0 FPA (no att.), 0.2/0.0 FPA for mem/count (att.)), they have significantly lower L-mem. As l increases, LSTM-based learners become more biased
toward count. Surprisingly, for l ≥ 30, most learner instances show sufficiently strong inductive biases to infer perfectly the non-trivial count hypothesis. With l = 40, 99% of random seeds of LSTM-s2s att. and all (100%) of LSTM-s2s no att. seeds generalized perfectly to count.
Further, we see that, although L shows similar trends, it has a higher sensitivity. For example, while both LSTM-based learners have a similar FPA with l = 40, L demonstrates that LSTM-s2s no att. has a stronger count bias.
Add-or-Multiply In this task, we examine learners’ generalization after training on the single example a^l → b^{2l}. We vary l ∈ {5, 10, 15, 20}. In Table 1b, we report FPA and L for the three generalization hypotheses, add, mul, and mem. We observe, similarly to the previous task, that CNN-s2s and Transformer learners always converge perfectly to memorization.
In contrast, LSTM-based learners show non-trivial generalizations. Examining first LSTM-s2s att., when l=5, we note that mem has a high FPA and an L considerably lower than others. This is consistent with the learner’s behavior in the Count-or-Memorization task. As we increase l, more interesting behavior emerges. First, L-mem decreases as l increases. Second, mul-type preference increases with l. Finally, L-add presents a U-shaped function of l. That is, for the medium example length l, the majority of learners switch to approximating the add rule (for l = 10). However, when l grows further, a considerable fraction of these learners start to follow a mul-type rule. Strikingly, 98% of LSTM-s2s att. seeds generalized perfectly to the non-trivial mul rule. As for LSTM-s2s no att., we do not observe a strong bias to infer any of the rules when l=5. However, when increasing l, the LSTM-s2s no att. behaves similarly to LSTM-s2s att.: at first it has a preference for add (FPA-add=0.95, for l=10) then for mul (e.g. FPA-mul=0.94, for l=20).
Hierarchical-or-Linear We look now at learners’ preference for either hierar or linear generalizations. The architectures we use were only able to consistently learn the training examples with the depth d not higher than 4. Hence, in this experiment, we set d to 4.
We report in Table 1c the learners’ FPA and L. We observe that CNN-s2s exhibits a strikingly different bias compared to all other learners with a perfect agreement with the linear rule. In contrast, Transformer learners show a clear preference for hierar with a high FPA (0.69) and a low L (1.21). Surprisingly, this preference increases with the embedding size and Transformers with embedding size ≥ 64 admit an FPA-hierar of 1.00 (see Appendix for more details). LSTM-s2s att. learners demonstrate also a similar preference for hierar with an FPA of 0.30 and a considerably lower L than L-hierar. Finally, while only 5% of LSTM-s2s no att. instances generalized to perfect hierar (and none to linear), L confirms their preference for the hierar hypothesis.
Composition-or-Memorization In this task, we set the number of primitives N = 40 and vary the number of compositional examples M ∈ {6, 24, 36} seen during training. Results are reported in Table 1d. First, we observe that FPA is only informative for CNN-s2s when trained with a large M. Indeed, for M = 6, CNN-s2s does not infer any of the candidate rules. However, according to description length L, we note a significant preference for mem over comp. The more compositional examples CNN-based learners see during training, the more biased they become toward comp. The remaining learners have zero FPA for both candidate rules. However, according to description length, LSTM-based learners have preferences similar to CNN-s2s, although weaker. That is, they show a preference for mem for low M, which declines in favor of comp as M increases. In contrast, Transformers show a strong bias for mem with all tested M.
Overall, across all the above experiments, we see that seq2seq learners demonstrate strikingly different biases. In many cases, these biases lead to non-trivial generalizations when facing ambiguity in the training data. This spans tasks that probe for memorization, arithmetic, hierarchical, and compositional reasoning. We found that a single example is sufficient for LSTM-based learners to learn counting, addition, and multiplication. Moreover, within the same task, they can switch from one explanation to another, depending on the training example length, with Addition-or-Multiplication being the task where this switch happens twice. In contrast, CNN-s2s and Transformers show a strong bias toward memorization. Furthermore, all learners except for CNN-s2s demonstrate a strong bias toward the hierarchical behavior. In the task of compositional generalization, CNN-s2s shows a strong bias toward compositional reasoning that appears after a few compositional training examples. On the other hand, Transformers show a preference for memorization over compositional generalization.
We see that the conclusions derived from comparing the description length of the candidate rules are in agreement with the results under accuracy-based metrics, but provide a more nuanced picture.
Robustness to hyper-parameters We observe that learners’ biases depend, in many cases, on the length/number of the input examples. In Appendix, we examine the impact of other hyper-parameters. In particular, we study the impact of (1) learners’ architecture size, by varying the number of layers, hidden and embedding sizes, and (2) dropout probability. Our results show that in some cases a learner’s architecture size can influence the strength of inductive biases, but rarely modifies them: among the 136 tested settings, we observe only 3 cases of switching the preferences. We also found, in line with Arpit et al. (2017), that large dropout probabilities can prevent mem-type generalization. Finally, in Appendix we show that a variant of Transformer learners, namely the joint source-target self-attention learner (He et al., 2018; Fonollosa et al., 2019), displays the same preferences as the standard Transformer learners. This variant resembles the “decoder-only” architecture used in language modeling (Radford et al., 2019; Brown et al., 2020). This result demonstrates that our tasks and bias measures could be applied for studying inductive biases of language model architectures.
6 RELATED WORK
Dessì & Baroni (2019) found that, unlike LSTM-s2s learners, CNN-s2s can perform compositional generalization on SCAN. Our experiments indicate that this only happens when enough compositional examples are provided in the training. Moreover, in such a case, attention-enabled LSTM-s2s also start to prefer compositional generalization over memorization.
McCoy et al. (2020) studied inductive biases of recurrent neural architectures in two synthetic tasks, English question formation and tense inflection. They found that only tree-based architectures show robust preference for hierarchical reasoning, in contrast to LSTM-s2s learners that generalized linearly. Our experiments on the hyperparameter robustness, reported in Appendix, indicate that the preferences over linear/hierarchical reasoning are strongly affected by the dropout probability, with learners shifting to linear behavior with low probabilities. As McCoy et al. (2020) experimented with a low dropout probability of 0.1, we believe this explains the misalignment of the conclusions.
Overall, our study shows that inductive biases are more complicated than they seemed in these prior works and that a more careful analysis is crucial. We believe that our extremely controlled setup, with very few confounds, is a good addition to those studies.
Another line of research investigates learners’ capabilities theoretically, that is, the classes of hypotheses that a learner can discover (Siegelmann & Sontag, 1992; Suzgun et al., 2019; Merrill et al., 2020). For example, Weiss et al. (2018) demonstrated that LSTM cells can count. In turn, we demonstrate that LSTM-s2s learners are not only capable of, but also biased toward, arithmetic behavior.
7 DISCUSSION AND CONCLUSION
In this work, we studied inductive biases of standard seq2seq learners, Transformer-, LSTM-, and CNN-based. To do so, we introduced four new tasks, which allowed us to cover an interesting spectrum of behaviors useful for language learning. In particular, we considered arithmetic, hierarchical, and compositional “reasoning”. Next, we connected the problem of finding and measuring inductive biases to Solomonoff’s theory of induction and proposed to use a dataset’s description length under a learner as a tool for sensitive measurement of inductive biases.
In our experiments, we found that the seq2seq learners have strikingly different inductive biases and some of them generalize non-trivially when facing ambiguity. For instance, a single training example is sufficient for LSTM-based learners to learn perfectly how to count, to add, and to multiply by a constant. Transformers and, to a lesser degree, LSTM-s2s demonstrated preferences for the hierarchical bias, a bias that has been argued to govern children’s acquisition of syntax. Interestingly, such biases arose with no explicit wiring for them. Our results thus support Elman et al. (1998)’s theory, which states that humans’ inductive biases can arise from low-level architectural constraints in the brain, with no need for an explicit encoding of linguistic structure. However, how the brain, or, more generally, a learner is wired to admit a specific inductive bias is still an important open question.
Across our experiments, we also observed that description length is consistent with “intuitive” measurements of inductive biases, and, at the same time, it turned out to be more sensitive. This also indicates that, in the presence of ambiguity in the training data, a learner is more likely to follow the alternative with the shorter description length (i.e. the simplest one) when applied on unseen data, showing consistency with the prescriptions of the theory of induction (Solomonoff, 1964). A similar simplicity preference is argued to play a role in human language acquisition (Perfors et al., 2011).
Our work provides simple tools to investigate learners’ biases. We first show that FPA is an intuitive measure to study biases when provided with simple tasks. Second, we present description length as a robust measure to fairly compare learners’ biases. This metric considers learners’ size and their ease of learning as opposed to accuracy-based metrics. Besides, it is a model- and task-agnostic measure that succeeds in unveiling learners’ biases even when presented with more complex tasks with spurious correlations.
Our findings can guide architecture selection in low-data regimes, where inductive biases might have a higher influence on a model’s generalization performance. Large sparse datasets can also benefit from predictable behavior in few-shot scenarios akin to what we consider.
Finally, our results demonstrate that relatively large deep learning models can generalize non-trivially from as little as one example – as long as the task is aligned with their inductive biases. We believe this should reinforce interest in future work on injecting useful inductive biases into our learners and, we hope, our findings and setup can provide a fertile ground for such work.
ACKNOWLEDGEMENTS
The authors are grateful to Marco Baroni, Emmanuel Dupoux, Emmanuel Chemla and participants of the EViL seminar for their feedback on our work.
A CAN SEQ2SEQ LEARNERS MULTIPLY BY 3?
In our experiments on the Multiply-or-Add task we saw that LSTM-s2s learners are able to learn to multiply by 2 from a single example. A natural further question is whether these learners can learn to multiply by larger numbers. Here we only provide a preliminary study, in the hope of inspiring more focused studies.
We build a task that is similar to Multiply-or-Add, but centered around multiplication by 3 instead of 2. The single training example represents a mapping of an input string a^l to an output string b^{3l}. As test inputs, we use a^m with m coming from an interval [l − 3, l + 3].
Since 3x can be represented as a combination of addition and multiplication in several ways (3x = x + 2x = 2x + x), we consider 4 different candidate rules. As before, by mem we denote the constant mapping a^m → b^{3l}. mul1 represents the mapping a^m → b^{2l+m}, mul2 corresponds to a^m → b^{l+2m}, and mul3 denotes a^m → b^{3m}. The “explanation” mul1 is akin to the add rule in the Multiply-or-Add task. We use the same hyperparameters and training procedure described in Section 4.
We report the results in Table 2. Like the results observed in the Multiply-or-Add task, both CNN-s2s and Transformer learners show a strong preference for the mem rule while LSTM-based learners switch their generalization according to the length of the training example l. Indeed, for CNN-s2s and Transformer, we note an FPA-mem>0.97 independently of l (with L-mem significantly lower than others). LSTM-s2s att. learners start inferring the mem rule for l = 5 (FPA=0.64, L=2.44), then switch to comparable preference for mul2 and mul3 when l = 10, and finally show a significant bias toward the mul3 hypothesis for l ∈ {15, 20} (e.g. FPA-mul3=0.76 for l = 15). LSTM-s2s no att. learners are also subject to a similar switch of preference. That is, for l = 5, these learners have a significant bias toward mul1 (FPA=0.49). Strikingly, when l = 10, 92% LSTM-s2s no att. learners inferred perfectly the mul2 rule after one training example. Finally, we observe again another switch to approximate mul3 for l ∈ {15, 20}.
Overall, while CNN-s2s and Transformer learners show a significant and robust bias toward mem, LSTM-based learners generalize differently depending on the training input length. In particular, our results suggest that these learners avoid adding very large integers by switching to the multiplicative explanation in those cases. Answering our initial question, we see that LSTM-based learners can learn to multiply by 3 from a single example.
5Only 21% of learners succeeded in learning the training example in this setting. 6Only 51% of learners succeeded in learning the training example in this setting.
B ROBUSTNESS TO CHANGES IN ARCHITECTURE
In this section, we examine how changing different hyper-parameters affects learners’ preferences for memorization, arithmetic, hierarchical and compositional reasoning. In particular, we vary the number of layers, hidden and embedding sizes of the different learners and test their generalization on Count-or-Memorization, Add-or-Multiply, Hierarchical-or-Linear, and Composition-or-Memorization tasks.
In all the experiments, we fix l = 40 and l = 20 for Count-or-Memorization and Add-or-Multiply respectively, d = 4 for Hierarchical-or-Linear and M = 36 for Composition-or-Memorization. Finally, we keep the same training and evaluation procedure as detailed in Section 4.2 of the main text. However, we use 20 different random seeds instead of 100.
B.1 NUMBER OF HIDDEN LAYERS (NLayer)
We experiment with the standard seq2seq learners described in the main paper and vary the number of layers NLayer ∈ {1, 3, 10}. Results are reported in Table 3.
First, when looking at the interplay between mem and count (Table 3a), we observe that, independently of NLayer, more than 97% of CNN-s2s and Transformer learners inferred perfectly the mem rule (i.e. output b^{40} for any given input a^m). Further, while the preference for count decreases as NLayer increases, LSTM-s2s att. learners display in all cases a significant bias toward count, with a large FPA and an L significantly lower than L-mem. However, we note a decline of the bias toward count when considering LSTM-s2s no att. learners. For NLayer = 1, 100% of the seeds generalize to perfect count, versus 6% for NLayer = 10. Note that this lower preference for count is accompanied by an increase of preference for mem. However, there is no significant switch of preferences according to L.
Second, we consider the Add-or-Multiply task, where we examine the three generalization hypotheses add, mul, and mem. Results are reported in Table 3b. Similarly to the previous task, Transformer and CNN-s2s learners are robust to the change in the number of layers. They perfectly follow the mem rule with FPA-mem=1.00. However, LSTM-based learners show a more complex behavior: while single-layer LSTM-s2s att. and no att. learners demonstrate a considerable bias for mul (FPA-mul>0.94), this bias fades in favor of add and mem for larger architectures.
Third, in Table 3c, we observe that larger learners are slightly less biased toward hierar and linear. However, we do not observe any switch of preferences. That is, across different NLayer values, CNN-s2s learners prefer the linear rule, whereas Transformers and, to a lesser degree, LSTM-based learners show a significant preference for the hierar rule.
Finally, we do not observe an impact of NLayer on the Composition-or-Memorization task (see Table 3d), demonstrating the robustness of biases w.r.t. learners’ size.
In sum, we note little impact of NLayer on learners’ generalizations. Indeed, LSTM-based learners can learn to perform counting from a single training example, even when experimenting with 10-layer architectures. For most tested NLayer , they favor mul and add over mem. In contrast, Transformer and CNN-s2s perform systematic memorization of the single training example. Furthermore, independently of NLayer , Transformer and LSTM-based learners show a bias toward the hierar hypothesis over the linear one, while CNN-based learners do the opposite. Finally, the compositional experiments further support how inductive biases are barely influenced by the change of the number of layers.
B.2 HIDDEN SIZE (SHidden)
We experimented in the main text with standard seq2seq learners when hidden size SHidden = 512. In this section, we look at the effect of SHidden varying it in {128, 512, 1024}. We report learners performances in Table 4.
First, Table 4a demonstrates the minor effect hidden size has on learners’ counting/memorization performance. Indeed, for any given SHidden, between 84% and 100% of LSTM-based learners learn perfect counting after only one training example. Similarly, and with even lower variation, more than 91% of Transformer and CNN-s2s learners memorize the single training example, outputting b^{40} for any given a^m.
The same observation can be made when studying the interplay between the hierar and linear biases (Table 4c) and the comp and mem biases (Table 4d). Concretely, Tables 4c and 4d show that learners’ generalizations are stable across SHidden values, with the exception of LSTM-based learners that lose significance for some tested SHidden values. Yet, there is no switch of preference.
Finally, as demonstrated in Table 4b, all Transformer and CNN-s2s learners perform perfect memorization when tested on the Add-or-Multiply task independently of their hidden sizes. Both LSTM-based learners are significantly biased toward mul for SHidden ∈ {512, 1024}. However, when experimenting with smaller
SHidden (=128), we detect a switch of preference for LSTM-s2s no att. learners. The latter start approximating an add-type rule (with a significantly lower L). Lastly, we do not observe any significant difference between add and mul for LSTM-s2s att. when SHidden = 128.
Taken together, learners’ biases are quite robust to SHidden variations. We however note a switch of preference from mul to add for LSTM-s2s no att. learners when decreasing SHidden. Furthermore, we see a loss of significant preference in five distinct | 1. What is the focus of the paper regarding sequence-to-sequence models?
2. What are the strengths of the proposed approach, particularly in comparison to other methods?
3. What are the weaknesses of the paper, especially regarding the experimental design and scope?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any suggestions or ideas for future research related to the paper's topic? | Review | Review
This work proposes to use description length (DL) to measure the seq2seq model's inductive bias.
Strength: It is clearly shown that DL gives smoother measurements than FPA. Also, the experiments are principled and well designed. And the conclusions are interesting and clear. I do believe that this framework can be utilized by future work in this direction to get a better understanding of seq2seq models.
Finally, the paper is well written, I am able to get the author's points.
Weakness: My major concern is the scale or completeness of the experiments. The authors concentrate on a training set of a few samples, which is far away from the usual large-data setting for LMs.
Moreover, the training data concerned only exhibit one or two kinds of bias, while in real data, there are usually various kinds of biases. I'm interested to see what the model will pick, when facing different kinds of structures in the data. For example, will the model favor easy rules than more complicated rules?
In addition to the different variants of seq2seq models, I'm also interested to see whether encoder-decoder models have different biases compared with pure LMs (decoder-only, e.g., GPT-2). |
ICLR | Title
A Technical and Normative Investigation of Social Bias Amplification
Abstract
The conversation around the fairness of machine learning models is growing and evolving. In this work, we focus on the issue of bias amplification: the tendency of models trained from data containing social biases to further amplify these biases. This problem is brought about by the algorithm, on top of the level of bias already present in the data. We make two main contributions regarding its measurement. First, building off of Zhao et al. (2017), we introduce and analyze a new, decoupled metric for measuring bias amplification, BiasAmp→, which possesses a number of attractive properties, including the ability to pinpoint the cause of bias amplification. Second, we thoroughly analyze and discuss the normative implications of this metric. We provide suggestions about its measurement by cautioning against predicting sensitive attributes, encouraging the use of confidence intervals due to fluctuations in the fairness of models across runs, and discussing what bias amplification means in the context of domains where labels either don’t exist at test time or correspond to uncertain future events. Throughout this paper, we work to provide a deeply interrogative look at the technical measurement of bias amplification, guided by our normative ideas of what we want it to encompass.
1 INTRODUCTION
The machine learning community is becoming increasingly cognizant of problems surrounding fairness and bias, and correspondingly a plethora of new algorithms and metrics are being proposed (see e.g., Mehrabi et al. (2019) for a review). The gatekeepers checking the systems to be deployed often take the form of fairness evaluation metrics, and it is vital that these be deeply investigated both technically and normatively. In this paper, we endeavor to do this for bias amplification. Bias amplification happens when a model exacerbates biases from the training data at test time. It is the result of the algorithm (Foulds et al., 2018), and unlike other forms of bias, cannot be solely attributed to the dataset.
To this end, we propose a new way of measuring bias amplification, BiasAmp→ 1, that builds off of a prior metric from Men Also Like Shopping (Zhao et al., 2017), which we will call BiasAmpMALS. Our metric’s technical composition aligns with the real-world qualities we want it to encompass, addressing a number of the previous metric’s shortcomings by being able to: 1) generalize beyond binary attributes, 2) take into account the base rates at which people of each attribute appear, and 3) disentangle the directions of amplification. Concretely, consider a visual dataset (Fig. 1) where each image has a label for the task, T, which is painting or not painting, and further is associated with a protected attribute, A, which is woman or man. If the gender of the person biases the prediction of the task, we consider this A→ T bias amplification; if the reverse happens, then T → A. In our normative discussion, we discuss a few topics. We consider whether predicting protected attributes is necessary in the first place; by not doing so, we can trivially remove T → A amplification. We also encourage the use of confidence intervals when using our metric because BiasAmp→, along with other fairness metrics, suffers from the Rashomon Effect (Breiman, 2001), or multiplicity of good models. In deep neural networks, random seeds have relatively little impact on accuracy; however, that is not the case for fairness, which is more brittle to randomness.
1The arrow in BiasAmp→ is meant to signify the direction that bias amplification is flowing, and not intended to be a claim about causality.
Notably, a trait of bias amplification is that it is not at odds with accuracy, unlike many other fairness metrics, because the goal of not amplifying biases and matching task-attribute correlations is aligned with that of accurate predictions. For example, imagine a dataset where the positive outcome is associated at a higher rate with group A than with group B. A classifier that achieves 100% accuracy at predicting the positive outcome is not amplifying bias; however, according to metrics like demographic parity, this perfect classifier is still perpetuating bias because it is predicting the positive label at different rates for both groups. While matching training correlations is desired in object detection where systems should perfectly predict the labels, we will explore the nuances of what this means in situations where the validity of the labels, and thus task-attribute correlations themselves, are up for debate. For example, in the risk prediction task which assesses someone’s likelihood of recidivism, the label represents whether someone with a set of input features ended up recidivating, but is not a steadfast indicator of what another person with the same input features will do. Here, we would not want to replicate the task-attribute correlations at test time, and it is important to keep this in mind when deciding what fairness metrics to apply. The notion of amplification also allows us to encapsulate the idea that systemic harms and biases can be more harmful than errors made without such a history (Bearman et al., 2009); for example, in images overclassifying women as cooking carries more of a negative connotation than overclassifying men as cooking.2 Distinguishing between which errors are more harmful than others is a pattern that can often be lifted from the training data.
To ground our work, we first distinguish what bias amplification captures that standard fairness metrics cannot, and then distinguish BiasAmp→ from BiasAmpMALS. Our key contributions are: 1) proposing a new way to measure bias amplification, addressing multiple shortcomings of prior work and allowing us to better diagnose where a model goes wrong, and 2) providing a technical analysis and normative discussion around the use of this measure in diverse settings, encouraging thoughtfulness with each application.
2 RELATED WORK
Fairness Measurements. Fairness is nebulous and context-dependent, and approaches to quantifying it (Verma & Rubin, 2018; Buolamwini & Gebru, 2018) include equalized odds (Hardt et al., 2016), equal opportunity (Hardt et al., 2016), demographic parity (Dwork et al., 2012; Kusner et al., 2017), fairness through awareness (Dwork et al., 2012; Kusner et al., 2017), fairness through unawareness (Grgic-Hlaca et al., 2016; Kusner et al., 2017), and treatment equality (Berk et al., 2017). We examine bias amplification, which is a type of group fairness where correlations are amplified.
Bias Amplification. Bias amplification has been measured by looking at binary classifications (Leino et al., 2019), GANs (Jain et al., 2020; Choi et al., 2020), and correlations (Zhao et al., 2017). Wang et al. (2019) measures this using dataset leakage and model leakage. The difference between these values is the level of bias amplification, but this is not a fair comparison because the
2We use the terms man and woman to refer to binarized socially-perceived gender expression, recognizing these labels are not inclusive, and in vision datasets are often assigned by annotators rather than self-disclosed.
attribute classifier gets discrete labels for the former but continuous model outputs for the latter. Jia et al. (2020) looks at output distributions like we do, but with a different formulation.
The Word Embedding Association Test (WEAT) (Caliskan et al., 2017) measures bias amplification in de-contextualized word embeddings, looking at correlations but not causations (Bolukbasi et al., 2016). However, with newer models like BERT and ELMo that have contextualized embeddings, WEAT does not work (May et al., 2019), so new techniques have been proposed incorporating context (Lu et al., 2019; Kuang & Davison, 2016). We use these models to measure the directional aspect of these amplifications, as well as to situate them in the broader world of bias amplification.
Directionality. Directionality of amplification has been observed in computer vision (Stock & Cisse, 2018) and language (Qian et al., 2019). It has also been studied with causality (Bhattacharya & Vogt, 2007; Wooldridge, 2016; Pearl, 2010; Middleton et al., 2016; Steiner & Yongnam, 2016). We take a deeper and more empirical approach.
Predictive Multiplicity. The Rashomon Effect (Breiman, 2001), or multiplicity of good models, has been studied in various contexts. The variables investigated that differ across good models include explanations (Hancox-Li, 2020), individual treatments (Marx et al., 2020; Pawelczyk et al., 2020), and variable importance (Fisher et al., 2019; Dong & Rudin, 2019). We build on these discoveries and investigate how fairness measurements also differ between equally “good” models.
3 EXISTING FAIRNESS METRICS
In this section we present existing fairness metrics and show how bias amplification can distinguish errors resulting from under- and overclassification in a way that others cannot, followed by a discussion of the shortcomings of BiasAmpMALS.
3.1 OVERVIEW OF EXISTING FAIRNESS METRICS
We begin with a review of existing fairness metrics in a concrete classification setting. We consider again the example from Fig. 1, where on this dataset women (a0) are correlated with painting (t0), and men (a1) with not painting (t1), such that there are N images each of (a0, t0) and (a1, t1) but only N/2 images of (a0, t1) and (a1, t0). A classifier trained to recognize painting on this data is likely to learn this association and over-predict painting on images of women and under-predict painting on images of men; however, algorithmic interventions may counteract this effect and in fact result in the opposite behavior.
Fig. 2 shows the behavior of fairness metrics under varying amounts of learned amplification of the correlation. The four fairness metrics are: False Positive Rate (FPR) and True Positive Rate (TPR) difference: the difference in false positive (true positive) rate of predicting the label t0 on images of a1 versus on images of a0 (Chouldechova, 2016; Hardt et al., 2016), accuracy difference: difference between the overall task prediction accuracy on images of a1 versus on images of a0 (Berk et al., 2017), and mean accuracy across subgroups: mean task prediction accuracy across the four image subgroups ((a0, t0), (a1, t0), (a0, t1), and (a1, t1)) (Buolamwini & Gebru, 2018). However, these metrics are not designed to account for the training correlations, and are unable to distinguish between cases of increased or decreased learned correlations, as seen in Fig. 2.
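For reference, the gap-style metrics above can be computed from binary predictions as in the sketch below; the array layout and function names are our own assumptions.

import numpy as np

def rate(y_true, y_pred, group_mask, true_label):
    # Fraction of examples predicted as t0 (label 1) among those in the group
    # whose ground-truth label equals true_label.
    sel = group_mask & (y_true == true_label)
    return y_pred[sel].mean() if sel.any() else float("nan")

def fpr_difference(y_true, y_pred, group):
    # y_true, y_pred: 0/1 arrays for the task t0; group: 0 for a0, 1 for a1.
    return rate(y_true, y_pred, group == 1, 0) - rate(y_true, y_pred, group == 0, 0)

def tpr_difference(y_true, y_pred, group):
    return rate(y_true, y_pred, group == 1, 1) - rate(y_true, y_pred, group == 0, 1)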
Zhao et al. (2017) introduced an alternative to these that explicitly captures the notion of bias amplification. Concretely, they consider P(A = a|T = t) of the training data as the fraction of times a protected attribute a ∈ A appears on images corresponding to task t ∈ T. They then compare this with the test-time predictions made by the model, P(Â = a|T̂ = t), or the number of times attribute a is predicted on images where the task is predicted as t, which allows them to measure bias amplification in the absence of any additional annotations on the hold-out set. Note that in this formulation they are assuming that the model is making predictions for both the task and the attribute. The full bias amplification metric (reformulated in our terms) is computed as:

$$
\text{BiasAmp}_{\text{MALS}} = \frac{1}{|\mathcal{T}|}\sum_{t=1}^{|\mathcal{T}|}\sum_{a=1}^{|\mathcal{A}|}\underbrace{\mathbb{1}\left(P(A_a = 1 \mid T_t = 1) > \frac{1}{|\mathcal{A}|}\right)}_{y(t,a)}\ \underbrace{\left(P(\hat{A}_a = 1 \mid \hat{T}_t = 1) - P(A_a = 1 \mid T_t = 1)\right)}_{\Delta(t,a)} \tag{1}
$$
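As a concrete illustration, a possible implementation of Eqn. 1 from binary indicator matrices is sketched below; this is our own reading of the formula (not the authors' released code), with the convention that the base conditionals come from training annotations and the predicted conditionals from test-set predictions.

```python
import numpy as np

def bias_amp_mals(A_train, T_train, A_pred, T_pred):
    """Sketch of BiasAmp_MALS (Eqn. 1).

    A_train, T_train: (n_train, |A|), (n_train, |T|) binary training annotations.
    A_pred,  T_pred:  (n_test, |A|), (n_test, |T|) binary test-set predictions
                      for the attribute and the task.
    """
    n_attr, n_task = A_train.shape[1], T_train.shape[1]
    total = 0.0
    for t in range(n_task):
        train_mask = T_train[:, t] == 1
        pred_mask = T_pred[:, t] == 1
        if not train_mask.any() or not pred_mask.any():
            continue
        for a in range(n_attr):
            p_train = A_train[train_mask, a].mean()   # P(A_a = 1 | T_t = 1)
            p_pred = A_pred[pred_mask, a].mean()      # P(A_hat_a = 1 | T_hat_t = 1)
            if p_train > 1.0 / n_attr:                # indicator y(t, a)
                total += p_pred - p_train             # Delta(t, a)
    return total / n_task
```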
Fig. 2 empirically demonstrates that this metric is able to capture the level of increasing bias amplification. (For consistency in comparison with prior metrics, we assume the model always correctly predicts the protected attribute A.) However, as we discuss in the next section, there are some properties of bias amplification that this metric is not able to capture: for example, it does not distinguish between errors in predicting the protected attribute versus errors in predicting the task. Thus we introduce a new metric (last graph of Fig. 2) which maintains the desirable properties from Zhao et al. (2017) while including a number of innovations.
3.2 SHORTCOMINGS OF BIASAMPMALS
Despite the advantages we just documented about BiasAmpMALS (Eqn. 1) and its ability to distinguish under- and overclassifications of training correlations, this metric also suffers from a number of shortcomings. To ground our discussion, we will work directly with the model outputs released by Zhao et al. (2017) from their Conditional Random Field (CRF) model on COCO (Lin et al., 2014), which has predictions for gender and objects detected for each image.
3.2.1 NON-BINARY ATTRIBUTES
The first shortcoming is that the metric assumes that the protected attributes are binary, limiting its use: the indicator variable y(t, a) implicitly chooses only one of the attributes a ∈ A to be associated with every task t. Consider a task t0 ∈ T such that a0 ∈ A is associated with it, but none of the other ai ∈ A are, where i ≠ 0. In this scenario, diff(t0, ai) is only considered when there is one other ai such that ai = ¬a, since diff(t, a) = −diff(t, ¬a). A simple addition of −diff for all ai’s when y is 0 ensures that when there are more than two groups, their bias amplification is also counted.
3.2.2 BASE RATES
The second shortcoming of BiasAmpMALS is the fact that the metric does not take into account the base rates of each attribute. Concretely, when determining in y(t, a) of Eqn. 1 whether the attribute a is correlated with the task t, P(A = a|T = t) is compared to 1/|A|. However, this assumes that all a’s within A are evenly distributed, which may not be the case. For example, in COCO there are about 2.5x as many men as women, so it would appear that most objects positively correlate with men simply by nature of there being an overrepresentation of men. Consider the object oven; BiasAmpMALS calculates P(A = man|T = oven) = 0.56 > 1/2 and thus considers this object to be correlated with men rather than women. However, computing P(A = man, T = oven) = 0.0103 < 0.0129 = P(A = man)P(T = oven) reveals that men are in fact not correlated with oven, and the seeming overrepresentation comes from the fact that men are overrepresented in the dataset more generally. Not surprisingly, the model trained on this data learns to associate women with ovens and underpredicts men with ovens at test time, i.e., P(Â = man|T̂ = oven) − P(A = man|T = oven) = −0.10. BiasAmpMALS erroneously counts this as inverse bias amplification.
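The distinction can be checked directly with the numbers quoted above; a tiny sketch of the two comparisons:

```python
# Numbers from the COCO "oven" example above.
p_man_given_oven   = 0.56     # P(A = man | T = oven)
p_man_and_oven     = 0.0103   # P(A = man, T = oven)
p_man_times_p_oven = 0.0129   # P(A = man) * P(T = oven)

# BiasAmp_MALS's check: compare the conditional to a uniform 1/|A|.
naive_check = p_man_given_oven > 1 / 2                    # True: "oven looks male-correlated"

# Base-rate-aware check: compare the joint to the product of marginals.
base_rate_check = p_man_and_oven > p_man_times_p_oven     # False: no positive correlation

print(naive_check, base_rate_check)   # True False
```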
3.2.3 ENTANGLING DIRECTIONS
Another shortcoming we observe is the inability to distinguish between different types of bias amplification. Zhao et al. (2017) discovers that “Technology oriented categories initially biased toward men such as keyboard ... have each increased their bias toward males by over 0.100.” Concretely, from Eqn. 1 P (A = man|T = keyboard) = .70 and P (Â = man|T̂ = keyboard) = .83, demonstrating an amplification of bias. However, the direction or cause of bias amplification remains unclear: is the presence of man in the image increasing the probability of predicting a keyboard, or vice versa? Looking more closely at the model’s disentangled predictions, we see that:
$$P(\hat{T} = \text{keyboard} \mid A = \text{man}) = 0.0020 < 0.0032 = P(T = \text{keyboard} \mid A = \text{man}) \tag{2}$$
$$P(\hat{A} = \text{man} \mid T = \text{keyboard}) = 0.78 > 0.70 = P(A = \text{man} \mid T = \text{keyboard}) \tag{3}$$
indicating that keyboards are under-predicted on images with men yet men are over-predicted on images with keyboards. Thus the root cause of this amplification appears to be in the gender predictor rather than the object detector. Such disentanglement allows us to properly focus algorithmic intervention efforts. This also highlights the need to consider the ground truth labels on the hold-out set when measuring bias amplification in addition to the predictions (since when considering only the predictions, it is impossible to decouple the different sources of bias).
3.3 THRESHOLD
We also fully replicate the original experiment from Zhao et al. (2017) using BiasAmpMALS on their model predictions and measure .040. However, we observe that “man” is being predicted at a higher rate (75.6%) than is actually present (71.2%). With this insight, we tune the decision threshold on the validation set such that the gender predictor is well-calibrated to be predicting the same percentage of images to have men as the dataset actually has. When we calculate BiasAmpMALS on these newly thresholded predictions for the test set, we see bias amplification drop from 0.040 to 0.001 just as a result of this threshold change, outperforming even the solution proposed in Zhao et al. (2017) of corpus-level constraints, which achieved a drop to only 0.021. Fairness can be quite sensitive to the threshold chosen (Chen & Wu, 2020), so the threshold should be selected carefully rather than using the default of .5. In Fig. 3 we show how the amount of bias amplification, as measured by BiasAmpMALS and BiasAmpT→A, changes as we vary the threshold, i.e., the proportion of people classified to be a man. We can see that when the threshold is chosen to be the one well-calibrated on the validation set rather than the default threshold, bias amplification is measured to be closer to zero for both metrics. From here on out, when a threshold is needed, we will pick it to be well-calibrated on the validation set. Although we do not take this approach, one could also imagine integrating bias amplification across this proportion in order to have a threshold-agnostic measure of bias amplification, similar to what is proposed by Chen & Wu (2020). We do not do this in our experiments because at deployment time, it is often the case that discrete predictions are required.
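A minimal sketch (our own, not the released code) of the calibration step described above, i.e., choosing the decision threshold on the validation set so that the fraction of images predicted to contain a man matches the fraction in the annotations, could look like:

```python
import numpy as np

def calibrate_threshold(val_scores, val_labels):
    """Pick a threshold so that the predicted rate of 'man' matches the annotated rate.

    val_scores: array of model scores for the 'man' class in [0, 1]
    val_labels: binary array, 1 = image annotated as containing a man
    """
    target_rate = np.mean(val_labels)
    # Predicting 'man' for the top target_rate fraction of scores is equivalent
    # to thresholding at the (1 - target_rate) quantile of the scores.
    return np.quantile(val_scores, 1.0 - target_rate)

# Usage: threshold = calibrate_threshold(val_scores, val_labels)
#        test_pred = (test_scores >= threshold).astype(int)
```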
4 BIASAMP→
Now we present our metric, BiasAmp→, which retains the desirable properties of BiasAmpMALS, while addressing the shortcomings noted in the previous section. To account for the need to disentangle the two possible directions of bias amplification (Sec. 3.2.3), the metric consists of two values: BiasAmpA→T, corresponding to the amplification of bias resulting from the protected attribute influencing the task prediction, and BiasAmpT→A, corresponding to the amplification of bias resulting from the task influencing the protected attribute prediction. Concretely, the metric is defined as:
$$
\text{BiasAmp}_{\rightarrow} = \frac{1}{|\mathcal{A}||\mathcal{T}|}\sum_{t\in\mathcal{T},\, a\in\mathcal{A}} y(t,a)\,\Delta(t,a) + (1 - y(t,a))\left(-\Delta(t,a)\right) \tag{4}
$$

$$
\text{where}\quad y(t,a) = \mathbb{1}\left[P(T = t, A = a) > P(T = t)\,P(A = a)\right] \tag{5}
$$

$$
\Delta(t,a) = \begin{cases} P(\hat{T} = t \mid A = a) - P(T = t \mid A = a) & \text{if measuring } A \rightarrow T \\ P(\hat{A} = a \mid T = t) - P(A = a \mid T = t) & \text{if measuring } T \rightarrow A \end{cases} \tag{6}
$$
Eqn. 4 generalizes BiasAmpMALS to measure the amplification of both positive and negative correlations between task t and attribute a, depending on y(t, a), when the attributes are non-binary,
as discussed in Sec. 3.2.1. Eqn. 5 identifies the direction of correlation of attribute a with task t, properly accounting for base rates as described in Sec. 3.2.2. Finally, Eqn. 6 decouples the two possible directions of bias amplification as in Sec. 3.2.3. Since values may be negative, reporting the aggregated bias amplification value could obscure task-attribute pairs that exhibit strong bias amplification; thus, disaggregated results per pair can be returned for a more detailed diagnosis.
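To make Eqns. 4–6 concrete, below is a minimal sketch of how the A→T direction could be computed from binary indicator matrices for discrete predictions (the T→A direction is symmetric, swapping the roles of tasks and attributes); variable names and the split of which quantities come from the training versus test set follow our reading of Appendix A.1 and are not the reference implementation.

```python
import numpy as np

def bias_amp_a_to_t(A_train, T_train, A_test, T_test, T_test_pred):
    """Sketch of BiasAmp_{A->T} (Eqns. 4-6) for discrete predictions.

    A_train, T_train: binary training annotations, used only for y(t, a) (Eqn. 5).
    A_test, T_test:   binary ground-truth attributes and tasks on the test set.
    T_test_pred:      binary task predictions on the test set.
    All arrays have one column per attribute / task.
    """
    n_attr, n_task = A_train.shape[1], T_train.shape[1]
    total = 0.0
    for t in range(n_task):
        for a in range(n_attr):
            # y(t, a): is (t, a) positively correlated in the training data? (Eqn. 5)
            p_joint = (T_train[:, t] * A_train[:, a]).mean()
            p_indep = T_train[:, t].mean() * A_train[:, a].mean()
            y = p_joint > p_indep

            # Delta(t, a) = P(T_hat = t | A = a) - P(T = t | A = a) on the test set (Eqn. 6).
            mask = A_test[:, a] == 1
            delta = T_test_pred[mask, t].mean() - T_test[mask, t].mean()

            total += delta if y else -delta           # Eqn. 4
    return total / (n_attr * n_task)

# The T->A direction is analogous: condition on the task columns and compare
# predicted versus ground-truth attribute rates.
```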
Although in many of the examples we have looked at and will look at, A = {woman, man}, this formulation allows for any attribute set to be defined, including intersectional identities. This is achieved by having A encompass the cross-product of possible attributes, for example A = {Black woman, Black man, white woman, white man}.

We introduce a scenario for validating the decoupling aspect of our metric that simultaneously serves as inspiration for an intervention approach to mitigating bias amplification. We use a baseline amplification removal idea of applying segmentation masks (noisy or full) over the people in an image to mitigate bias stemming from human attributes (Wang et al., 2019). On the COCO classification task, we train a VGG16 (Simonyan & Zisserman, 2014) model pretrained on ImageNet (Russakovsky et al., 2015) to predict objects and gender, with a Binary Cross Entropy Loss over all outputs, and measure BiasAmpT→A and BiasAmpA→T; we report 95% confidence intervals over 5 runs of each scenario. In Fig. 4 we see, as expected, that as less of the person is visible, A→T decreases because there are fewer human attribute visual cues to bias the task prediction. On the other hand, T→A increases because the model must lean into task biases to predict the person’s attribute. However, we can also see from the overlapping confidence intervals that this technique of bias amplification mitigation does not appear to be particularly robust; we continue a discussion of this phenomenon in Sec. 5.2. Further mitigation techniques are outside of our scope, but we look to works like Singh et al. (2020); Wang et al. (2019); Agarwal et al. (2020).
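As a rough sketch of this training setup (our own approximation in PyTorch, not the exact code; the number of object categories and the single gender logit are assumptions), the joint object-and-gender classifier can be expressed as one multi-label head over a pretrained VGG16 trunk:

```python
import torch.nn as nn
from torchvision import models

NUM_OBJECTS = 79                 # assumption: number of COCO object categories used
NUM_OUTPUTS = NUM_OBJECTS + 1    # assumption: one extra logit for the binarized gender attribute

model = models.vgg16(pretrained=True)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_OUTPUTS)

criterion = nn.BCEWithLogitsLoss()   # binary cross-entropy over all outputs

def training_step(images, targets, optimizer):
    """One optimization step; targets is a (batch, NUM_OUTPUTS) multi-hot tensor."""
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, targets.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```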
5 ANALYSIS AND DISCUSSION
We now discuss some of the normative issues surrounding bias amplification, starting in Sec. 5.1 with the existence of T→A bias amplification, which implies the prediction of sensitive attributes; in Sec. 5.2 about the need for confidence intervals to make robust conclusions; and in Sec. 5.3 about scenarios in which the original formulation of bias amplification as a desire to match base correlations may not be what is actually wanted.
5.1 T→ A BIAS AMPLIFICATION
If we think more deeply about these bias amplifications, we might come to a normative conclusion that T → A, which measures sensitive attribute predictions conditioned on the tasks, should not exist in the first place. There are very few situations in which predicting sensitive attributes makes sense (Scheuerman et al., 2020; Larson, 2017), so we should carefully consider if this is strictly necessary for target applications. For the image domains discussed, by simply removing the notion of predicting gender, we trivially remove all T → A bias amplification. In a similar vein, there has been great work done on reducing gender bias in image captions (Hendricks et al., 2018; Tang et al., 2020), but it is often focused on targeting T → A amplification rather than A → T . When
disentangling the directions of bias, we find that the Equalizer model (Hendricks et al., 2018), which was trained with the intention of increasing the quality of gender-specific words in captions, inadvertently increases A → T bias amplification for certain tasks. In Fig. 5 we see examples where the content of the Equalizer’s caption exhibits bias coming from the person’s attribute. Even though the Equalizer model reduces T → A bias amplification in these images, it inadvertently increases A→ T . It is important to disentangle the two directions of bias and notice that while one direction is becoming more fair, another is actually becoming more biased. Although this may not always be the case, depending on the downstream application, perhaps this is a setting in which we could consider simply replacing all instances of gendered words like “man” and “woman” in the captions with “person” to trivially eliminate T → A, and focus on A → T bias amplification. Specifically when gender is the sensitive attribute, Keyes (2018) thoroughly explains how we should carefully think about why we might implement Automatic Gender Recognition (AGR), and avoid doing so.
On the other hand, sensitive attribute labels, ideally from self-disclosure, can be very useful. For example, these labels are necessary to measure A → T amplification, which is important to discover, as we do not want our prediction task to be biased for or against people with certain attributes.
5.2 LACK OF CONSISTENCY IN BIAS MEASUREMENT
Evaluation metrics, ours included, are specific to each model on each dataset. Under common loss functions such as Cross Entropy Loss, some evaluation metrics, like average precision, are not very sensitive to random seed. However, we noticed that bias amplification, along with other fairness metrics like FPR difference, often fluctuates greatly across runs. Because the loss functions that machine learning practitioners tend to default to using are proxies for accuracy, it makes sense that the various local minima, while equal in accuracy, are not necessarily equal in terms of other measurements. The phenomenon of differences between equally predictive models has been termed the Rashomon Effect (Breiman, 2001), or predictive multiplicity (Marx et al., 2020).
Thus, like previous work (Fisher et al., 2019), we urge transparency, and advocate for the inclusion of confidence intervals. To illustrate the need for this, we look at the facial image domain of CelebA (Liu et al., 2015), defining the two tasks of classifying “big nose” and “young” as our T , and treating the gender labels as our attribute, A. Note that we are not going to classify gender, for reasons raised in Sec. 5.1, so we only measure A → T amplification. For these tasks, women are more correlated with no big nose and being young, and men with big nose and not being young. We examine two different scenarios, one where our independent variable is model architecture, and another where it is the ratio between number of images of the majority groups (e.g., young women and not young men) and minority groups (e.g., not young women and young men). By looking at the confidence intervals, we can determine which condition allows us to draw reliable conclusions about the impact of the variable on bias amplification.
For model architecture, we train 3 models pretrained on ImageNet (Russakovsky et al., 2015) across 5 runs: ResNet18 (He et al., 2016), AlexNet (Krizhevsky et al., 2012), and VGG16 (Simonyan & Zisserman, 2014). Training details are in Appendix A.3. In Fig. 6 we see from the confidence intervals that while model architecture does not produce differences in bias amplification large enough to conclude anything about the relative fairness of these models, across-ratio differences are significant enough to draw conclusions about the impact of this ratio on bias amplification. We encourage researchers to include confidence intervals so that findings are more robust to random fluctuations.
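For reporting, a 95% confidence interval over repeated runs can be computed with a simple normal approximation; a small sketch (the run values shown are hypothetical):

```python
import numpy as np

def confidence_interval_95(values):
    """Mean and 95% CI half-width across runs (normal approximation).

    With only 5 runs, a t-based interval (e.g., scipy.stats.t.ppf) would be
    slightly wider and more appropriate; 1.96 is used here for simplicity.
    """
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    sem = values.std(ddof=1) / np.sqrt(len(values))
    return mean, 1.96 * sem

# e.g., bias amplification measured over 5 runs with different random seeds
runs = [0.031, 0.045, 0.028, 0.052, 0.039]   # hypothetical values
mean, half_width = confidence_interval_95(runs)
print(f"BiasAmp = {mean:.3f} ± {half_width:.3f}")
```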
5.3 CONSIDERATIONS FOR UNCERTAIN PREDICTION PROBLEMS
A property of bias amplification is that it does not conflict with having perfect accuracy. However, in turn, such a model with perfect accuracy would exactly replicate the correlations present in the training data. In this section we will discuss two cases that challenge this desire for matching training correlations: in the first, there will be no ground-truth labels from which to lift these correlations, and in the second, our goal is actually to not match the training correlations.
No Ground Truth. When we don’t have labels to derive training correlation rates from, bias amplification becomes harder to measure. Consider the fill-in-the-blank NLP task, where there is no ground-truth for how to fill in a sentence. Given “The [blank] went on a walk”, a variety of words could be equally suitable. Therefore, to measure bias amplification in this setting, we can manually set the base correlations, e.g., P(T = t|A = a), P(A = a|T = t). To see the effect that adjusting base correlations has, we test the bias amplification between occupations and gender pronouns, conditioning on the pronoun and filling in the occupation and vice versa. In Table 1, we report our measured bias amplification results on the FitBERT (Fill in the blanks BERT) (Havens & Stal, 2019; Devlin et al., 2019) model using various sources as our base correlation of bias from which amplification is measured. The same outputs from the model are used for each set of pronouns, and the independent variable we manipulate is the source of the base correlations: 1) equality amongst the pronouns (using 2 and 3 pronouns), 2) co-occurrence counts from English Wikipedia (one of the datasets BERT was trained on), and 3) WinoBias (Zhao et al., 2018) with additional information supplemented from the 2016 U.S. Labor Force Statistics data. Details are in Appendix A.4. It is interesting to note that relative to U.S. Labor Force data on these particular occupations, FitBERT actually exhibits no bias amplification. For the occupation of hairdresser, the Labor statistics are biased at 92% women while FitBERT is at 80%, reflecting in fact a reduction in bias amplification. This demonstrates the importance of setting appropriate base correlations, because picking one that already exhibits strong amounts of bias will only flag models that further amplify this. In the next section we discuss another manifestation of this phenomenon, where the training correlation itself would be misleading to compare to, because of the strong amount of bias it contains.
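As an illustration of how the model-side conditionals, e.g., P(Â = a | T = t), can be estimated for a masked language model, here is a sketch using the generic HuggingFace fill-mask pipeline rather than the FitBERT wrapper used in the paper; the single sentence template, the fixed article “a”, and the renormalization over the pronoun set are simplifications on our part.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def pronoun_probs(occupation, pronouns=("he", "she")):
    """Relative probability of each pronoun filling the blank, renormalized over the set."""
    outputs = fill(f"[MASK] is a {occupation}.", targets=list(pronouns))
    scores = {o["token_str"]: o["score"] for o in outputs}
    total = sum(scores.values())
    return {p: scores.get(p, 0.0) / total for p in pronouns}

print(pronoun_probs("hairdresser"))   # illustrative output, e.g. {'he': 0.2, 'she': 0.8}
```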
Future Outcome Prediction. Next, we examine the risk prediction setting, where matching the base correlations may not be our desired goal. The labels here do not represent an objective ground truth because they: 1) suffer from problems like historical and selection bias (Suresh & Guttag, 2019; Olteanu et al., 2019; Green, 2020), and 2) will be used for prediction of a future event for which no one knows the answer. We will put aside the significant problems stemming from the first point for this current discussion, and focus on the latter. In this section, when using the word “predict” we will revert from the machine learning meaning and adopt the colloquial sense of the word, as in the forecast of a future event. What we had called object prediction, we will now call object labeling.
Consider bias amplification in the context of a domain like risk prediction relative to previous domains looked at, such as object detection. The difference is not in tabular versus vision, but rather in prediction versus labeling. The notion of “ground-truth” doesn’t quite exist the way we might think about it, because given the input features that define a particular person, one could imagine an individual with these features who does recidivate, and one who does not (Barocas et al., 2019). The training label is just how someone with these input features once acted, but is not necessarily a rigid indicator of how someone else with these features will act. On the other hand, in object labeling the labels are very much the ground-truth, and thus bias amplification is a reasonable metric to gauge fairness by. However, we do not know this for risk prediction, and thus, matching the training correlations should not be our intended goal (Wick et al., 2019). In Fig. 7 we show the metrics
as the decision threshold varies. As we can see from the continued existence of a difference in FPRs between the two groups, there is no threshold that could be picked at which this disparity does not exist (except at the trivial cases of a 0 or 10 threshold), even though there do exist thresholds where bias is not being amplified.
For each application, different metrics of fairness are more or less applicable, and BiasAmp→ is no different. It is crucial that we think carefully when deciding how to evaluate a model. In previous applications we imagined the direction of systemic bias to be captured by the training data, and thus we lifted base correlations of bias from there. However, one could imagine a manual intervention similar to the one used for NLP being applied to other tasks like risk prediction, where a domain expert verifies the direction of bias determined by the training set, and even sets the base correlations.
6 CONCLUSION
In this paper, we take a deep dive into the measure of bias amplification. We introduce a new metric, BiasAmp→, and show that through the use of this metric and its directional components, diagnosing models provides more actionable insights. Additionally, we discuss normative considerations, such as thinking carefully about why we might be performing sensitive attribute prediction, incorporating confidence intervals as the norm when reporting fairness metrics, and exercising care when determining which fairness metrics are applicable and what assumptions they encode.
A APPENDIX
A.1 ADDITIONAL METRIC DETAILS
We provide additional details here about BiasAmp→, as defined in Sec. 4.
In practice the indicator variable, y(t, a), is computed over the statistics of the training set, whereas everything else is computed over the test set. The reason behind this is that the direction of bias is determined by the existing biases in the training set. Additionally, for computing the integration across all thresholds, we sparsely sample from the output probabilities to compute an approximation.
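The sparse-sampling approximation mentioned above could look roughly like the following sketch, where the single-threshold measurement being integrated is left as a callable; this is an assumption about the procedure rather than the released code.

```python
import numpy as np

def integrated_bias_amp(bias_amp_at_threshold, num_samples=20):
    """Approximate the integral of bias amplification over decision thresholds
    by sparsely sampling thresholds in (0, 1) and averaging.

    bias_amp_at_threshold: callable mapping a threshold to a bias amplification
                           value (e.g., binarize the model's output probabilities
                           at that threshold and evaluate BiasAmp on the result).
    """
    thresholds = np.linspace(0.05, 0.95, num_samples)
    values = [bias_amp_at_threshold(th) for th in thresholds]
    return float(np.mean(values))
```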
Comparisons of the values output by BiasAmp→ should only be done relatively: within one of the directions at a time, either A → T or T → A, and on one dataset. In particular, comparing A → T to T → A directly is not a signal as to which direction of amplification is stronger.
A.2 WALKING THROUGH THE EQUATIONS FOR FIGURE 1 SCENARIO
In this section we concretely write out the equations for BiasAmpMALS and BiasAmp→ for the scenario shown in Fig. 1, to better clarify what each metric captures. As a reminder, in this scenario A = {a0, a1} and T = {t0, t1}, and a0 is correlated with t0, and a1 with t1.
$$
\begin{aligned}
\text{BiasAmp}_{\text{MALS}}
&= \frac{1}{2}\sum_{t=1}^{2}\sum_{a=1}^{2} \mathbb{1}\left(P(A_a = 1 \mid T_t = 1) > \frac{1}{2}\right)\left(P(\hat{A}_a = 1 \mid \hat{T}_t = 1) - P(A_a = 1 \mid T_t = 1)\right) && (7\text{--}8)\\
&= \frac{1}{2}\Big[\,1 \times \big(P(\hat{A}_0 = 1 \mid \hat{T}_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\big) && (9)\\
&\qquad +\, 0 \times \big(P(\hat{A}_1 = 1 \mid \hat{T}_0 = 1) - P(A_1 = 1 \mid T_0 = 1)\big) && (10)\\
&\qquad +\, 0 \times \big(P(\hat{A}_0 = 1 \mid \hat{T}_1 = 1) - P(A_0 = 1 \mid T_1 = 1)\big) && (11)\\
&\qquad +\, 1 \times \big(P(\hat{A}_1 = 1 \mid \hat{T}_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\big)\Big] && (12)\\
&= \frac{1}{2}\Big[\big(P(\hat{A}_0 = 1 \mid \hat{T}_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\big) && (13)\\
&\qquad +\, \big(P(\hat{A}_1 = 1 \mid \hat{T}_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\big)\Big] && (14)
\end{aligned}
$$
For BiasAmp→, our equation simplifies in the case of discrete predictions as follows:
$$
\begin{aligned}
\text{BiasAmp}_{A \rightarrow T}
&= \frac{1}{4}\Big[\big(P(\hat{T}_0 = 1 \mid A_0 = 1) - P(T_0 = 1 \mid A_0 = 1)\big) && (15)\\
&\qquad -\, \big(P(\hat{T}_0 = 1 \mid A_1 = 1) - P(T_0 = 1 \mid A_1 = 1)\big) && (16)\\
&\qquad -\, \big(P(\hat{T}_1 = 1 \mid A_0 = 1) - P(T_1 = 1 \mid A_0 = 1)\big) && (17)\\
&\qquad +\, \big(P(\hat{T}_1 = 1 \mid A_1 = 1) - P(T_1 = 1 \mid A_1 = 1)\big)\Big] && (18)
\end{aligned}
$$

$$
\begin{aligned}
\text{BiasAmp}_{T \rightarrow A}
&= \frac{1}{4}\Big[\big(P(\hat{A}_0 = 1 \mid T_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\big) && (20)\\
&\qquad -\, \big(P(\hat{A}_0 = 1 \mid T_1 = 1) - P(A_0 = 1 \mid T_1 = 1)\big) && (21)\\
&\qquad -\, \big(P(\hat{A}_1 = 1 \mid T_0 = 1) - P(A_1 = 1 \mid T_0 = 1)\big) && (22)\\
&\qquad +\, \big(P(\hat{A}_1 = 1 \mid T_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\big)\Big] && (23)
\end{aligned}
$$
A.3 DETAILS AND EXPERIMENT FROM LACK OF CONSISTENCY IN BIAS MEASUREMENT
For the models we trained in Sec. 5.2, we performed hyperparameter tuning on the validation set, and ended up using the following: ResNet18 had a learning rate of .0001, AlexNet of .0003, and VGG16 of .00014. All models were trained with stochastic gradient descent, a batch size of 64, and 10 epochs.
A.4 DETAILS ON MEASURING BIAS AMPLIFICATION IN FITBERT
Here we provide additional details behind the numbers presented in Tbl. 1 in Sec. 5.3.
As noted (and done) by Liang et al. (2020), a large and diverse corpus of sentences is needed to sample from the large variety of contexts. However, that is out of scope for this work, where we simply use two sentences for testing: “[he/she/(they)] is a(n) [occupation]” and “[he/she/(they)] was a(n) [occupation]”.
When calculating the amount of bias amplification when the base rates are equal, we picked the direction of bias based on that provided by the WinoBias dataset. In practice, this can be thought of as setting the base correlation P(A = a|T = t) for a men-biased job like “cook” to be .5 + ε for “he” and .5 − ε for “she” when there are two pronouns, and .33 + ε for “he” and .33 − ε for “she” and “they” when there are three, where in practice we used ε = 1e−7. This ensures that the indicator variable, y(t, a) from Eq. 5, is set in the direction of the gender bias, but the magnitudes of ∆(t, a) from Eq. 6 are not affected to a significant degree.
To generate a rough approximation of what training correlation rates could look like in this domain, we look to one of the datasets that BERT was trained on, the Wikipedia dataset. We do so by simply counting the co-occurrences of all the occupations along with gendered words such as “man”, “he”, “him”, etc. There are flaws with this approach because in a sentence like “She went to see the doctor.”, the pronoun is in fact not referring to the gender of the person with the occupation. However, we leave a more accurate measurement of this to future work, as our aim in showing these results was more to demonstrate the manipulation of the base correlation rate than to rigorously measure the training correlation rate.
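A rough sketch of this co-occurrence counting (our own simplification; the occupation list, gendered word sets, and tokenization are placeholders, not the exact procedure used):

```python
from collections import Counter
import re

GENDERED = {
    "man": {"man", "men", "he", "him", "his"},
    "woman": {"woman", "women", "she", "her", "hers"},
}
OCCUPATIONS = ["doctor", "nurse", "hairdresser", "carpenter"]  # illustrative subset

def cooccurrence_counts(sentences):
    """Count, per occupation, how often each gendered word set co-occurs in a sentence."""
    counts = {occ: Counter() for occ in OCCUPATIONS}
    for sent in sentences:
        tokens = set(re.findall(r"[a-z]+", sent.lower()))
        for occ in OCCUPATIONS:
            if occ in tokens:
                for gender, words in GENDERED.items():
                    if tokens & words:
                        counts[occ][gender] += 1
    return counts

# Base correlation estimate, e.g. P(A = woman | T = doctor):
# counts["doctor"]["woman"] / sum(counts["doctor"].values())
```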
We use 32 rather than 40 occupations from WinoBias (Zhao et al., 2018), because when we went to the 2016 U.S. Labor Force Statistics data (Bureau of Labor Statistics, 2016) to collect the actual numbers of each gender and occupation in order to be able to calculate P(T = t|A = a), since WinoBias only had P(A = a|T = t), we found 8 occupations to be too ambiguous to be able to determine the actual numbers. For example, for “attendant”, there were many different attendant jobs listed, such as “flight attendants” and “parking lot attendant”, so we opted to drop these jobs from the list of 40. The 8 from the original WinoBias dataset that we ignored are: supervisor, manager, mechanician, CEO, teacher, assistant, clerk, and attendant. The first four are biased towards men, and the latter four towards women, so that we did not skew the distribution of jobs biased towards each gender.
| 1. What are the strengths and weaknesses of the paper regarding its contributions?
2. How does the reviewer assess the BiasAmp metric, and what are their suggestions for improving it?
3. What are the limitations of the discussion on the dependence of bias measurements on randomness?
4. How does the reviewer evaluate the discussion on the usage of error bars and the consistency of different metrics?
5. What are the reviewer's concerns regarding the robustness of the BiasAmp metric, and how could it be addressed?
6. How does the reviewer view the normative discussion on the use of bias amplification ideas in different domains, and what are their suggestions for improvement?
7. Does the reviewer believe the paper should be accepted, and what changes would make it a more complete and important contribution? | Review | Review
The paper builds on the "bias amplification" aspect of fairness in machine learning literature i.e. the tendency of models to make predictions that are biased in a way that they amplify societal correlations. The paper claims three major contributions: a metric, discussion about the dependence of bias measurements on randomness, and a normative discussion about the use of bias amplification ideas in different domains.
Overall I find the metric as the only major contribution of the paper, and below I will explain why.
The BiasAmp metric makes a significant contribution in terms of fixing the drawbacks of the previously proposed metric from Zhao et al. 2017(BiasAmp_MALS). It would be more effective if the work also included a study such as Zhao et al demonstrating how to mitigate the bias as measured by the BiasAmp measure.
The discussion around the usage of error bars because of the Rashomon effect seems incomplete and almost trivial. First, in my opinion, it is indeed necessary to have error bars if there are metrics whose measurements vary across different runs, regardless of whether that measure was being optimized on or not. But a more nuanced idea would be either: (a) usually a fairness metric is used as guidance on whether a specific model is deployable or not, rather than being a property of a training method, unless, however, an ensemble of a number of models trained by the same method is being used in deployment. (b) Fairness metrics may be used as a model selection criterion: since the fairness metric is not being optimized upon but rather a proxy of accuracy is, it just means that, out of the models with equally high accuracy, we need to choose the model with the least bias (as measured by a trustworthy metric).
I find the discussion around the 'consistency' of different metrics incomplete, where average precision (AP) seems to rank models consistently while BiasAmp and others don't. BiasAmp is a pretty sophisticated metric and, because of its use of conditional probabilities of all kinds, prone to pitfalls such as Simpson's paradox when measuring bias. I would be curious to see it robustly tested, e.g., with another experiment such as Fig. 4, where the authors control the amount of bias by tuning at the source of the bias.
The discussion around the use of bias amplification for prediction problems where the outcome is chance-based is a little confusing, and it does not provide a fresh perspective beyond the discussion already in the fairness literature around understanding the significance of inherent 'uncertainty' or 'randomness' in application domains other than vision (and probably language). For example, the Fair ML book (Barocas, Hardt, and Narayanan 2017) does mention uncertainty in the context of such applications (on pages 34, 56).
Overall, I am not very convinced that the paper should be accepted unless fellow reviewers think strongly otherwise. However, I think the paper has the potential to be a more complete and important contribution with a more comprehensive study around the technical contributions and clearer discussion about the normative contributions. |
The Word Embedding Association Test (WEAT) (Caliskan et al., 2017) measures bias amplification in de-contextualized word embeddings, looking at correlations but not causations (Bolukbasi et al., 2016). However, with newer models like BERT and ELMo that have contextualized embeddings, WEAT does not work (May et al., 2019), so new techniques have been proposed incorporating context (Lu et al., 2019; Kuang & Davison, 2016). We use these models to measure the directional aspect of these amplifications, as well as to situate them in the broader world of bias amplification.
Directionality. Directionality of amplification has been observed in computer vision (Stock & Cisse, 2018) and language (Qian et al., 2019). It has also been studied with causality (Bhattacharya & Vogt, 2007; Wooldridge, 2016; Pearl, 2010; Middleton et al., 2016; Steiner & Yongnam, 2016). We take a deeper and more empirical approach.
Predictive Multiplicity. The Rashomon Effect (Breiman, 2001), or multiplicity of good models, has been studied in various contexts. The variables investigated that differ across good models include explanations (Hancox-Li, 2020), individual treatments (Marx et al., 2020; Pawelczyk et al., 2020), and variable importance (Fisher et al., 2019; Dong & Rudin, 2019). We build on these discoveries and investigate how fairness measurements also differ between equally “good” models.
3 EXISTING FAIRNESS METRICS
In this section we present existing fairness metrics and show how bias amplification can distinguish errors resulting from under- and overclassification in a way that others cannot, followed by a discussion of the shortcomings of BiasAmpMALS.
3.1 OVERVIEW OF EXISTING FAIRNESS METRICS
We begin with a review of existing fairness metrics in a concrete classification setting. We consider again the example from Fig. 1, where on this dataset women (a0) are correlated with painting (t0), and men (a1) with not painting (t1), such that there are N images each of (a0, t0) and (a1, t1) but only N/2 images of (a0, t1) and (a1, t0). A classifier trained to recognize painting on this data is likely to learn this association and over-predict painting on images of women and under-predict painting on images of men; however, algorithmic interventions may counteract this effect and in fact result in the opposite behavior.
Fig. 2 shows the behavior of fairness metrics under varying amounts of learned amplification of the correlation. The four fairness metrics are: False Positive Rate (FPR) and True Positive Rate (TPR) difference: the difference in false positive (true positive) rate of predicting the label t0 on images of a1 versus on images of a0 (Chouldechova, 2016; Hardt et al., 2016), accuracy difference: difference between the overall task prediction accuracy on images of a1 versus on images of a0 (Berk et al., 2017), and mean accuracy across subgroups: mean task prediction accuracy across the four image subgroups ((a0, t0), (a1, t0), (a0, t1), and (a1, t1)) (Buolamwini & Gebru, 2018). However, these metrics are not designed to account for the training correlations, and are unable to distinguish between cases of increased or decreased learned correlations, as seen in Fig. 2.
Zhao et al. (2017) introduced an alternative to these that explicitly captures the notion of bias amplification. Concretely, they consider P (A = a|T = t) of the training data as the fraction of times a protected attribute a ∈ A appears on images corresponding to task t ∈ T . They then compare this with the test-time predictions made by the model, P (Â = a|T̂ = t), or the number of times attribute a is predicted on images where the task is predicted as t, which allows them to measure bias amplification in the absence of any additional annotations on the hold-out set. Note that in this formulation they are assuming that the model is making predictions for both the task and the attribute. The full bias amplification metric (reformulated in our terms), is computed as BiasAmpMALS =
1
|T | |T |∑ t=1 |A|∑ a=1 1 ( P (Aa = 1|Tt = 1) > 1 |A| ) ︸ ︷︷ ︸
y(t,a)
( P (Âa = 1|T̂t = 1)− P (Aa = 1|Tt = 1) ) ︸ ︷︷ ︸
∆(t,a)
(1)
Fig. 2 empirically demonstrates that this metric is able to capture the level of increasing bias amplification. (For consistency in comparison with prior metrics, we assume the model always correctly predicts the protected attribute A.) However, as we discuss in the next section, there are some properties of bias amplification that this metric is not able to capture: for example, it does not distinguish between errors in predicting the protected attribute versus errors in predicting the task. Thus we introduce a new metric (last graph of Fig. 2) which maintains the desirable properties from Zhao et al. (2017) while including a number of innovations.
3.2 SHORTCOMINGS OF BIASAMPMALS
Despite the advantages we just documented about BiasAmpMALS (Eqn. 1) and its ability to distinguish under- and overclassifications of training correlations, this metric also suffers from a number of shortcomings. To ground our discussion, we will work directly with the model outputs released by Zhao et al. (2017) from their Conditional Random Field (CRF) model on COCO (Lin et al., 2014), which has predictions for gender and objects detected for each image.
3.2.1 NON-BINARY ATTRIBUTES
The first shortcoming is that the metric assumes that the protected attributes are binary, limiting its use: the indicator variable y(t, a) implicitly chooses only one of the attribute a ∈ A to be associated with every task t. Consider a task t0 ∈ T such that a0 ∈ A is associated with it, but none of the other ai ∈ A are, where i 6= 0. In this scenario, diff(t0, ai) is only considered when there is one other ai such that ai = ¬a, since diff(t, a) = −diff(t,¬a). A simple addition of −diff for all ai’s when y is 0 ensures that when there are more than two groups, their bias amplification is also counted.
3.2.2 BASE RATES
The second shortcoming of BiasAmpMALS is the fact that the metric does not take into account the base rates of each attribute. Concretely, when determining in y(t, a) of Eqn. 1 whether the attribute a is correlated with the task t, P (A = a|T = t) is compared to 1|A| . However, this assumes that all a’s within A are evenly distributed, which may not be the case. For example, in COCO there are about 2.5x as many men as women, so it would appear that most objects positively correlate with men simply by nature of there being an overrepresentation of men. Consider the object oven; BiasAmpMALS calculates P (A = man|T = oven) = 0.56 > 12 and thus considers this object to be correlated with men rather than women. However, computing P (A = man, T = oven) = 0.0103 < 0.0129 = P (A = man)P (T = oven) reveals that men are in fact not correlated with oven, and the seeming overrepresentation comes from the fact that men are overrepresented in the dataset more generally. Not surprisingly, the model trained on this data learns to associate women with ovens and underpredicts men with ovens at test time, i.e., P (Â = man|T̂ = oven)− P (A = man|T = oven) = −0.10. BiasAmpMALS erroneously counts this as inverse bias amplification.
3.2.3 ENTANGLING DIRECTIONS
Another shortcoming we observe is the inability to distinguish between different types of bias amplification. Zhao et al. (2017) discovers that “Technology oriented categories initially biased toward men such as keyboard ... have each increased their bias toward males by over 0.100.” Concretely, from Eqn. 1 P (A = man|T = keyboard) = .70 and P (Â = man|T̂ = keyboard) = .83, demonstrating an amplification of bias. However, the direction or cause of bias amplification remains unclear: is the presence of man in the image increasing the probability of predicting a keyboard, or vice versa? Looking more closely at the model’s disentangled predictions, we see that:
P (T̂ = keyboard|A = man) = 0.0020 < 0.0032 = P (T = keyboard|A = man) (2) P (Â = man|T = keyboard) = 0.78 > 0.70 = P (A = man|T = keyboard) (3)
indicating that keyboards are under-predicted on images with men yet men are over-predicted on images with keyboards. Thus the root cause of this amplification appears to be in the gender predictor rather than the object detector. Such disentangement allows us to properly focus algorithmic intervention efforts. This also highlights the need to consider the ground truth labels on the hold-out set when measuring bias amplification in addition to the predictions (since when considering only the predictions, it is impossible to decouple the different sources of bias).
3.3 THRESHOLD
We also fully replicate the original experiment from Zhao et al. (2017) using BiasAmpMALS on their model predictions and measure .040. However, we observe that “man” is being predicted at a higher rate (75.6%) than is actually present (71.2%). With this insight, we tune the decision threshold on the validation set such that the gender predictor is well-calibrated to be predicting the same percentage of images to have men as the dataset actually has. When we calculate BiasAmpMALS on these newly thresholded predictions for the test set, we see bias amplification drop from 0.040 to 0.001 just as a result of this threshold change, outperforming even the solution proposed in Zhao et al. (2017) of corpus-level constraints, which achieved a drop to only 0.021. Fairness can be quite sensitive to the threshold chosen (Chen & Wu, 2020), so careful selection should be done in picking the threshold, rather than using the default of .5. In Fig. 3 we show how the amount of bias amplification, as measured by BiasAmpMALS and BiasAmpT→A, changes as we vary the threshold, i.e., proportion of people classified to be a man. We can see that when the threshold is chosen to be the one wellcalibrated on the validation set rather than the default threshold, bias amplification is measured to be closer to zero for both metrics. From here on out when a threshold is needed, we will pick it to be well-calibrated on the validation set. Although we do not take this approach, one could also imagine integrating bias amplification across proportion in order to have a threshold-agnostic measure of bias amplification, similar to what is proposed by Chen & Wu (2020). We do not do this in our experiments because at deployment time, it is often the case that discrete predictions are required.
4 BIASAMP→
Now we present our metric, BiasAmp→, which retains the desirable properties of BiasAmpMALS, while addressing the shortcomings noted in the previous section. To account for the need to disentangle the two possible directions of bias amplification (Sec. 3.2.3) the metric consists of two values, BiasAmpA→T corresponding to the amplification of bias resulting from the protected attribute influencing the task prediction, and BiasAmpT→A, corresponding to the amplification of bias resulting from the task influencing the protected attribute prediction. Concretely, the metric is defined as:
BiasAmp→ = 1 |A||T | ∑
t∈T ,a∈A y(t, a)∆(t, a) + (1− y(t, a))(−∆(t, a)) (4)
where y(t, a) = 1 [P (T = t, A = a) > P (T = t)P (A = a)] (5)
∆(t, a) = { P (T̂ = t|A = a)− P (T = t|A = a) if measuring A→ T P (Â = a|T = t)− P (A = a|T = t) if measuring T → A
(6)
Eqn. 4 generalizes BiasAmpMALS to measure the amplification of both positive and negative correlations between task t and attribute a, depending on y(t, a), when the attributes are non-binary,
as discussed in Sec. 3.2.1. Eqn. 5 identifies the direction of correlation of attribute a with task t, properly accounting for base rates as described in Sec. 3.2.2. Finally, Eqn. 6 decouples the two possible directions of bias amplification as in Sec. 3.2.3. Since values may be negative, reporting the aggregated bias amplification value could obscure task-attribute pairs that exhibit strong bias amplification; thus, disaggregated results per pair can be returned for a more detailed diagnosis.
Although in much of the examples we have and will look at, A = {woman,man}, this formulation allows for any attribute set to be defined, including intersectional identities. This is achieved by having A encompass the cross-product of possible attributes, for example A = {Black woman,Black man,white woman,white man}. We introduce a scenario for validating the decoupling aspect of our metric, that simultaneously serves as inspiration for an intervention approach to mitigating bias amplification. We use a baseline amplification removal idea of applying segmentation masks (noisy or full) over the people in an image to mitigate bias stemming from human attributes (Wang et al., 2019). We train on the COCO classification task a VGG16 (Simonyan & Zisserman, 2014) model pretrained on ImageNet (Russakovsky et al., 2015) to predict objects and gender, with a Binary Cross Entropy Loss over all outputs, and measure BiasAmpT→A and BiasAmpA→T ; we report 95% confidence intervals for 5 runs of each scenario. In Fig. 4 we see, as expected, that as less of the person is visible, A→T decreases because there are less human attribute visual cues to bias the task prediction. On the other hand, T→A increases because the model must lean into task biases to predict the person’s attribute. However, we can also see from the overlapping confidence intervals that this technique of bias amplification mitigation does not appear to be particularly robust; we continue a discussion of this phenomenon in Sec. 5.2. Further mitigation techniques are outside of our scope, but we look to works like Singh et al. (2020); Wang et al. (2019); Agarwal et al. (2020).
5 ANALYSIS AND DISCUSSION
We now discuss some of the normative issues surrounding bias amplification, starting in Sec. 5.1 with the existence of T→A bias amplification, which implies the prediction of sensitive attributes; in Sec. 5.2 about the need for confidence intervals to make robust conclusions; and in Sec. 5.3 about scenarios in which the original formulation of bias amplification as a desire to match base correlations may not be what is actually wanted.
5.1 T→ A BIAS AMPLIFICATION
If we think more deeply about these bias amplifications, we might come to a normative conclusion that T → A, which measures sensitive attribute predictions conditioned on the tasks, should not exist in the first place. There are very few situations in which predicting sensitive attributes makes sense (Scheuerman et al., 2020; Larson, 2017), so we should carefully consider if this is strictly necessary for target applications. For the image domains discussed, by simply removing the notion of predicting gender, we trivially remove all T → A bias amplification. In a similar vein, there has been great work done on reducing gender bias in image captions (Hendricks et al., 2018; Tang et al., 2020), but it is often focused on targeting T → A amplification rather than A → T . When
disentangling the directions of bias, we find that the Equalizer model (Hendricks et al., 2018), which was trained with the intention of increasing the quality of gender-specific words in captions, inadvertently increases A → T bias amplification for certain tasks. In Fig. 5 we see examples where the content of the Equalizer’s caption exhibits bias coming from the person’s attribute. Even though the Equalizer model reduces T → A bias amplification in these images, it inadvertently increases A→ T . It is important to disentangle the two directions of bias and notice that while one direction is becoming more fair, another is actually becoming more biased. Although this may not always be the case, depending on the downstream application, perhaps this is a setting in which we could consider simply replacing all instances of gendered words like “man” and “woman” in the captions with “person” to trivially eliminate T → A, and focus on A → T bias amplification. Specifically when gender is the sensitive attribute, Keyes (2018) thoroughly explains how we should carefully think about why we might implement Automatic Gender Recognition (AGR), and avoid doing so.
On the other hand, sensitive attribute labels, ideally from self-disclosure, can be very useful. For example, these labels are necessary to measureA→ T amplification, which is important to discover, as we do not want our prediction task to be biased for or against people with certain attributes.
5.2 LACK OF CONSISTENCY IN BIAS MEASUREMENT
Evaluation metrics, ours’ included, are specific to each model on each dataset. Under common loss functions such as Cross Entropy Loss, some evaluation metrics, like average precision, are not very sensitive to random seed. However, we noticed that bias amplification, along with other fairness metrics like FPR difference, often fluctuates greatly across runs. Because the loss functions that machine learning practitioners tend to default to using are proxies for accuracy, it makes sense that the various local minima, while equal in accuracy, are not necessarily equal in terms of other measurements. The phenomena of differences between equally predictive models has been termed the Rashomon Effect (Breiman, 2001), or predictive multiplicity (Marx et al., 2020).
Thus, like previous work (Fisher et al., 2019), we urge transparency, and advocate for the inclusion of confidence intervals. To illustrate the need for this, we look at the facial image domain of CelebA (Liu et al., 2015), defining the two tasks of classifying “big nose” and “young” as our T , and treating the gender labels as our attribute, A. Note that we are not going to classify gender, for reasons raised in Sec. 5.1, so we only measure A → T amplification. For these tasks, women are more correlated with no big nose and being young, and men with big nose and not being young. We examine two different scenarios, one where our independent variable is model architecture, and another where it is the ratio between number of images of the majority groups (e.g., young women and not young men) and minority groups (e.g., not young women and young men). By looking at the confidence intervals, we can determine which condition allows us to draw reliable conclusions about the impact of the variable on bias amplification.
For model architecture, we train 3 models pretrained on ImageNet (Russakovsky et al., 2015) across 5 runs: ResNet18 (He et al., 2016), AlexNet (Krizhevsky et al., 2012), and VGG16 (Simonyan & Zisserman, 2014). Training details are in Appendix A.3. In Fig. 6 we see from the confidence intervals that while model architecture does not result in differing enough of bias amplification to conclude anything about the relative fairness of these models, across-ratio differences are significant enough to draw conclusions about the impact of this ratio on bias amplification. We encourage researchers to include confidence intervals so that findings are more robust to random fluctuations.
5.3 CONSIDERATIONS FOR UNCERTAIN PREDICTION PROBLEMS
A property of bias amplification is that it does not conflict with having perfect accuracy. However, in turn, such a model with perfect accuracy would exactly replicate the correlations present in the training data. In this section we will discuss two cases that challenge this desire for matching training correlations: in the first, there will be no ground-truth labels from which to lift these correlations, and in the second, our goal is actually to not match the training correlations.
No Ground Truth. When we don’t have labels to derive training correlation rates from, bias amplification becomes harder to measure. Consider the fill-in-the-blank NLP task, where there is no ground-truth for how to fill in a sentence. Given “The [blank] went on a walk”, a variety of words could be equally suitable. Therefore, to measure bias amplification in this setting, we can manually set the base correlations, e.g., P (T = t|A = a), P (A = a|T = t). To see the effect that adjusting base correlations has, we test the bias amplification between occupations and gender pronouns, conditioning on the pronoun and filling in the occupation and vice versa. In Table 1, we report our measured bias amplification results on the FitBERT (Fill in the blanks BERT) (Havens & Stal, 2019; Devlin et al., 2019) model using various sources as our base correlation of bias from which amplification is measured. The same outputs from the model are used for each set of pronouns, and the independent variable we manipulate is the source of: 1) equality amongst the pronouns (using 2 and 3 pronouns), 2) co-occurrence counts from English Wikipedia (one of the datasets BERT was trained on), and 3) WinoBias (Zhao et al., 2018) with additional information supplemented from the 2016 U.S. Labor Force Statistics data. Details are in Appendix A.4. It is interesting to note that relative to U.S. Labor Force data on these particular occupations, FitBERT actually exhibits no bias amplification. For the occupation of hairdresser, the Labor statistics are biased at 92% women while FitBERT is at 80%, reflecting in fact a reduction in bias amplification. This demonstrates the importance of setting appropriate base correlations, because picking one that already exhibits strong amounts of bias will only flag models that further amplify this. In the next section we discuss another manifestation of this phenomenon, where the training correlation itself would be misleading to compare to, because of the strong amount of bias it contains.
Future Outcome Prediction. Next, we examine the risk prediction setting, where matching the base correlations may not be our desired goal. The labels here do not represent an objective ground truth because they: 1) suffer from problems like historical and selection bias (Suresh & Guttag, 2019; Olteanu et al., 2019; Green, 2020), and 2) will be used for prediction of a future event for which no one knows the answer. We will put aside the significant problems stemming from the first point for this current discussion, and focus on the latter. In this section, when using the word “predict” we will revert from the machine learning meaning and adopt the colloquial sense of the word, as in the forecast of a future event. What we had called object prediction, we will now call object labeling.
Consider bias amplification in the context of a domain like risk prediction relative to previous domains looked at, such as object detection. The difference is not in tabular versus vision, but rather in prediction versus labeling. The notion of “ground-truth” doesn’t quite exist the way we might think about it, because given the input features that define a particular person, one could imagine an individual with these features who does recidivate, and one who does not (Barocas et al., 2019). The training label is just how someone with these input features once acted, but is not necessarily a rigid indicator of how someone else with these features will act. On the other hand, in object labeling the labels are very much the ground-truth, and thus bias amplification is a reasonable metric to gauge fairness by. However, we do not know this for risk prediction, and thus, matching the training correlations should not be our intended goal (Wick et al., 2019). In Fig. 7 we show the metrics
as a function of the decision threshold; from the continued existence of a difference in FPRs between the two groups, we can see that there is no threshold that could be picked at which this disparity does not exist (except at the trivial cases of a 0 or 10 threshold), even though there do exist thresholds at which bias is not being amplified.
For each application, different metrics of fairness are more or less applicable, and BiasAmp→ is no different. It is crucial that we think carefully when deciding how to evaluate a model. In previous applications we imagined the direction of systemic bias to be captured by the training data and thus we lifted base correlations of bias from there. However, one could imagine a manual intervention similar to the one done for NLP on other tasks like risk prediction, where a domain expert verifies the direction of bias determined by the training set, and even sets the base correlations.
6 CONCLUSION
In this paper, we take a deep dive into the measure of bias amplification. We introduce a new metric, BiasAmp→, whose directional components allow model diagnosis to yield more actionable insights. Additionally, we discuss normative considerations, such as thinking carefully about why we might be performing sensitive attribute prediction, incorporating confidence intervals as the norm when reporting fairness metrics, and exercising care when determining which fairness metrics are applicable and what assumptions they encode.
A APPENDIX
A.1 ADDITIONAL METRIC DETAILS
We provide additional details here about BiasAmp→, as defined in Sec. 4.
In practice the indicator variable, y(t, a), is computed over the statistics of the training set, whereas everything else is computed over the test set. The reason behind this is that the direction of bias is determined by the existing biases in the training set. Additionally, when integrating across all thresholds, we sparsely sample from the outputted probabilities to compute an approximation.
Comparisons of the values outputted by BiasAmp→ should only be done relatively: within one direction at a time, either A → T or T → A, and on one dataset. In particular, comparing A → T to T → A values directly is not a signal as to which direction of amplification is stronger.
A.2 WALKING THROUGH THE EQUATIONS FOR FIGURE 1 SCENARIO
In this section we concretely write out the equations for BiasAmpMALS and BiasAmp→ for the scenario shown in Fig. 1, to better clarify what each metric captures. As a reminder, in this scenario A = {a0, a1} and T = {t0, t1}, and a0 is correlated with t0, and a1 with t1.
BiasAmp_{MALS} = \frac{1}{2}\sum_{t=1}^{2}\sum_{a=1}^{2} \mathbb{1}\left(P(A_a = 1 \mid T_t = 1) > \frac{1}{2}\right)\left(P(\hat{A}_a = 1 \mid \hat{T}_t = 1) - P(A_a = 1 \mid T_t = 1)\right)

= \frac{1}{2}\Big[\, 1 \times \left(P(\hat{A}_0 = 1 \mid \hat{T}_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\right) + 0 \times \left(P(\hat{A}_1 = 1 \mid \hat{T}_0 = 1) - P(A_1 = 1 \mid T_0 = 1)\right)
\quad + 0 \times \left(P(\hat{A}_0 = 1 \mid \hat{T}_1 = 1) - P(A_0 = 1 \mid T_1 = 1)\right) + 1 \times \left(P(\hat{A}_1 = 1 \mid \hat{T}_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\right)\Big]

= \frac{1}{2}\Big[\left(P(\hat{A}_0 = 1 \mid \hat{T}_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\right) + \left(P(\hat{A}_1 = 1 \mid \hat{T}_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\right)\Big] \qquad (7)–(14)

For BiasAmp→, our equations simplify in the case of discrete predictions as follows:

BiasAmp_{A \rightarrow T} = \frac{1}{4}\Big[\left(P(\hat{T}_0 = 1 \mid A_0 = 1) - P(T_0 = 1 \mid A_0 = 1)\right) - \left(P(\hat{T}_0 = 1 \mid A_1 = 1) - P(T_0 = 1 \mid A_1 = 1)\right)
\quad - \left(P(\hat{T}_1 = 1 \mid A_0 = 1) - P(T_1 = 1 \mid A_0 = 1)\right) + \left(P(\hat{T}_1 = 1 \mid A_1 = 1) - P(T_1 = 1 \mid A_1 = 1)\right)\Big] \qquad (15)–(19)

BiasAmp_{T \rightarrow A} = \frac{1}{4}\Big[\left(P(\hat{A}_0 = 1 \mid T_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\right) - \left(P(\hat{A}_0 = 1 \mid T_1 = 1) - P(A_0 = 1 \mid T_1 = 1)\right)
\quad - \left(P(\hat{A}_1 = 1 \mid T_0 = 1) - P(A_1 = 1 \mid T_0 = 1)\right) + \left(P(\hat{A}_1 = 1 \mid T_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\right)\Big] \qquad (20)–(24)
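For completeness, here is a small sketch (our own illustration, not released code) that evaluates the three expressions above from 0/1 ground-truth and predicted labels in the two-task, two-attribute scenario of Fig. 1; in practice the reference P(A|T) and P(T|A) terms would come from the training set rather than the same split:

```python
import numpy as np

def p(event, given):
    """Empirical conditional probability P(event | given) from boolean arrays."""
    return event[given].mean()

def biasamp_two_by_two(T, A, T_hat, A_hat):
    """T, A, T_hat, A_hat: 0/1 arrays; a0 is correlated with t0 and a1 with t1."""
    T, A, T_hat, A_hat = map(np.asarray, (T, A, T_hat, A_hat))
    # BiasAmpMALS: only the positively correlated (t, a) pairs contribute.
    mals = 0.5 * ((p(A_hat == 0, T_hat == 0) - p(A == 0, T == 0))
                + (p(A_hat == 1, T_hat == 1) - p(A == 1, T == 1)))
    # BiasAmp A->T: task predictions conditioned on ground-truth attributes.
    a_to_t = 0.25 * ((p(T_hat == 0, A == 0) - p(T == 0, A == 0))
                   - (p(T_hat == 0, A == 1) - p(T == 0, A == 1))
                   - (p(T_hat == 1, A == 0) - p(T == 1, A == 0))
                   + (p(T_hat == 1, A == 1) - p(T == 1, A == 1)))
    # BiasAmp T->A: attribute predictions conditioned on ground-truth tasks.
    t_to_a = 0.25 * ((p(A_hat == 0, T == 0) - p(A == 0, T == 0))
                   - (p(A_hat == 0, T == 1) - p(A == 0, T == 1))
                   - (p(A_hat == 1, T == 0) - p(A == 1, T == 0))
                   + (p(A_hat == 1, T == 1) - p(A == 1, T == 1)))
    return mals, a_to_t, t_to_a
```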
A.3 DETAILS AND EXPERIMENT FROM LACK OF CONSISTENCY IN BIAS
For the models we trained in Sec. 5.2, we performed hyperparameter tuning on the validation set, and ended up using the following: ResNet18 had a learning rate of .0001, AlexNet of .0003, and VGG16 of .00014. All models were trained with stochastic gradient descent, a batch size of 64, and 10 epochs.
A.4 DETAILS ON MEASURING BIAS AMPLIFICATION IN FITBERT
Here we provide additional details behind the numbers presented in Tbl. 1 in Sec. 5.3.
As noted, and done, by Liang et al. (2020), a large and diverse corpus of sentences is needed to sample from the large variety of contexts. However, that is out of scope for this work, where we simply use 2 sentences: “[he/she/(they)] is a(n) [occupation]” or “[he/she/(they)] was a(n) [occupation]” to test.
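As an illustration of how these two templates can be scored, the sketch below uses the HuggingFace fill-mask pipeline as a stand-in for FitBERT; the model name, pronoun set, and article handling ("a" rather than "a(n)") are simplifications on our part:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")  # stand-in for FitBERT

def pronoun_probs(occupation, pronouns=("he", "she")):
    """Relative probability of each candidate pronoun across the two test templates."""
    templates = [f"[MASK] is a {occupation}.", f"[MASK] was a {occupation}."]
    scores = {p: 0.0 for p in pronouns}
    for sentence in templates:
        for pred in fill(sentence, targets=list(pronouns)):
            scores[pred["token_str"]] += pred["score"]
    total = sum(scores.values())
    return {p: s / total for p, s in scores.items()}

print(pronoun_probs("hairdresser"))  # e.g. {'he': ..., 'she': ...}
```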
When calculating the amount of bias amplification when the base rates are equal, we picked the direction of bias based on that provided by the WinoBias dataset. In practice, this can be thought of as setting the base correlation, P (A = a|T = t), for a men-biased job like “cook” to be .5 + ε for “he” and .5 − ε for “she” when there are two pronouns, and .33 + ε for “he” and .33 − ε for “she” and “they”, where in practice we used ε = 1e−7. This ensures that the indicator variable, y(t, a) from Eq. 5, is set in the direction of the gender bias, but the magnitudes of ∆(t, a) from Eq. 6 are not affected to a significant degree.
To generate a rough approximation of what training correlation rates could look like in this domain, we look to one of the datasets that BERT was trained on, the Wikipedia dataset. We do so by simply counting the co-occurrences of all the occupations along with gendered words such as “man”, “he”, “him”, etc. There are flaws with this approach because in a sentence like “She went to see the doctor.”, the pronoun is in fact not referring to the gender of the person with the occupation. However, we leave a more accurate measurement of this to future work, as our aim in showing these results is demonstrative, illustrating the manipulation of the correlation rate rather than rigorously measuring the training correlation rate.
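A rough sketch of the sentence-level co-occurrence counting described above (the gendered word lists and whitespace tokenization are our own simplifications):

```python
from collections import Counter

MEN_WORDS = {"he", "him", "his", "man", "men"}          # illustrative word lists
WOMEN_WORDS = {"she", "her", "hers", "woman", "women"}

def cooccurrence_rates(sentences, occupations):
    """Per-occupation rate of co-occurring with each gendered word group, counted per sentence."""
    counts = {occ: Counter() for occ in occupations}
    for sentence in sentences:
        tokens = set(sentence.lower().split())
        for occ in occupations:
            if occ in tokens:
                if tokens & MEN_WORDS:
                    counts[occ]["man"] += 1
                if tokens & WOMEN_WORDS:
                    counts[occ]["woman"] += 1
    return {occ: {g: c / max(1, sum(ctr.values())) for g, c in ctr.items()}
            for occ, ctr in counts.items()}
```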
We use 32 rather than 40 occupations from WinoBias (Zhao et al., 2018) because, when we went to the 2016 U.S. Labor Force Statistics data (Bureau of Labor Statistics, 2016) to collect the actual numbers of each gender and occupation (needed to calculate P (T = t|A = a), since WinoBias only provides P (A = a|T = t)), we found 8 occupations to be too ambiguous to determine the actual numbers. For example, for “attendant”, there were many different attendant jobs listed, such as “flight attendants” and “parking lot attendant”, so we opted to drop these jobs from the list of 40. The 8 from the original WinoBias dataset that we ignored are: supervisor, manager, mechanician, CEO, teacher, assistant, clerk, and attendant. The first four are biased towards men, and the latter four towards women, so that we did not skew the distribution of jobs biased towards each gender. | 1. What are the concerns regarding the novelty and contribution of the paper?
2. How does the reviewer assess the technical rigor of the paper, particularly in terms of theoretical analysis?
3. What are the suggestions for improving the paper's discussion on error bars and experimental sections?
4. Why does the reviewer think the paper should include analysis of benchmark fairness datasets?
5. What are the specific inaccuracies or lacking evidence in the paper's claims, according to the reviewer?
6. How does the reviewer evaluate the overall quality and impact of the paper? | Review | Review
The paper talks about an interesting aspect in algorithmic fairness meaning bias amplification; however, there are couple of concerns that I will raise below:
The paper mainly addresses shortcomings of a previously mentioned simple concept, bias amplification, brought up in Zhao et al 2017. In this regard, I am not sure about the extent of the novelty and contribution of this paper, along with its technical rigor. One suggestion would be to add some theoretical analysis, maybe in the Validating the metric section.
Although Zhao et al's paper is a famous paper in the NLP domain, not much attention has been given to it in pure algorithmic fairness or to using it as a measure along with other well known measures, such as statistical parity or EO. This makes me even more concerned about an extension to this work.
The discussion on error bars and their need sounds like an intuitive concept which authors spent a section on it. Other sections could have been made richer experimentally instead of this section maybe.
Analysis of benchmark fairness datasets remains unexplored. Mostly vision datasets are explored, but authors could have done some studies on famous fairness benchmark datasets as well. This can introduce bias amplification concept to the fairness community and make it more comparable to other well known fairness measures and more acceptable to the community.
There are some claims that I do not find accurate in the paper as follows: 5.1. "By definition, this problem stems from the algorithms, and can not be attributed to the dataset" -> we can not 100% say this! The bias in the first place is coming from data itself in this case! 5.2. "A trait of bias amplification is that it is not at odds with accuracy, unlike many other metrics ..." -> where is the proof for this? need strong evidence for this claim.
Overall, this paper addresses shortcomings in a previous work based on a simple bias amplification metric. The lack of technical rigor and theoretical guaranties for some claims makes this paper not a strong candidate. Inclusion of some fairness benchmark datasets can make this paper more strong. |
ICLR | Title
A Technical and Normative Investigation of Social Bias Amplification
Abstract
The conversation around the fairness of machine learning models is growing and evolving. In this work, we focus on the issue of bias amplification: the tendency of models trained from data containing social biases to further amplify these biases. This problem is brought about by the algorithm, on top of the level of bias already present in the data. We make two main contributions regarding its measurement. First, building off of Zhao et al. (2017), we introduce and analyze a new, decoupled metric for measuring bias amplification, BiasAmp→, which possesses a number of attractive properties, including the ability to pinpoint the cause of bias amplification. Second, we thoroughly analyze and discuss the normative implications of this metric. We provide suggestions about its measurement by cautioning against predicting sensitive attributes, encouraging the use of confidence intervals due to fluctuations in the fairness of models across runs, and discussing what bias amplification means in the context of domains where labels either don’t exist at test time or correspond to uncertain future events. Throughout this paper, we work to provide a deeply interrogative look at the technical measurement of bias amplification, guided by our normative ideas of what we want it to encompass.
1 INTRODUCTION
The machine learning community is becoming increasingly cognizant of problems surrounding fairness and bias, and correspondingly a plethora of new algorithms and metrics are being proposed (see e.g., Mehrabi et al. (2019) for a review). The gatekeepers checking the systems to be deployed often take the form of fairness evaluation metrics, and it is vital that these be deeply investigated both technically and normatively. In this paper, we endeavor to do this for bias amplification. Bias amplification happens when a model exacerbates biases from the training data at test time. It is the result of the algorithm (Foulds et al., 2018), and unlike other forms of bias, cannot be solely attributed to the dataset.
To this end, we propose a new way of measuring bias amplification, BiasAmp→ 1, that builds off a prior metric from Men Also Like Shopping (Zhao et al., 2017), that we will call BiasAmpMALS. Our metric’s technical composition aligns with the real-world qualities we want it to encompass, addressing a number of the previous metric’s shortcomings by being able to: 1) generalize beyond binary attributes, 2) take into account the base rates that people of each attribute appear, and 3) disentangle the directions of amplification. Concretely, consider a visual dataset (Fig. 1) where each image has a label for the task, T , which is painting or not painting, and further is associated with a protected attribute, A, which is woman or man. If the gender of the person biases the prediction of the task, we consider this A→ T bias amplification; if the reverse happens, then T → A. In our normative discussion, we discuss a few topics. We consider whether predicting protected attributes is necessary in the first place; by not doing so, we can trivially remove T → A amplification. We also encourage the use of confidence intervals when using our metric because BiasAmp→, along with other fairness metrics, suffers from the Rashomon Effect (Breiman, 2001), or multiplicity of good models. In deep neural networks, random seeds have relatively little impact on accuracy; however, that is not the case for fairness, which is more brittle to randomness.
1The arrow in BiasAmp→ is meant to signify the direction that bias amplification is flowing, and not intended to be a claim about causality.
Notably, a trait of bias amplification is that it is not at odds with accuracy, unlike many other fairness metrics, because the goal of not amplifying biases and matching task-attribute correlations is aligned with that of accurate predictions. For example, imagine a dataset where the positive outcome is associated at a higher rate with group A than with group B. A classifier that achieves 100% accuracy at predicting the positive outcome is not amplifying bias; however, according to metrics like demographic parity, this perfect classifier is still perpetuating bias because it is predicting the positive label at different rates for both groups. While matching training correlations is desired in object detection where systems should perfectly predict the labels, we will explore the nuances of what this means in situations where the validity of the labels, and thus task-attribute correlations themselves, are up for debate. For example, in the risk prediction task which assesses someone’s likelihood of recidivism, the label represents whether someone with a set of input features ended up recidivating, but is not a steadfast indicator of what another person with the same input features will do. Here, we would not want to replicate the task-attribute correlations at test time, and it is important to keep this in mind when deciding what fairness metrics to apply. The notion of amplification also allows us to encapsulate the idea that systemic harms and biases can be more harmful than errors made without such a history (Bearman et al., 2009); for example, in images overclassifying women as cooking carries more of a negative connotation than overclassifying men as cooking.2 Distinguishing between which errors are more harmful than others is a pattern that can often be lifted from the training data.
To ground our work, we first distinguish what bias amplification captures that standard fairness metrics cannot, and then distinguish BiasAmp→ from BiasAmpMALS. Our key contributions are: 1) proposing a new way to measure bias amplification, addressing multiple shortcomings of prior work and allowing us to better diagnose where a model goes wrong, and 2) providing a technical analysis and normative discussion around the use of this measure in diverse settings, encouraging thoughtfulness with each application.
2 RELATED WORK
Fairness Measurements. Fairness is nebulous and context-dependent, and approaches to quantifying it (Verma & Rubin, 2018; Buolamwini & Gebru, 2018) include equalized odds (Hardt et al., 2016), equal opportunity (Hardt et al., 2016), demographic parity (Dwork et al., 2012; Kusner et al., 2017), fairness through awareness (Dwork et al., 2012; Kusner et al., 2017), fairness through unawareness (Grgic-Hlaca et al., 2016; Kusner et al., 2017), and treatment equality (Berk et al., 2017). We examine bias amplification, which is a type of group fairness where correlations are amplified.
Bias Amplification. Bias amplification has been measured by looking at binary classifications (Leino et al., 2019), GANs (Jain et al., 2020; Choi et al., 2020), and correlations (Zhao et al., 2017). Wang et al. (2019) measures this using dataset leakage and model leakage. The difference between these values is the level of bias amplification, but this is not a fair comparison because the
2We use the terms man and woman to refer to binarized socially-perceived gender expression, recognizing these labels are not inclusive, and in vision datasets are often assigned by annotators rather than self-disclosed.
attribute classifier gets discrete labels for the former but continuous model outputs for the latter. Jia et al. (2020) looks at output distributions like we do, but with a different formulation.
The Word Embedding Association Test (WEAT) (Caliskan et al., 2017) measures bias amplification in de-contextualized word embeddings, looking at correlations but not causations (Bolukbasi et al., 2016). However, with newer models like BERT and ELMo that have contextualized embeddings, WEAT does not work (May et al., 2019), so new techniques have been proposed incorporating context (Lu et al., 2019; Kuang & Davison, 2016). We use these models to measure the directional aspect of these amplifications, as well as to situate them in the broader world of bias amplification.
Directionality. Directionality of amplification has been observed in computer vision (Stock & Cisse, 2018) and language (Qian et al., 2019). It has also been studied with causality (Bhattacharya & Vogt, 2007; Wooldridge, 2016; Pearl, 2010; Middleton et al., 2016; Steiner & Yongnam, 2016). We take a deeper and more empirical approach.
Predictive Multiplicity. The Rashomon Effect (Breiman, 2001), or multiplicity of good models, has been studied in various contexts. The variables investigated that differ across good models include explanations (Hancox-Li, 2020), individual treatments (Marx et al., 2020; Pawelczyk et al., 2020), and variable importance (Fisher et al., 2019; Dong & Rudin, 2019). We build on these discoveries and investigate how fairness measurements also differ between equally “good” models.
3 EXISTING FAIRNESS METRICS
In this section we present existing fairness metrics and show how bias amplification can distinguish errors resulting from under- and overclassification in a way that others cannot, followed by a discussion of the shortcomings of BiasAmpMALS.
3.1 OVERVIEW OF EXISTING FAIRNESS METRICS
We begin with a review of existing fairness metrics in a concrete classification setting. We consider again the example from Fig. 1, where on this dataset women (a0) are correlated with painting (t0), and men (a1) with not painting (t1), such that there are N images each of (a0, t0) and (a1, t1) but only N/2 images of (a0, t1) and (a1, t0). A classifier trained to recognize painting on this data is likely to learn this association and over-predict painting on images of women and under-predict painting on images of men; however, algorithmic interventions may counteract this effect and in fact result in the opposite behavior.
Fig. 2 shows the behavior of fairness metrics under varying amounts of learned amplification of the correlation. The four fairness metrics are: False Positive Rate (FPR) and True Positive Rate (TPR) difference: the difference in false positive (true positive) rate of predicting the label t0 on images of a1 versus on images of a0 (Chouldechova, 2016; Hardt et al., 2016), accuracy difference: difference between the overall task prediction accuracy on images of a1 versus on images of a0 (Berk et al., 2017), and mean accuracy across subgroups: mean task prediction accuracy across the four image subgroups ((a0, t0), (a1, t0), (a0, t1), and (a1, t1)) (Buolamwini & Gebru, 2018). However, these metrics are not designed to account for the training correlations, and are unable to distinguish between cases of increased or decreased learned correlations, as seen in Fig. 2.
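For reference, a minimal sketch (our own illustration; the binary encoding of tasks and attributes is an assumption) of how these four quantities can be computed from ground-truth task labels, task predictions, and attribute annotations:

```python
import numpy as np

def group_fairness_metrics(y_true, y_pred, group):
    """y_true, y_pred: 1 if the task label/prediction is t0, else 0; group: 0 for a0, 1 for a1."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    fpr, tpr, acc = {}, {}, {}
    for g in (0, 1):
        m = group == g
        fpr[g] = (y_pred[m & (y_true == 0)] == 1).mean()   # predicting t0 when the true label is not t0
        tpr[g] = (y_pred[m & (y_true == 1)] == 1).mean()
        acc[g] = (y_pred[m] == y_true[m]).mean()
    # Mean accuracy over the four (attribute, task) subgroups.
    subgroup_acc = [(y_pred[(group == g) & (y_true == t)] == t).mean()
                    for g in (0, 1) for t in (0, 1)]
    return {"fpr_difference": fpr[1] - fpr[0],
            "tpr_difference": tpr[1] - tpr[0],
            "accuracy_difference": acc[1] - acc[0],
            "mean_subgroup_accuracy": float(np.mean(subgroup_acc))}
```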
Zhao et al. (2017) introduced an alternative to these that explicitly captures the notion of bias amplification. Concretely, they consider P (A = a|T = t) of the training data as the fraction of times a protected attribute a ∈ A appears on images corresponding to task t ∈ T . They then compare this with the test-time predictions made by the model, P (Â = a|T̂ = t), or the number of times attribute a is predicted on images where the task is predicted as t, which allows them to measure bias amplification in the absence of any additional annotations on the hold-out set. Note that in this formulation they are assuming that the model is making predictions for both the task and the attribute. The full bias amplification metric (reformulated in our terms), is computed as BiasAmpMALS =
\frac{1}{|T|}\sum_{t=1}^{|T|}\sum_{a=1}^{|A|} \underbrace{\mathbb{1}\left(P(A_a = 1 \mid T_t = 1) > \frac{1}{|A|}\right)}_{y(t,a)} \underbrace{\left(P(\hat{A}_a = 1 \mid \hat{T}_t = 1) - P(A_a = 1 \mid T_t = 1)\right)}_{\Delta(t,a)} \qquad (1)
Fig. 2 empirically demonstrates that this metric is able to capture the level of increasing bias amplification. (For consistency in comparison with prior metrics, we assume the model always correctly predicts the protected attribute A.) However, as we discuss in the next section, there are some properties of bias amplification that this metric is not able to capture: for example, it does not distinguish between errors in predicting the protected attribute versus errors in predicting the task. Thus we introduce a new metric (last graph of Fig. 2) which maintains the desirable properties from Zhao et al. (2017) while including a number of innovations.
3.2 SHORTCOMINGS OF BIASAMPMALS
Despite the advantages we just documented about BiasAmpMALS (Eqn. 1) and its ability to distinguish under- and overclassifications of training correlations, this metric also suffers from a number of shortcomings. To ground our discussion, we will work directly with the model outputs released by Zhao et al. (2017) from their Conditional Random Field (CRF) model on COCO (Lin et al., 2014), which has predictions for gender and objects detected for each image.
3.2.1 NON-BINARY ATTRIBUTES
The first shortcoming is that the metric assumes that the protected attributes are binary, limiting its use: the indicator variable y(t, a) implicitly chooses only one of the attributes a ∈ A to be associated with every task t. Consider a task t0 ∈ T such that a0 ∈ A is associated with it, but none of the other ai ∈ A are, where i ≠ 0. In this scenario, diff(t0, ai) is only considered when there is one other ai such that ai = ¬a, since diff(t, a) = −diff(t,¬a). A simple addition of −diff for all ai’s when y is 0 ensures that when there are more than two groups, their bias amplification is also counted.
3.2.2 BASE RATES
The second shortcoming of BiasAmpMALS is the fact that the metric does not take into account the base rates of each attribute. Concretely, when determining in y(t, a) of Eqn. 1 whether the attribute a is correlated with the task t, P (A = a|T = t) is compared to 1/|A|. However, this assumes that all a’s within A are evenly distributed, which may not be the case. For example, in COCO there are about 2.5x as many men as women, so it would appear that most objects positively correlate with men simply by nature of there being an overrepresentation of men. Consider the object oven; BiasAmpMALS calculates P (A = man|T = oven) = 0.56 > 1/2 and thus considers this object to be correlated with men rather than women. However, computing P (A = man, T = oven) = 0.0103 < 0.0129 = P (A = man)P (T = oven) reveals that men are in fact not correlated with oven, and the seeming overrepresentation comes from the fact that men are overrepresented in the dataset more generally. Not surprisingly, the model trained on this data learns to associate women with ovens and underpredicts men with ovens at test time, i.e., P (Â = man|T̂ = oven)− P (A = man|T = oven) = −0.10. BiasAmpMALS erroneously counts this as inverse bias amplification.
3.2.3 ENTANGLING DIRECTIONS
Another shortcoming we observe is the inability to distinguish between different types of bias amplification. Zhao et al. (2017) discovers that “Technology oriented categories initially biased toward men such as keyboard ... have each increased their bias toward males by over 0.100.” Concretely, from Eqn. 1 P (A = man|T = keyboard) = .70 and P (Â = man|T̂ = keyboard) = .83, demonstrating an amplification of bias. However, the direction or cause of bias amplification remains unclear: is the presence of man in the image increasing the probability of predicting a keyboard, or vice versa? Looking more closely at the model’s disentangled predictions, we see that:
P (T̂ = keyboard | A = man) = 0.0020 < 0.0032 = P (T = keyboard | A = man)   (2)
P (Â = man | T = keyboard) = 0.78 > 0.70 = P (A = man | T = keyboard)   (3)
indicating that keyboards are under-predicted on images with men yet men are over-predicted on images with keyboards. Thus the root cause of this amplification appears to be in the gender predictor rather than the object detector. Such disentangement allows us to properly focus algorithmic intervention efforts. This also highlights the need to consider the ground truth labels on the hold-out set when measuring bias amplification in addition to the predictions (since when considering only the predictions, it is impossible to decouple the different sources of bias).
3.3 THRESHOLD
We also fully replicate the original experiment from Zhao et al. (2017) using BiasAmpMALS on their model predictions and measure .040. However, we observe that “man” is being predicted at a higher rate (75.6%) than is actually present (71.2%). With this insight, we tune the decision threshold on the validation set such that the gender predictor is well-calibrated to be predicting the same percentage of images to have men as the dataset actually has. When we calculate BiasAmpMALS on these newly thresholded predictions for the test set, we see bias amplification drop from 0.040 to 0.001 just as a result of this threshold change, outperforming even the solution proposed in Zhao et al. (2017) of corpus-level constraints, which achieved a drop to only 0.021. Fairness can be quite sensitive to the threshold chosen (Chen & Wu, 2020), so careful selection should be done in picking the threshold, rather than using the default of .5. In Fig. 3 we show how the amount of bias amplification, as measured by BiasAmpMALS and BiasAmpT→A, changes as we vary the threshold, i.e., proportion of people classified to be a man. We can see that when the threshold is chosen to be the one wellcalibrated on the validation set rather than the default threshold, bias amplification is measured to be closer to zero for both metrics. From here on out when a threshold is needed, we will pick it to be well-calibrated on the validation set. Although we do not take this approach, one could also imagine integrating bias amplification across proportion in order to have a threshold-agnostic measure of bias amplification, similar to what is proposed by Chen & Wu (2020). We do not do this in our experiments because at deployment time, it is often the case that discrete predictions are required.
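A small sketch of the calibration step described above (our own illustration; variable names are hypothetical): choose the threshold on the gender scores so that the fraction of validation images predicted as “man” matches the fraction labeled “man”.

```python
import numpy as np

def calibrated_threshold(val_scores, target_rate):
    """Threshold on P(man) scores so the predicted rate of 'man' on the
    validation set matches target_rate (the rate observed in the data)."""
    # Predicting 'man' whenever score >= threshold yields a rate of ~target_rate
    # when the threshold is the (1 - target_rate) quantile of the scores.
    return np.quantile(np.asarray(val_scores), 1.0 - target_rate)

# e.g. threshold = calibrated_threshold(val_scores, target_rate=0.712)
#      test_preds = (test_scores >= threshold).astype(int)
```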
4 BIASAMP→
Now we present our metric, BiasAmp→, which retains the desirable properties of BiasAmpMALS, while addressing the shortcomings noted in the previous section. To account for the need to disentangle the two possible directions of bias amplification (Sec. 3.2.3) the metric consists of two values, BiasAmpA→T corresponding to the amplification of bias resulting from the protected attribute influencing the task prediction, and BiasAmpT→A, corresponding to the amplification of bias resulting from the task influencing the protected attribute prediction. Concretely, the metric is defined as:
BiasAmp_{\rightarrow} = \frac{1}{|A||T|} \sum_{t \in T,\, a \in A} y(t,a)\,\Delta(t,a) + (1 - y(t,a))\left(-\Delta(t,a)\right) \qquad (4)

where \quad y(t,a) = \mathbb{1}\left[P(T = t, A = a) > P(T = t)\,P(A = a)\right] \qquad (5)

\Delta(t,a) = \begin{cases} P(\hat{T} = t \mid A = a) - P(T = t \mid A = a) & \text{if measuring } A \rightarrow T \\ P(\hat{A} = a \mid T = t) - P(A = a \mid T = t) & \text{if measuring } T \rightarrow A \end{cases} \qquad (6)
Eqn. 4 generalizes BiasAmpMALS to measure the amplification of both positive and negative correlations between task t and attribute a, depending on y(t, a), when the attributes are non-binary,
as discussed in Sec. 3.2.1. Eqn. 5 identifies the direction of correlation of attribute a with task t, properly accounting for base rates as described in Sec. 3.2.2. Finally, Eqn. 6 decouples the two possible directions of bias amplification as in Sec. 3.2.3. Since values may be negative, reporting the aggregated bias amplification value could obscure task-attribute pairs that exhibit strong bias amplification; thus, disaggregated results per pair can be returned for a more detailed diagnosis.
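To make the computation concrete, here is a sketch of BiasAmp→ for discrete predictions (our own illustration rather than a released implementation; it assumes tasks and attributes are given as binary indicator matrices, and follows Appendix A.1 by computing y(t, a) on training statistics and ∆(t, a) on the test set):

```python
import numpy as np

def biasamp(direction, train_T, train_A, test_T, test_A, test_T_hat, test_A_hat):
    """BiasAmp A->T or T->A (Eqns 4-6) for discrete predictions.
    All inputs are (N, |T|) / (N, |A|) binary indicator matrices."""
    num_t, num_a = train_T.shape[1], train_A.shape[1]
    total = 0.0
    for t in range(num_t):
        for a in range(num_a):
            # y(t, a): is (t, a) positively correlated in the training data? (Eqn 5)
            joint = (train_T[:, t] * train_A[:, a]).mean()
            indep = train_T[:, t].mean() * train_A[:, a].mean()
            y = 1.0 if joint > indep else 0.0
            # Delta(t, a) on the held-out set (Eqn 6).
            if direction == "A->T":
                mask = test_A[:, a] == 1
                delta = test_T_hat[mask, t].mean() - test_T[mask, t].mean()
            else:  # "T->A"
                mask = test_T[:, t] == 1
                delta = test_A_hat[mask, a].mean() - test_A[mask, a].mean()
            total += y * delta + (1.0 - y) * (-delta)
    return total / (num_t * num_a)
```

Disaggregated (t, a) contributions can be returned instead of the sum when a per-pair diagnosis is needed.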
Although in much of the examples we have and will look at, A = {woman,man}, this formulation allows for any attribute set to be defined, including intersectional identities. This is achieved by having A encompass the cross-product of possible attributes, for example A = {Black woman,Black man,white woman,white man}. We introduce a scenario for validating the decoupling aspect of our metric, that simultaneously serves as inspiration for an intervention approach to mitigating bias amplification. We use a baseline amplification removal idea of applying segmentation masks (noisy or full) over the people in an image to mitigate bias stemming from human attributes (Wang et al., 2019). We train on the COCO classification task a VGG16 (Simonyan & Zisserman, 2014) model pretrained on ImageNet (Russakovsky et al., 2015) to predict objects and gender, with a Binary Cross Entropy Loss over all outputs, and measure BiasAmpT→A and BiasAmpA→T ; we report 95% confidence intervals for 5 runs of each scenario. In Fig. 4 we see, as expected, that as less of the person is visible, A→T decreases because there are less human attribute visual cues to bias the task prediction. On the other hand, T→A increases because the model must lean into task biases to predict the person’s attribute. However, we can also see from the overlapping confidence intervals that this technique of bias amplification mitigation does not appear to be particularly robust; we continue a discussion of this phenomenon in Sec. 5.2. Further mitigation techniques are outside of our scope, but we look to works like Singh et al. (2020); Wang et al. (2019); Agarwal et al. (2020).
5 ANALYSIS AND DISCUSSION
We now discuss some of the normative issues surrounding bias amplification, starting in Sec. 5.1 with the existence of T→A bias amplification, which implies the prediction of sensitive attributes; in Sec. 5.2 about the need for confidence intervals to make robust conclusions; and in Sec. 5.3 about scenarios in which the original formulation of bias amplification as a desire to match base correlations may not be what is actually wanted.
5.1 T→ A BIAS AMPLIFICATION
If we think more deeply about these bias amplifications, we might come to a normative conclusion that T → A, which measures sensitive attribute predictions conditioned on the tasks, should not exist in the first place. There are very few situations in which predicting sensitive attributes makes sense (Scheuerman et al., 2020; Larson, 2017), so we should carefully consider if this is strictly necessary for target applications. For the image domains discussed, by simply removing the notion of predicting gender, we trivially remove all T → A bias amplification. In a similar vein, there has been great work done on reducing gender bias in image captions (Hendricks et al., 2018; Tang et al., 2020), but it is often focused on targeting T → A amplification rather than A → T . When
disentangling the directions of bias, we find that the Equalizer model (Hendricks et al., 2018), which was trained with the intention of increasing the quality of gender-specific words in captions, inadvertently increases A → T bias amplification for certain tasks. In Fig. 5 we see examples where the content of the Equalizer’s caption exhibits bias coming from the person’s attribute. Even though the Equalizer model reduces T → A bias amplification in these images, it inadvertently increases A→ T . It is important to disentangle the two directions of bias and notice that while one direction is becoming more fair, another is actually becoming more biased. Although this may not always be the case, depending on the downstream application, perhaps this is a setting in which we could consider simply replacing all instances of gendered words like “man” and “woman” in the captions with “person” to trivially eliminate T → A, and focus on A → T bias amplification. Specifically when gender is the sensitive attribute, Keyes (2018) thoroughly explains how we should carefully think about why we might implement Automatic Gender Recognition (AGR), and avoid doing so.
On the other hand, sensitive attribute labels, ideally from self-disclosure, can be very useful. For example, these labels are necessary to measureA→ T amplification, which is important to discover, as we do not want our prediction task to be biased for or against people with certain attributes.
5.2 LACK OF CONSISTENCY IN BIAS MEASUREMENT
Evaluation metrics, ours’ included, are specific to each model on each dataset. Under common loss functions such as Cross Entropy Loss, some evaluation metrics, like average precision, are not very sensitive to random seed. However, we noticed that bias amplification, along with other fairness metrics like FPR difference, often fluctuates greatly across runs. Because the loss functions that machine learning practitioners tend to default to using are proxies for accuracy, it makes sense that the various local minima, while equal in accuracy, are not necessarily equal in terms of other measurements. The phenomena of differences between equally predictive models has been termed the Rashomon Effect (Breiman, 2001), or predictive multiplicity (Marx et al., 2020).
Thus, like previous work (Fisher et al., 2019), we urge transparency, and advocate for the inclusion of confidence intervals. To illustrate the need for this, we look at the facial image domain of CelebA (Liu et al., 2015), defining the two tasks of classifying “big nose” and “young” as our T , and treating the gender labels as our attribute, A. Note that we are not going to classify gender, for reasons raised in Sec. 5.1, so we only measure A → T amplification. For these tasks, women are more correlated with no big nose and being young, and men with big nose and not being young. We examine two different scenarios, one where our independent variable is model architecture, and another where it is the ratio between number of images of the majority groups (e.g., young women and not young men) and minority groups (e.g., not young women and young men). By looking at the confidence intervals, we can determine which condition allows us to draw reliable conclusions about the impact of the variable on bias amplification.
For model architecture, we train 3 models pretrained on ImageNet (Russakovsky et al., 2015) across 5 runs: ResNet18 (He et al., 2016), AlexNet (Krizhevsky et al., 2012), and VGG16 (Simonyan & Zisserman, 2014). Training details are in Appendix A.3. In Fig. 6 we see from the confidence intervals that while model architecture does not produce differences in bias amplification large enough to conclude anything about the relative fairness of these models, the across-ratio differences are significant enough to draw conclusions about the impact of this ratio on bias amplification. We encourage researchers to include confidence intervals so that findings are more robust to random fluctuations.
5.3 CONSIDERATIONS FOR UNCERTAIN PREDICTION PROBLEMS
A property of bias amplification is that it does not conflict with having perfect accuracy. However, in turn, such a model with perfect accuracy would exactly replicate the correlations present in the training data. In this section we will discuss two cases that challenge this desire for matching training correlations: in the first, there will be no ground-truth labels from which to lift these correlations, and in the second, our goal is actually to not match the training correlations.
No Ground Truth. When we don’t have labels to derive training correlation rates from, bias amplification becomes harder to measure. Consider the fill-in-the-blank NLP task, where there is no ground-truth for how to fill in a sentence. Given “The [blank] went on a walk”, a variety of words could be equally suitable. Therefore, to measure bias amplification in this setting, we can manually set the base correlations, e.g., P (T = t|A = a), P (A = a|T = t). To see the effect that adjusting base correlations has, we test the bias amplification between occupations and gender pronouns, conditioning on the pronoun and filling in the occupation and vice versa. In Table 1, we report our measured bias amplification results on the FitBERT (Fill in the blanks BERT) (Havens & Stal, 2019; Devlin et al., 2019) model using various sources as our base correlation of bias from which amplification is measured. The same outputs from the model are used for each set of pronouns, and the independent variable we manipulate is the source of: 1) equality amongst the pronouns (using 2 and 3 pronouns), 2) co-occurrence counts from English Wikipedia (one of the datasets BERT was trained on), and 3) WinoBias (Zhao et al., 2018) with additional information supplemented from the 2016 U.S. Labor Force Statistics data. Details are in Appendix A.4. It is interesting to note that relative to U.S. Labor Force data on these particular occupations, FitBERT actually exhibits no bias amplification. For the occupation of hairdresser, the Labor statistics are biased at 92% women while FitBERT is at 80%, reflecting in fact a reduction in bias amplification. This demonstrates the importance of setting appropriate base correlations, because picking one that already exhibits strong amounts of bias will only flag models that further amplify this. In the next section we discuss another manifestation of this phenomenon, where the training correlation itself would be misleading to compare to, because of the strong amount of bias it contains.
Future Outcome Prediction. Next, we examine the risk prediction setting, where matching the base correlations may not be our desired goal. The labels here do not represent an objective ground truth because they: 1) suffer from problems like historical and selection bias (Suresh & Guttag, 2019; Olteanu et al., 2019; Green, 2020), and 2) will be used for prediction of a future event for which no one knows the answer. We will put aside the significant problems stemming from the first point for this current discussion, and focus on the latter. In this section, when using the word “predict” we will revert from the machine learning meaning and adopt the colloquial sense of the word, as in the forecast of a future event. What we had called object prediction, we will now call object labeling.
Consider bias amplification in the context of a domain like risk prediction relative to previous domains looked at, such as object detection. The difference is not in tabular versus vision, but rather in prediction versus labeling. The notion of “ground-truth” doesn’t quite exist the way we might think about it, because given the input features that define a particular person, one could imagine an individual with these features who does recidivate, and one who does not (Barocas et al., 2019). The training label is just how someone with these input features once acted, but is not necessarily a rigid indicator of how someone else with these features will act. On the other hand, in object labeling the labels are very much the ground-truth, and thus bias amplification is a reasonable metric to gauge fairness by. However, we do not know this for risk prediction, and thus, matching the training correlations should not be our intended goal (Wick et al., 2019). In Fig. 7 we show the metrics
as a function of the decision threshold; from the continued existence of a difference in FPRs between the two groups, we can see that there is no threshold that could be picked at which this disparity does not exist (except at the trivial cases of a 0 or 10 threshold), even though there do exist thresholds at which bias is not being amplified.
For each application, different metrics of fairness are more or less applicable, and BiasAmp→ is no different. It is crucial that we think carefully when deciding how to evaluate a model. In previous applications we imagined the direction of systemic bias to be captured by the training data and thus we lifted base correlations of bias from there. However, one could imagine a manual intervention similar to the one done for NLP on other tasks like risk prediction, where a domain expert verifies the direction of bias determined by the training set, and even sets the base correlations.
6 CONCLUSION
In this paper, we take a deep dive into the measure of bias amplification. We introduce a new metric, BiasAmp→, whose directional components allow model diagnosis to yield more actionable insights. Additionally, we discuss normative considerations, such as thinking carefully about why we might be performing sensitive attribute prediction, incorporating confidence intervals as the norm when reporting fairness metrics, and exercising care when determining which fairness metrics are applicable and what assumptions they encode.
A APPENDIX
A.1 ADDITIONAL METRIC DETAILS
We provide additional details here about BiasAmp→, as defined in Sec. 4.
In practice the indicator variable, y(t, a), is computed over the statistics of the training set, whereas everything else is computed over the test set. The reason behind this is that the direction of bias is determined by the existing biases in the training set. Additionally, when integrating across all thresholds, we sparsely sample from the outputted probabilities to compute an approximation.
Comparisons of the values outputted by BiasAmp→ should only be done relatively: within one direction at a time, either A → T or T → A, and on one dataset. In particular, comparing A → T to T → A values directly is not a signal as to which direction of amplification is stronger.
A.2 WALKING THROUGH THE EQUATIONS FOR FIGURE 1 SCENARIO
In this section we concretely write out the equations for BiasAmpMALS and BiasAmp→ for the scenario shown in Fig. 1, to better clarify what each metric captures. As a reminder, in this scenario A = {a0, a1} and T = {t0, t1}, and a0 is correlated with t0, and a1 with t1.
BiasAmp_{MALS} = \frac{1}{2}\sum_{t=1}^{2}\sum_{a=1}^{2} \mathbb{1}\left(P(A_a = 1 \mid T_t = 1) > \frac{1}{2}\right)\left(P(\hat{A}_a = 1 \mid \hat{T}_t = 1) - P(A_a = 1 \mid T_t = 1)\right)

= \frac{1}{2}\Big[\, 1 \times \left(P(\hat{A}_0 = 1 \mid \hat{T}_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\right) + 0 \times \left(P(\hat{A}_1 = 1 \mid \hat{T}_0 = 1) - P(A_1 = 1 \mid T_0 = 1)\right)
\quad + 0 \times \left(P(\hat{A}_0 = 1 \mid \hat{T}_1 = 1) - P(A_0 = 1 \mid T_1 = 1)\right) + 1 \times \left(P(\hat{A}_1 = 1 \mid \hat{T}_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\right)\Big]

= \frac{1}{2}\Big[\left(P(\hat{A}_0 = 1 \mid \hat{T}_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\right) + \left(P(\hat{A}_1 = 1 \mid \hat{T}_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\right)\Big] \qquad (7)–(14)

For BiasAmp→, our equations simplify in the case of discrete predictions as follows:

BiasAmp_{A \rightarrow T} = \frac{1}{4}\Big[\left(P(\hat{T}_0 = 1 \mid A_0 = 1) - P(T_0 = 1 \mid A_0 = 1)\right) - \left(P(\hat{T}_0 = 1 \mid A_1 = 1) - P(T_0 = 1 \mid A_1 = 1)\right)
\quad - \left(P(\hat{T}_1 = 1 \mid A_0 = 1) - P(T_1 = 1 \mid A_0 = 1)\right) + \left(P(\hat{T}_1 = 1 \mid A_1 = 1) - P(T_1 = 1 \mid A_1 = 1)\right)\Big] \qquad (15)–(19)

BiasAmp_{T \rightarrow A} = \frac{1}{4}\Big[\left(P(\hat{A}_0 = 1 \mid T_0 = 1) - P(A_0 = 1 \mid T_0 = 1)\right) - \left(P(\hat{A}_0 = 1 \mid T_1 = 1) - P(A_0 = 1 \mid T_1 = 1)\right)
\quad - \left(P(\hat{A}_1 = 1 \mid T_0 = 1) - P(A_1 = 1 \mid T_0 = 1)\right) + \left(P(\hat{A}_1 = 1 \mid T_1 = 1) - P(A_1 = 1 \mid T_1 = 1)\right)\Big] \qquad (20)–(24)
A.3 DETAILS AND EXPERIMENT FROM LACK OF CONSISTENCY IN BIAS
For the models we trained in Sec. 5.2, we performed hyperparameter tuning on the validation set, and ended up using the following: ResNet18 had a learning rate of .0001, AlexNet of .0003, and VGG16 of .00014. All models were trained with stochastic gradient descent, a batch size of 64, and 10 epochs.
A.4 DETAILS ON MEASURING BIAS AMPLIFICATION IN FITBERT
Here we provide additional details behind the numbers presented in Tbl. 1 in Sec. 5.3.
As noted, and done, by Liang et al. (2020), a large and diverse corpus of sentences is needed to sample from the large variety of contexts. However, that is out of scope for this work, where we simply use 2 sentences: “[he/she/(they)] is a(n) [occupation]” or “[he/she/(they)] was a(n) [occupation]” to test.
When calculating the amount of bias amplification when the base rates are equal, we picked the direction of bias based on that provided by the WinoBias dataset. In practice, this can be thought of as setting the base correlation, P (A = a|T = t), for a men-biased job like “cook” to be .5 + ε for “he” and .5 − ε for “she” when there are two pronouns, and .33 + ε for “he” and .33 − ε for “she” and “they”, where in practice we used ε = 1e−7. This ensures that the indicator variable, y(t, a) from Eq. 5, is set in the direction of the gender bias, but the magnitudes of ∆(t, a) from Eq. 6 are not affected to a significant degree.
To generate a rough approximation of what training correlation rates could look like in this domain, we look to one of the datasets that BERT was trained on, the Wikipedia dataset. We do so by simply counting the co-occurrences of all the occupations along with gendered words such as “man”, “he”, “him”, etc. There are flaws with this approach because in a sentence like “She went to see the doctor.”, the pronoun is in fact not referring to the gender of the person with the occupation. However, we leave a more accurate measurement of this to future work, as our aim in showing these results is demonstrative, illustrating the manipulation of the correlation rate rather than rigorously measuring the training correlation rate.
We use 32 rather than 40 occupations from WinoBias (Zhao et al., 2018) because, when we went to the 2016 U.S. Labor Force Statistics data (Bureau of Labor Statistics, 2016) to collect the actual numbers of each gender and occupation (needed to calculate P (T = t|A = a), since WinoBias only provides P (A = a|T = t)), we found 8 occupations to be too ambiguous to determine the actual numbers. For example, for “attendant”, there were many different attendant jobs listed, such as “flight attendants” and “parking lot attendant”, so we opted to drop these jobs from the list of 40. The 8 from the original WinoBias dataset that we ignored are: supervisor, manager, mechanician, CEO, teacher, assistant, clerk, and attendant. The first four are biased towards men, and the latter four towards women, so that we did not skew the distribution of jobs biased towards each gender. | 1. What is the focus of the paper regarding bias amplification?
2. What are the strengths of the proposed approach, particularly its benefits as stated in Sec 3.4?
3. What are the concerns regarding the recommended confidence intervals and causal notations?
4. How does the reviewer assess the presentation, experiments, and analysis of the paper?
5. What is the overall opinion of the reviewer about the paper's value in interpreting bias across various use cases? | Review | Review
Summary: The paper proposes a new metric for quantification of bias amplification that aids in interpretability of models. In particular, the paper offers useful insights concerning the meaning of bias amplification across varying contexts or use cases and prescribes the use of confidence intervals for better validation of various fairness metrics.
Positives:
Technical Sophistication: The paper thoroughly examines the shortcomings of a previous metric used for computing bias amplification and proposes a new metric that overcomes the limitations of the prior metric while still retaining the desirable properties. In particular, the authors clearly state the benefits of their method as described in Sec 3.4.
Presentation: There is sufficient clarity in the paper facilitating easy comprehension.
Experiments and analysis: This is the biggest strength of the paper. A thorough investigation makes this paper compelling. The authors also discuss the limitations of the proposed metric. The running example (fig 1 ) considered in the paper is interesting, especially given that such stereotypes are not considered often.
Concerns:
The recommendation for confidence interval is justifiable, but lacks a proper grounding.
The authors use causal notations for bias amplifications with the arrows from A to T or vice versa. While it is ok in terms of notation, it raises the question of how causal claims can be justified by merely thresholding as opposed to interventions. In this context, more elaboration of the metric (Sec 3.3) would be useful.
Overall comments:
I enjoyed reading this paper, and it offered some interesting insights. I believe it will be valuable in analysis of bias across different use cases in a manner that is interpretable. |
ICLR | Title
Tackling Diverse Tasks via Cross-Modal Transfer Learning
Abstract
Fine-tuning large-scale pretrained models has led to remarkable progress in well-studied modalities such as vision and NLP. However, similar gains have not been observed in many other tasks due to an assumed lack of relevant pretrained models for these diverse modalities. In this work, we revisit this assumption by studying the cross-modal transfer ability of large-scale pretrained models. We introduce ORCA, a general cross-modal fine-tuning workflow that enables fast and automatic exploitation of existing pretrained models for diverse tasks. ORCA achieves task-specific adaptation by performing data alignment before fine-tuning: it learns an embedding network that minimizes the optimal transport dataset distance between the end-task data and the pretraining data to close the modality gap. Through extensive experiments, we show that ORCA is the first viable approach that allows practitioners to use pretrained models to outperform hand-designed, AutoML-searched, and general-purpose architectures: ORCA obtains state-of-the-art results on 10 of 13 diverse tasks we evaluate and ranks among the top three on the others. We shed light on why cross-modal transfer works by quantifying the importance of data alignment and highlight ORCA’s utility for data-limited domains.
1 INTRODUCTION
The success of machine learning (ML) in vision and natural language processing (NLP) has spurred its application beyond these traditional ML domains to diverse tasks such as solving partial differential equations (Li et al., 2021b), music modeling (Lewandowski et al., 2012), detecting cardiac disease (Hong et al., 2020), and many others. However, progress in these less-explored areas can be challenging due to (1) limited amounts of labeled data, (2) high computational cost and human effort for developing models from scratch, and (3) a lack of relevant large-scale pretrained models, which have in many cases obviated the first two issues in vision and NLP (e.g., Devlin et al., 2019; Carion et al., 2020; Dosovitskiy et al., 2021; Liu et al., 2021b; Radford et al., 2021).
There are two common approaches for practitioners to handle these issues: automated machine learning (AutoML) techniques (e.g., Roberts et al., 2021; Shen et al., 2022) that focus on designing task-specific networks in a data-efficient manner; and multimodal general-purpose methods that either propose flexible architectures applicable to various tasks (Jaegle et al., 2022a) or expand the set of modalities for which pretrained models exist (e.g., Reed et al., 2022; Lu et al., 2022a). However, both classes of approaches require training from scratch when applied to a new modality and proceed under the assumption of a lack of relevant pretrained models for these diverse problems.
In this work, we re-examine this assumption by considering the general problem of cross-modal transfer. Our goal is to exploit existing large-scale pretrained models in data-rich modalities for solving diverse downstream tasks. A few recent works have demonstrated the potential promise of cross-modal transfer by applying language transformers to vision (Kiela et al., 2019; Dinh et al., 2022; Lu et al., 2022b), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). However, many of these approaches are ad-hoc (e.g., rely on manual prompt engineering or hand-craft new architecture components to solve specific tasks), and none of them yield models competitive with those trained from scratch. We tackle both shortcomings in our work.
We introduce a general-purpose, cross-modal transfer workflow called ORCA (Optimal tRansport Cross-modal Adaptation) that yields state-of-the-art results on a wide range of non-text and non-vision problems using pretrained transformers (Figure 1). Our key insight is to align the feature
distribution of an unfamiliar, out-of-modality dataset with that of a familiar, in-modal dataset before fine-tuning. This data alignment process not only prevents distortion of pretrained weights but also enables cross-modal knowledge transfer, as we will show via extensive experiments in Section 4. Concretely, for any downstream task, we first generate an embedding network that maps the (potentially high-dimensional) inputs to sequence features. Then, we train it to minimize the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) between the feature-label distribution of the target data and data from the pretraining domain1. Finally, we fine-tune the pretrained model and the embedding network. Using OTDD allows us to relax many distributional assumptions required by traditional domain adaptation and perform data alignment using both the feature and label information of the target data. However, we show in an ablation study in Section 4.2.1 that substituting OTDD with other distance metrics, such as maximum mean discrepancy (MMD) (Gretton et al., 2012), can also aid cross-modal transfer, albeit to a lesser extent. This implies that it is the general idea of first-align-then-fine-tune that enables ORCA to obtain significantly better results than previous cross-modal learning methods that rely on vanilla fine-tuning (Lu et al., 2022b).
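To summarize the described workflow in code form, the following PyTorch-style sketch is our own schematic reading of the two stages; `otdd_distance`, the `embed` method on the pretrained body, and the data loaders are placeholders rather than the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def orca_style_finetune(embedder, pretrained_body, head, target_loader, proxy_loader,
                        otdd_distance, align_epochs=5, tune_epochs=50, lr=1e-4):
    """Schematic two-stage workflow: (1) align target embeddings to the pretraining
    modality by minimizing a dataset distance, (2) fine-tune everything on the task."""
    # Stage 1: train only the embedder to reduce the distance between embedded
    # target data and features of proxy data from the pretraining modality.
    opt = torch.optim.Adam(embedder.parameters(), lr=lr)
    for _ in range(align_epochs):
        for (x_t, y_t), (x_p, y_p) in zip(target_loader, proxy_loader):
            z_t = embedder(x_t)                        # target-task features
            with torch.no_grad():
                z_p = pretrained_body.embed(x_p)       # in-modality proxy features
            loss = otdd_distance((z_t, y_t), (z_p, y_p))
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: fine-tune the embedder, the pretrained transformer body, and the task head.
    params = [p for m in (embedder, pretrained_body, head) for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(tune_epochs):
        for x_t, y_t in target_loader:
            loss = F.cross_entropy(head(pretrained_body(embedder(x_t))), y_t)
            opt.zero_grad(); loss.backward(); opt.step()
```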
We evaluate ORCA on a diverse set of 13 tasks with different input dimensions (1D and 2D), prediction types (point and dense), and modalities (vision, audio, electrocardiogram, physics, protein, genomics, cosmic-ray, and music). ORCA outperforms various competitors, including task-specific hand-designed architectures, leading AutoML methods, and general-purpose models, ranking first on 10 tasks and in the top three on all tasks. We compare ORCA with existing fine-tuning techniques and confirm that effective cross-modal transfer is only enabled by ORCA’s feature alignment process. We further reveal an empirical correlation between the alignment quality and the downstream performance. Finally, we demonstrate ORCA’s efficacy for limited-data tasks. Overall, our work not only explores the cross-modal transfer ability of pretrained models, but also establishes a practical workflow for solving diverse prediction problems efficiently and automatically.
2 RELATED WORK
In this section, we review several groups of related work in the areas of AutoML, in-modal transfer learning (unimodal domain adaptation, unimodal/multimodal fine-tuning, and general purpose methods), and cross-modal transfer learning (heterogeneous domain adaptation, task-specific finetuning, and FPT). Table 1 summarizes these groups along relevant axes, and contrasts them to ORCA.
AutoML for diverse tasks is a growing research area, as evidenced by the NAS-Bench-360 benchmark (Tu et al., 2022), along with several recent neural architecture search (NAS) methods that target this problem, e.g., AutoML-Zero (Real et al., 2020), XD (Roberts et al., 2021), and DASH (Shen et al., 2022). In contrast to these NAS methods, ORCA takes a transfer learning approach in order to leverage existing pretrained models from data-rich modalities for more esoteric tasks, rather than repeatedly incurring the overhead of designing new architectures and training them from scratch. That said, given the shared underlying motivation, our experimental evaluation makes use of the diverse
1We do not assume access to the pretraining data due to practical concerns about data access and computational efficiency. We instead work with publicly available proxy data from the pretraining modality, e.g., CIFAR-10 for models pretrained on ImageNet and CoNLL-2003 for models pretrained on larger text corpora.
tasks comprising NAS-Bench-360, and compares ORCA with its expert and AutoML baselines. We also compare against DASH, the state-of-the-art method on this benchmark.
Unimodal domain adaptation (DA) is a form of transductive transfer learning where the source and target tasks are the same but the domains differ (Pan & Yang, 2009; Wang & Deng, 2018). Many DA methods assume that the target data has the same input space and support as the source data, and are concerned with problems where the output spaces and the joint/marginal distributions differ, such as covariate and label shifts. Recent work considers more general settings such as different feature spaces (heterogeneous DA) or label spaces (universal DA). Our focus on cross-modal transfer goes one step further to the case where neither the input-space nor the output-space support overlaps.
Unimodal fine-tuning is a more flexible transfer approach that can be applied to downstream tasks with different label spaces or input spaces. Pretrained models are used for in-modality fine-tuning in NLP (e.g., Aghajanyan et al., 2021; Jiang et al., 2020), vision (e.g., Wei et al., 2022; Li et al., 2022), speech (e.g., Chen et al., 2022; Jiang et al., 2021), protein sequences (Jumper et al., 2021), and robotics (Ahn et al., 2022). Adapter networks (He et al., 2022) have been developed to improve the downstream performance of in-modality transfer. Multimodal fine-tuning expands the applicable modalities of a single pretrained model by learning embeddings of several data-rich modalities together (e.g., Lu et al., 2019; Radford et al., 2021; Hu & Singh, 2021; Kim et al., 2021; Alayrac et al., 2022). However, these approaches still focus on solving in-modality downstream tasks.
General-purpose models propose flexible architectures applicable to various tasks such as optical flow, point clouds, and reinforcement learning (Jaegle et al., 2021; 2022a; Reed et al., 2022). These approaches train multitask transformers from scratch using a large body of data from different tasks. Though more versatile than unimodal models, they still focus on transferring to problems within the pretraining modalities considered. Nonetheless, the success of transformers for in-modality finetuning motivates us to focus on adapting transformer-type architectures for cross-modal transfer.
Heterogeneous DA (HDA) considers nonequivalent feature spaces between the source and target domains. While most HDA methods are developed for same-modality-different-dimension transfer, e.g., between images of different resolutions, there are indeed a few works studying cross-modal tasks such as text-to-image (Yao et al., 2019; Li et al., 2020b). However, a crucial assumption that HDA makes is that the target and source tasks are the same. Thus, we operate in a much more flexible setting and consider knowledge transfer between drastically different domains with distinct tasks and label sets, such as applying Swin Transformers (Liu et al., 2021c) to solving partial differential equations or RoBERTa to classifying satellite images and electrocardiograms.
Cross-modal, task-specific fine-tuning is a recent line of research, with most work focusing on transferring NLP models to other modalities like vision (Kiela et al., 2019), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). These works provide initial evidence of the cross-modal transfer capacity of pretrained models. However, they focus on hand-tailoring to a single modality, e.g., by adding ad-hoc encoders that transform agent messages (Li et al., 2020c) or decision trajectories (Reid et al., 2022) into tokens. Even when not relying on fine-tuning, work like LIFT (Dinh et al., 2022) that attempts cross-modal learning via prompting (Liu et al., 2021a) still requires ad-hoc conversion of tasks to natural text.
Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a general cross-modal fine-tuning workflow that transforms input features to be compatible with the pretrained models. Although FPT and ORCA are both general-purpose workflows, FPT does not account for differences between the target and pretraining modalities, which we show is necessary to achieve accurate predictive models and outperform existing baselines.
3 ORCA WORKFLOW
In this section, we first formalize the problem setup and then introduce the ORCA workflow for adapting pretrained transformers to diverse end tasks.
Problem Setup. A domain D consists of a feature space X, a label space Y, and a joint probability distribution P(X, Y). In the cross-modal setting we study, the target (end-task) domain D^t and source (pretraining) domain D^s differ not only in the feature space but also the label space and by extension have differing probability distributions, i.e., X^t ≠ X^s, Y^t ≠ Y^s, and P^t(X^t, Y^t) ≠ P^s(X^s, Y^s). This is in contrast to the transductive transfer learning setting addressed by domain adaptation, where source and target domains share the label space and end task (Pan & Yang, 2009).
Given target data {x_i^t, y_i^t}_{i∈[n_t]} sampled from a joint distribution P^t in domain D^t, our goal is to learn a model m^t that correctly maps each input x^t to its label y^t. We are interested in achieving this using pretrained transformers. Thus, we assume access to a model m^s that has been trained with data {x_i^s, y_i^s}_{i∈[n_s]} in the source domain D^s, where (x_i^s, y_i^s) ∼ P^s. Then, given a predefined loss function l, we aim to develop m^t based on m^s such that L(m^t) = E_{(x^t, y^t)∼P^t}[l(m^t(x^t), y^t)] is minimized. This problem formulation does not define modality explicitly and includes both in-modal and cross-modal transfer. Given the generality of the tasks we wish to explore, it is hard to provide a precise mathematical definition, so we rely on semantics to differentiate the two settings: intuitively, cross-modal domains (e.g., natural images vs. protein sequences) are more distinct from each other than in-modal domains (e.g., photos taken in two different geographical locations).
Having defined the learning problem, we now present our three-stage cross-modal transfer workflow: (1) architecture design to support diverse input-output dimensions, (2) embedder pretraining to align the source and target feature distributions, and (3) fine-tuning to minimize the target task loss.
3.1 TASK-SPECIFIC ARCHITECTURE DESIGN
Applying pretrained models to another downstream problem usually requires addressing the problem of mismatched dimensions. To make ORCA work for input and output tensors of different dimensions, we decompose a transformer-based learner m into three parts (Figure 1 stage 1): an embedder f that transforms input x into a sequence of features, a model body g that applies a pretrained transformer (i.e., series of attention layers) to the embedded features, and a predictor h that generates predictions with the desired output shape. ORCA uses pretrained architecture and weights to initialize the model body g but replaces f and h with layers designed to match the target data with the pretrained model’s embedding dimension. Next, we describe each module in detail.
Custom Embedding Network. Denote the feature space compatible with the pretrained model body as Ẋ. For a transformer with maximum sequence length S and embedding dimension D, Ẋ = R^{S×D}. The embedding network f : X → Ẋ is designed to take in a tensor of arbitrary dimension from X and transform it to the feature space Ẋ. In ORCA, f is composed of a convolutional layer with input channel c_in, output channel c_out, kernel size k, and stride k, generalizing the patching operations used in vision transformers to 1D and higher-dimensional cases. We set c_in to the input channel of x and c_out to the embedding dimension D. To take full advantage of the representation power of the pretrained model, we choose the smallest k for which the product of the non-channel output dimensions is at most S. That is, when we flatten the non-channel dimensions of the output tensor after the convolution, pad, and then transpose it, we obtain sequence features with shape S × D. Finally, we add a layer norm and a positional embedding to obtain ẋ.2
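Below is a minimal PyTorch sketch of such an embedder for 2D inputs; the stride-selection helper and the padding scheme are simplifying assumptions rather than the authors' exact implementation (a 1D variant would use Conv1d analogously).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def smallest_stride(h, w, max_seq_len):
    # Smallest k such that the number of non-channel output positions fits within S.
    k = 1
    while (h // k) * (w // k) > max_seq_len:
        k += 1
    return k

class ConvEmbedder2D(nn.Module):
    """Maps (B, C_in, H, W) inputs to sequence features of shape (B, S, D)."""
    def __init__(self, c_in, embed_dim, max_seq_len, kernel_size):
        super().__init__()
        # Strided convolution generalizes ViT-style patching; stride equals kernel size.
        self.proj = nn.Conv2d(c_in, embed_dim, kernel_size=kernel_size, stride=kernel_size)
        self.norm = nn.LayerNorm(embed_dim)
        self.pos = nn.Parameter(torch.zeros(1, max_seq_len, embed_dim))
        self.max_seq_len = max_seq_len

    def forward(self, x):
        z = self.proj(x)                      # (B, D, H_out, W_out)
        z = z.flatten(2).transpose(1, 2)      # (B, H_out * W_out, D)
        pad = self.max_seq_len - z.shape[1]   # pad the sequence dimension up to S
        if pad > 0:
            z = F.pad(z, (0, 0, 0, pad))
        return self.norm(z) + self.pos        # layer norm + positional embedding
```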
Pretrained Transformer Body. The model body g takes the embedding ẋ ∈ Ẋ as input and outputs features ẏ ∈ Ẏ; the dot is used to differentiate these intermediate representations from the raw inputs and labels. For transformer-based g, both the input and output feature spaces Ẋ, Ẏ are R^{S×D}.
Custom Prediction Head. Finally, the prediction head h must take ẏ ∈ Ẏ as input and return a task-dependent output tensor. Different tasks often specify different types of outputs, e.g., classification tasks require logits in R^K, where K is the number of classes, and dense prediction tasks require dense maps with the same spatial dimension as the input and per-index logits corresponding to K classes.
2As a concrete example, consider an image tensor with shape (C_in, H_in, W_in). We first choose stride k for the convolution such that H_out × W_out ≈ S to get an output tensor with shape (D, H_out, W_out). Then, we flatten it to shape (D, H_out × W_out), pad along the last dimension to shape (D, S), and transpose.
Thus, it is crucial to define task-specific output modules and fine-tune them when transferring to new tasks. In our workflow, we use the simplest possible instantiation of the predictor modules. For classification, we apply average pooling along the sequence-length dimension (or take the classification token of language models) to obtain 1D tensors with length D and then use a linear layer that maps D to K. For dense prediction, we apply a linear layer to the sequence outputs so the resulting tensor has shape (S, k^{dim(Y)−1}·K), where k^{dim(Y)−1} is the downsampling factor of the embedder convolution kernel with stride k. This upsamples by the same factor that the embedder convolution downsampled. Then, we can mold the tensor to the desired output dimension.3
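The two predictor variants can be sketched as follows; the handling of padded sequence positions is an assumption on our part, and the dense head covers the 2D case described in footnote 3.

```python
import torch.nn as nn
import torch.nn.functional as F

class ClassificationHead(nn.Module):
    def __init__(self, embed_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, y_dot):                  # y_dot: (B, S, D)
        return self.fc(y_dot.mean(dim=1))      # average-pool the sequence, then map D -> K

class DenseHead2D(nn.Module):
    """Maps (B, S, D) sequence features back to a (B, K, H_in, W_in) dense map."""
    def __init__(self, embed_dim, num_classes, k, h_out, w_out):
        super().__init__()
        self.fc = nn.Linear(embed_dim, k * k * num_classes)   # upsample by the embedder's factor
        self.k, self.h_out, self.w_out = k, h_out, w_out

    def forward(self, y_dot):                  # y_dot: (B, S, D)
        z = self.fc(y_dot)                     # (B, S, k*k*K)
        z = z[:, : self.h_out * self.w_out]    # drop padded sequence positions
        z = z.transpose(1, 2).reshape(z.shape[0], -1, self.h_out, self.w_out)
        return F.pixel_shuffle(z, self.k)      # (B, K, k*h_out, k*w_out) ≈ (B, K, H_in, W_in)
```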
With an architecture that is based on the pretrained model but also compatible with our target task, we can now turn our attention to pretraining the embedder by matching the source and target distributions.
3.2 EMBEDDING LEARNING FOR DATA ALIGNMENT
Intuitively, transferring knowledge across similar modalities should be easier than across distant ones. Hence, given a target task in a new modality, we aim to manipulate the task data so that they become closer to the pretraining modality. We use the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) to measure the closeness between datasets in different domains. Unlike the classic OT distance which operates only on the feature distributions, OTDD considers both the feature and the label information and can work even if the label sets are unrelated or disjoint. Thus, while OT is mostly used for unsupervised or semi-supervised domain adaptation (Courty et al., 2017; Yan et al., 2018), OTDD is particularly suitable for distance estimation between cross-modal labeled datasets. In the following, we will briefly explain how ORCA uses OTDD. We refer the readers to Alvarez-Melis & Fusi (2020) for a detailed exposition on the metric itself.
Formally, let f^s : X^s → Ẋ denote the pretrained embedder (the part of m^s that transforms the source data to sequence features) and f^t : X^t → Ẋ be a randomly initialized target embedder with the architecture discussed in the previous section. We train f^t to minimize the expected OTDD between the embedding-label distributions (f^t(x^t), y^t) and (f^s(x^s), y^s). That is, for both datasets,
we first represent each class label as a distribution over the in-class features: y ↦ P(Ẋ | Y = y). This transforms the source and target label sets into the shared space of distributions over Ẋ. Then, we can define the distance d_Y(y^t, y^s) between different labels using the p-Wasserstein distance associated with a metric d_Ẋ over the feature space, e.g., the l2 distance ∥ẋ^t − ẋ^s∥_2^2. This allows us to measure the difference between distributions in Ẋ × Y using the following p-Wasserstein metric:
$$ d_{\dot{\mathcal{X}} \times \mathcal{Y}}\big((\dot{x}^t, y^t), (\dot{x}^s, y^s)\big) = \big( d_{\dot{\mathcal{X}}}(\dot{x}^t, \dot{x}^s)^p + d_{\mathcal{Y}}(y^t, y^s)^p \big)^{1/p}. \qquad (1) $$
Plugging this into the OT formulation leads to the OTDD over Ẋ × Y, which we optimize to learn f^t. Leveraging the clustering structure of the datasets, OTDD provides a better distance estimation between the target and source data, demonstrating better alignment ability in practice. In Section 4.2.1, we examine several distance metrics for learning the embedder. While data alignment generally improves downstream performance for all metrics, OTDD leads to the best empirical results.
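A sketch of this alignment stage is given below. Here otdd_loss is a stand-in for any differentiable OTDD estimator (e.g., one built on entropy-regularized OT), the source features are pre-computed from proxy data with the frozen pretrained embedder, and pooling the sequence features into one vector per example is a simplifying assumption.

```python
import torch

def pretrain_embedder(embedder, target_loader, source_feats, source_labels,
                      otdd_loss, epochs=60, lr=1e-4):
    """Stage 2 of ORCA: train the target embedder f^t to minimize the dataset distance
    between embedded target data and cached source embeddings (f^s stays fixed)."""
    opt = torch.optim.AdamW(embedder.parameters(), lr=lr)
    for _ in range(epochs):
        for x_t, y_t in target_loader:
            f_t = embedder(x_t).mean(dim=1)            # (B, D) pooled sequence features
            loss = otdd_loss(f_t, y_t, source_feats, source_labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return embedder
```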
As for the computational cost of embedding learning, we analyze the complexity of OTDD in Appendix A.1 and show that this stage takes much less time than the later fine-tuning stage in Appendix A.5. Thus, ORCA achieves a significant performance gain at a low cost for feature alignment.
3.3 FINE-TUNING FOR DOWNSTREAM ADAPTATION
After training the embedder, we perform full fine-tuning by updating all model parameters to minimize the target loss. This step further aligns the embedder and predictor with the pretrained model to improve downstream performance. We perform an ablation study comparing ORCA to standard fine-tuning without feature matching in Section 4.2.1 and show that our approach improves prediction accuracy and reduces performance variance. There are orthogonal lines of work that study how to best fine-tune a pretrained model (e.g., Liu et al., 2022; He et al., 2022). We compare with one strategy used in FPT (Lu et al., 2022b) in Section 4.2.2 but leave further exploration for future work.
3As a concrete example, for an image tensor with embedding convolution kernel size k, the linear layer will yield an output of shape (S, k^2·K), which we transpose, pad, and reshape to (k^2·K, H_out, W_out). Finally, we apply pixelshuffle (Shi et al., 2016) to get an output of shape (K, H_in, W_in).
4 EXPERIMENTS
Having introduced how ORCA addresses the dimension mismatch between the target and source datasets via architecture design and tackles the distribution mismatch via embedder learning, we now proceed with showing its empirical effectiveness. In the following, we will demonstrate that ORCA is the first approach that allows practitioners to obtain models better than hand-designed, AutoML-searched, and general-purpose architectures on a variety of diverse tasks. Then, we will analyze key components of ORCA to better understand the mechanism underlying cross-modal transfer.
Experiment Protocol. While our workflow accepts a wide range of pretrained transformers as model bodies, we use RoBERTa (Liu et al., 2019b) and Swin Transformers (Liu et al., 2021c), which are representatives of the most studied language and vision modalities, to exemplify ORCA's efficacy. We implement the base models, which have around 100 million parameters, and use the pretrained weights available in the Hugging Face transformers library (Wolf et al., 2019). As stated in the introduction, we do not use the exact pretraining data to represent the source modalities because they are often not publicly available and can be too large to compute OTDD efficiently. We use the proxy datasets: CoNLL-2003 for RoBERTa and CIFAR-10 for Swin.
For each task, we first apply the hyperparameter tuning algorithm ASHA (Li et al., 2020a) to the standard fine-tuning baseline (“Fine-tuning” in Table 3) to identify suitable batch size, optimizer, learning rate, and weight decay. These hyperparameters are then applied to all fine-tuning baselines as well as ORCA. During embedder learning, while classification tasks naturally come with discrete labels required for computing OTDD, for dense prediction tasks where labels are high-dimensional maps, we perform clustering on the dense maps to generate pseudo labels, which not only preserves the intrinsic distribution of the target data but also speeds up OTDD computation. We manage our experiments using the Determined AI platform. All experiments are performed on NVIDIA V100 GPUs and results are averaged over 5 random seeds. For other experiment details, see Appendix A.3.
4.1 CAN PRETRAINED MODELS TRANSFER ACROSS MODALITY TO SOLVE DIVERSE TASKS?
In this section, we highlight the most important observation of this work: cross-modal fine-tuning with ORCA can solve a variety of tasks effectively and efficiently. To demonstrate this, we evaluate ORCA on 13 tasks detailed below. We first include 10 tasks from NAS-Bench-360, which covers problems such as PDE solving, protein folding, and cardiac disease detection. This benchmark contains tasks for 1D and 2D classification, 2D dense prediction, but not 1D dense prediction, so we added JSB Chorales, a music modeling dataset widely used for evaluating recurrent networks (Chung et al., 2017; Bai et al., 2018). We also added ListOps (parsing math expressions) (Tay et al., 2021) and Homology (classifying protein structure) (Rao et al., 2019) for comparison with FPT. Together, these 13 tasks represent a wide collection of modalities for comprehensive evaluation.
Following the taxonomy in Table 1, we consider three classes of baselines: (1) hand-designed expert architectures for each task, as identified by Tu et al. (2022), Rao et al. (2019), and Tay et al. (2021); (2) general-purpose models, as represented by Perceiver IO (Jaegle et al., 2022b); and (3) AutoML baselines, as represented by those evaluated in NAS-Bench-360 and DASH (Shen et al., 2022). We will compare with FPT later, the only remaining approach with a general workflow from Table 1.
In Table 2, we report the prediction error for each method on each task. ORCA achieves the lowest error rate on 10 of 13 tasks and is the most effective in terms of aggregated performance. This is also supported by the performance summary in Figure 2. More specifically, we outperform all hand-designed architectures on all tasks except ECG, where we rank second but do much better than the other
4CoNLL-2003 is for named entity recognition. It is used to interpret language models (Jawahar et al., 2019). 5NAS-Bench-360 is designed for testing how well ML algorithms can generalize and is a core component
of the 2022 AutoML Decathlon competition. For a summary of included tasks, see Table 6 in the Appendix.
methods. We also beat all AutoML baselines on all tasks except DeepSEA and NinaPro, where ORCA is second and third, respectively. The improvements from ORCA come at a small computational overhead associated with pretraining the embedder to match the source and target modalities. Table 5 in the Appendix shows the time needed for embedder learning with OTDD, which is a small portion (10.2% on average) of the fine-tuning time. ORCA’s efficiency and its state-of-the-art results on 10 tasks make it a practical tool for model development in diverse areas.
Our experiments further validate the findings in Lu et al. (2021) that pretrained transformers can learn knowledge transferable to seemingly unrelated tasks. In the following, we delve into the mechanism of ORCA to provide intuition for necessary components of successful cross-modal learning.
4.2 KEY FACTORS FOR SUCCESSFUL CROSS-MODAL TRANSFER
Here, we dissect the success of cross-modal transfer with ORCA through a series of ablation studies. As a preview, we identify three aspects to be key to its success: data alignment through embedding learning, full fine-tuning of all model weights, and suitable pretrained model selection.
4.2.1 MATCHING FEATURE DISTRIBUTIONS HELPS ADAPTATION
Table 2 shows that ORCA leads to effective transfer using the proposed three-stage workflow. However, as learning the embedder via OTDD is an instantiation of the general first-align-then-fine-tune paradigm, we ask the question: does cross-modal transfer work because of the specific application of OTDD or the core approach of data alignment across modalities?
To answer this question, we perform an ablation study on the embedding learning metrics and compare their performance to fine-tuning without embedder pretraining. We experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and Euclidean distance. The performance profile is shown in Figure 3, and the detailed results are shown in Appendix A.4. We highlight the following observations. First, bringing the target modality closer to the pretraining modality generally aids cross-modal transfer, regardless of which metric we minimize. This is evident from the fact that pretraining the embedder with any of the metrics can outperform vanilla fine-tuning without embedder learning on many
tasks. Second, among the evaluated metrics, OTDD leads to the best overall performance. This is why we use it in our workflow. The middle rows of Table 3 demonstrate that ORCA with OTDD consistently outperforms naive fine-tuning. This supports our argument that closing the gap between a new modality and the pretraining modality can facilitate a model’s adaptation to a new task.
To further isolate the impact of data alignment, we compare ORCA with a train-from-scratch baseline, which trains RoBERTa and Swin using only the target data for the same number of epochs. Table 3 shows that train-from-scratch is better than fine-tuning but worse than ORCA on many tasks like Satellite and DeepSEA, indicating that when the target modality differs significantly from the pretraining modality, naive fine-tuning may harm transfer, but aligning the feature distribution using ORCA can resolve this issue and benefit transfer. Indeed, recent work has shown that optimizing directly for the task loss may distort the pretrained weights and lead to suboptimal solutions (Kumar et al., 2022; Lee et al., 2022). By manipulating the target data distribution to look like the source distribution, we can lower the risk of catastrophic forgetting, which may explain the success of ORCA.
Lastly, we perform experiments to quantify the effect of data alignment from a task-wise perspective. We train the embedder for different numbers of epochs before fine-tuning to see how optimizing OTDD to various levels of convergence affects downstream performance. Figure 4 plots the fine-tuning accuracy along with the final OTDD objective for different levels of embedder pretraining. Evidently, as the dataset distance decreases, the final fine-tuning accuracy increases. This correlation supports the effectiveness of embedder learning for cross-modal transfer. In addition, we observe that learning the embedder prior to fine-tuning can stabilize training, as the performance variance of ORCA is consistently lower than that of standard fine-tuning.
4.2.2 FINE-TUNING ALL MODEL PARAMETERS FOR CROSS-MODAL TASKS
As discussed in Section 2, Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a related work that showed pretrained language models contain knowledge relevant for out-of-modality tasks. While FPT presented a general pipeline that transfers GPT-2 to tasks like CIFAR-10, Homology, and ListOps, the resulting models were not as good as those directly trained on the target data. FPT differs from ORCA in that (1) it does not pretrain an embedder for task-specific adaptation and (2) it only fine-tunes the layer norms. We have already verified the importance of (1). Now, to isolate the impact of (2), we evaluate ORCA with fine-tuning the layer norms vs. FPT on our task set.
The bottom rows of Table 3 show that ORCA with fine-tuning of just the layer norms outperforms FPT, indicating that pretraining the embedding layers boosts the cross-modal performance of FPT. However, this performance gain is smaller than that seen in the full fine-tuning setting, which implies that full fine-tuning can take better advantage of the learned embeddings. Also, partial fine-tuning is less effective than full fine-tuning on all tasks except for DeepSEA. This exception might be due to the fact that full fine-tuning without learned embeddings is more prone to overfitting. In terms of runtime, FPT only results in less than 2x speedups compared with full fine-tuning (see Appendix A.5), despite the fact that we are updating significantly fewer parameters. This is unsurprising since gradients are still back-propagated through the entire network. Therefore, when computation allows, we recommend using ORCA with full fine-tuning for better downstream performance.
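For reference, the FPT-style partial fine-tuning we compare against can be reproduced in a few lines; the parameter-name filter below assumes Hugging Face-style naming (e.g., "LayerNorm" in RoBERTa, ".norm" in Swin) and may need adjusting for other checkpoints.

```python
import torch

def layernorm_only_params(model):
    """Freeze everything in the pretrained body except LayerNorm affine parameters."""
    trainable = []
    for name, p in model.named_parameters():
        is_norm = "layernorm" in name.lower() or ".norm" in name.lower()
        p.requires_grad = is_norm
        if is_norm:
            trainable.append(p)
    return trainable

# The embedder and predictor remain fully trainable and are optimized alongside the norms:
# optimizer = torch.optim.AdamW(layernorm_only_params(body) + list(embedder.parameters())
#                               + list(predictor.parameters()), lr=1e-4)
```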
4.2.3 PRETRAINING MODALITY CAN AFFECT TRANSFER PERFORMANCE
Finally, we study how the pretraining modality affects fine-tuning performance. For experiments in Table 2, we chose pretrained models for each task based on the input dimension, i.e., we use RoBERTa for all 1D tasks and Swin for all 2D tasks. Now, we can switch the model bodies and apply ORCA. This is easy to implement because ORCA is model-agnostic and the embedder architec-
ture handles all necessary input transformation to obtain sequence features. As shown in Table 4, fine-tuned RoBERTa outperforms fine-tuned Swin on the 1D task, and the final OTDD objective for RoBERTa is also smaller than that of Swin. We hypothesize that this is because the considered DeepSEA data (genomics sequences) are structured more like language than images with discrete units of information and general grammatical rules. The FPT paper observes a similar trend for Homology. As for the 2D tasks, we again notice that models with better fine-tuning accuracy have smaller OTDDs. This suggests a way of selecting pretrained models from a predefined model hub for each task, e.g., by comparing the optimized OTDDs and picking the one with the smallest value.
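One way to operationalize this model-selection suggestion is sketched below; train_embedder is a placeholder for the stage-2 alignment routine and is assumed to return the trained embedder together with the converged OTDD value.

```python
def select_pretrained_body(candidates, target_data, train_embedder):
    """Pick the pretrained body whose aligned OTDD to the target data is smallest.
    candidates: dict mapping a model name to its pretrained body."""
    final_otdd = {}
    for name, body in candidates.items():
        _, final_otdd[name] = train_embedder(body, target_data)   # stage-2 alignment
    return min(final_otdd, key=final_otdd.get)
```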
Case Study: Low-Data Regime. Now that we have a better understanding of ORCA, recall that one of our motivations for transferring pretrained models to various modalities is to help task-solving in data-limited regimes, where training models from scratch can be challenging. To this end, we investigate whether ORCA can facilitate fine-tuning large-scale models on small target datasets.
Indeed, for vanilla fine-tuning, a small amount of data may not give enough signal to update the pretrained weights. However, it is possible to obtain a good feature embedder with the same amount of data using ORCA, which can then make fine-tuning easier. In Figure 5, we vary the amount of target data and plot the performance of ORCA and vanilla fine-tuning. The performance gain of ORCA increases as the amount of data used decreases. This shows that fine-tuning does suffer from limited data, but ORCA can considerably alleviate the problem and improve downstream performance. Moreover, ORCA allows us to use a third of the data to match the performance of standard fine-tuning. Thus, it can benefit model development in domains where data collection is costly.
Discussion and Future Work. We identify several future directions based on our experiment results. First, it is worth studying the effect of the pretraining modality further and developing a systematic way of selecting pretrained models. Then, we can incorporate model selection into ORCA for a more automated transfer pipeline. Second, while ORCA leverages the simplest fine-tuning paradigm, we believe it is possible to combine it with more sophisticated transfer techniques such as adapters (He et al., 2022). We briefly study how prompting (Bahng et al., 2022; Jia et al., 2022) can be applied to diverse tasks in Appendix A.6 and find that it is in general less effective for out-of-modality problems, so we can possibly boost its performance using ORCA. Lastly, we currently evaluate ORCA on diverse 1D/2D tasks and in-modality vision tasks (Appendix A.7). It is also important to validate it in more settings, such as high-dimensional problems and reinforcement learning (Reid et al., 2022).
5 CONCLUSION
In this paper, we argue that an important step towards developing more general ML methods is to study how we can reuse existing models effectively for new and less-explored tasks. To this end, we propose a novel framework that allows transferring pretrained transformers to distinct downstream modalities. Our method, ORCA, can map target data from an arbitrary end task’s modality to a model’s pretraining modality to improve fine-tuning performance. We believe that this work not only signals the potential of large-scale pretraining for diverse tasks but also lays out a path for a largely uncharted data-centric paradigm in machine learning.
A APPENDIX
A.1 EMBEDDING LEARNING WITH OPTIMAL TRANSPORT DATASET DISTANCE
A.1.1 LITERATURE REVIEW
Due to the limited space, we do not give a full review of the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) in the main text. Here, we briefly recall the optimal transport (OT) distance and explain OTDD in detail.
Consider a complete and separable metric space X and let P(X ) be the set of probability measures on X . For α, β ∈ P(X ), let Π(α, β) be the set of joint probability distributions on X × X with marginals α and β in the first and second dimensions respectively. Then given a cost function c(·, ·) : X × X → R+, the classic OT distance with cost c is defined by:
$$ \mathrm{OT}_c(\alpha, \beta) := \min_{\pi \in \Pi(\alpha, \beta)} \int_{\mathcal{X} \times \mathcal{X}} c(x, y)\, d\pi(x, y). \qquad (2) $$
When X is equipped with a metric d_X, we can use c(x, y) = d_X(x, y)^p for some p ≥ 1 and obtain the p-Wasserstein distance, W_p(α, β) := (OT_{d_X^p}(α, β))^{1/p}.
Now consider the case of finite datasets with features in X and labels in a finite set Y . Each dataset can be considered a discrete distribution in P(X × Y). To define a distance between datasets, a natural approach is to define an appropriate cost function on Z := X × Y and consider the optimal transport distance. Indeed, for any metric dY on Y and any p ≥ 1, Z can be made a complete and separable metric space with metric
$$ d_{\mathcal{Z}}\big((x, y), (x', y')\big) = \big( d_{\mathcal{X}}(x, x')^p + d_{\mathcal{Y}}(y, y')^p \big)^{1/p} \qquad (3) $$
It is usually not clear how to define a natural distance metric in Y, so instead we proceed by representing each class y ∈ Y by P(X | Y = y), the conditional distribution of features X given Y = y. More specifically, for a dataset D ∈ P(X × Y), denote this map from classes to conditional distributions by F(D, ·) : Y → P(X). Then we can transform any dataset over X × Y into one over X × P(X) via G(D) := (proj_X, F(D, proj_Y)). As discussed above, W_p is a natural notion of distance in P(X), so by substituting Y ↦ P(X) and d_Y ↦ W_p in Equation 3, we can define the (p-)optimal transport dataset distance between datasets D_A and D_B by
$$ \mathrm{OTDD}(\mathcal{D}_A, \mathcal{D}_B) := \mathrm{OT}_{(d_{\mathcal{X}}^p \times W_p^p)^{1/p}}\big(G(\mathcal{D}_A), G(\mathcal{D}_B)\big) \qquad (4) $$
A.1.2 COMPUTATIONAL CONSIDERATIONS
As we aim for a practical fine-tuning workflow, computational cost is a crucial concern. While Alvarez-Melis & Fusi (2020) proposed two variants of OTDD (an exact one and a Gaussian approximation), we observe from our experiments that optimizing the exact OTDD leads to better performance. In the following, we will focus on analyzing the computational cost of the exact OTDD.
Given datasets with D-dimensional feature vectors, estimating vanilla OT distances can be computationally expensive and has a worst-case complexity of O(D^3 log D) (Pele & Werman, 2009). However, the problem obtained by adding an entropy regularization term εH(π | α ⊗ β) to Equation 2, where H is the relative entropy and ε controls the time-accuracy trade-off, can be solved efficiently with the Sinkhorn algorithm (Cuturi, 2013). This reduces OT's empirical complexity to O(D^2) and makes the time cost for computing OTDD manageable for ORCA's workflow.
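A minimal (non-log-domain) Sinkhorn iteration looks as follows; production code should use a log-stabilized variant or an OT library, and the uniform marginals are an assumption for illustration.

```python
import torch

def sinkhorn_ot(cost, eps=0.05, n_iters=200):
    """Entropy-regularized OT cost for an (n, m) pairwise cost matrix with uniform marginals."""
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n)
    b = torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)          # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]  # approximate transport plan
    return (plan * cost).sum()
```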
During implementation of ORCA, we also observed memory issues for computing OTDD using the entire target and source datasets on GPUs. To alleviate this, we propose a class-wise subsampling strategy for approximating OTDD on GPUs (Algorithm 1). In short, we split the K-class target dataset into K datasets based on the labels and compute the class-wise OTDD between each single-class target dataset and the entire source dataset. Each class-wise OTDD can be approximated with the average of batch samples similar to how stochastic gradient descent approximates gradient descent. After that, we approximate the OTDD between the target and source datasets using the
Algorithm 1 Efficient approximation of OTDD using class-wise subsampling.
Input: target dataset {x^t, y^t}, number of target classes K_t, source dataset S = {x^s, y^s}, subsample size b, subsample rounds R
for each class i ∈ [K_t] in the target dataset do
weighted sum of the K class-wise OTDDs. To verify that the approximation works empirically, we track the approximated OTDD (computed on GPUs) and the actual OTDD (computed on CPUs) and visualize the loss curves during ORCA’s embedder learning process (Figure 6). We can see that the estimated value adheres to the actual value.
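A sketch of this approximation is given below; otdd_fn stands for any OTDD estimator on a pair of feature/label batches, and the class-frequency weighting mirrors our description of Algorithm 1 rather than reproducing it verbatim.

```python
import torch

def approx_otdd(t_feats, t_labels, s_feats, s_labels, otdd_fn, batch=512, rounds=4):
    """Approximate OTDD(target, source) as a class-frequency-weighted average of
    subsampled per-class distances, so each term fits in GPU memory."""
    total, n = 0.0, len(t_labels)
    for c in t_labels.unique():
        idx = (t_labels == c).nonzero(as_tuple=True)[0]
        est = 0.0
        for _ in range(rounds):
            sub = idx[torch.randperm(len(idx))[:batch]]
            src = torch.randperm(len(s_labels))[:batch]
            est += otdd_fn(t_feats[sub], t_labels[sub], s_feats[src], s_labels[src])
        total += (est / rounds) * (len(idx) / n)
    return total
```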
Leveraging both the Sinkhorn algorithm and class-wise approximation, the embedder learning process only takes up a small fraction of the total fine-tuning time in practice, as shown in Table 5. Hence, we invest a reasonable time budget but achieve significantly improved cross-domain transfer performance using ORCA.
A.2 INFORMATION ABOUT EVALUATION TASKS
A.3 EXPERIMENT DETAILS
Below, we summarize details for implementing ORCA and evaluating it on the selected 13 tasks. The code and configuration file for reproducing each experiment can be found in the supplementary material. We will also release ORCA’s best checkpoint for each task later.
A.3.1 PRETRAINED MODELS
We evaluated ORCA with two pretrained models in our experiments. In Table 2, for all 2D tasks including CIFAR-100, Spherical, Darcy Flow, PSICOV, Cosmic, NinaPro, and FSD50K, we use the following model. As Swin has a pretrained resolution, we reshape the inputs for our tasks to the resolution before feeding them into the model.
Name Pretrain Resolution Num Params FLOPS FPS
Swin-base (Liu et al., 2021c) ImageNet-22K 224×224 88M 15.4G 278
For all 1D tasks including ECG, Satellite, DeepSEA, JSB Chorales, ListOps, and Homology, we use the following model:
Name Pretrain Num Params FLOPS
RoBERTa-base (Liu et al., 2019b) Five English-language corpora 125M 1.64E20
We use the Hugging Face transformers library (Wolf et al., 2019) to implement the pretrained models.
A.3.2 TASK DATA PREPARATION
For all the NAS-Bench-360 tasks, each dataset is preprocessed and split using the script available on https://github.com/rtu715/NAS-Bench-360, with the training set being used for hyperparameter tuning, embedding learning, and fine-tuning. We obtain the data processing script for JSB data from https://github.com/locuslab/TCN, for ListOps from https://github.com/kzl/universal-computation, and for Homology from https://github.com/songlab-cal/tape.
A.3.3 HYPERPARAMETER TUNING
As ORCA is both task-agnostic and model-agnostic, it can be applied to fine-tuning a variety of pretrained transformers on drastically different end tasks with distinct datasets. Hence, it is hard to define one set of fine-tuning hyperparameters for all (model, task) pairs. At the same time, optimizing large-scale pretrained transformers can be challenging due to their large model sizes, as the downstream performance depends largely on the hyperparameters used. For instance, using a large learning rate can distort pretrained weights and lead to catastrophic forgetting. Therefore, in our experiments, given a (model, task) pair, we first apply hyperparameter tuning using the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a) to the standard fine-tuning setting (i.e., after initializing the embedder and predictor architectures, directly updating all model weights to minimize the task loss) to identify a proper training configuration. Then, we use the same set of hyperparameters found for all our experiments for the particular (model, task) combination. Note that even though we did not explicitly state this in the main text, the hyperparameter tuning stage can be directly integrated into the ORCA workflow between stage 1 and stage 2. In this sense, ORCA is still an automated cross-modal transfer workflow that works for diverse tasks and different pretrained models.
The configuration space for ASHA is as follows:
• Batch size: 32, 128, 512, 1024 for Swin; 16, 56, 256, 512 for RoBERTa
• Optimizer: SGD, Adam, AdamW
• Learning rate: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
• Weight decay: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
Note that to fit each experiment on a single GPU, we set a fixed batch size (32 for Swin and 16 for RoBERTa) and vary the gradient accumulation step instead of actually varying the batch size, but the effect is the same.
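The search space above can be written down as a plain configuration, and the fixed-physical-batch trick amounts to standard gradient accumulation with accum_steps = searched_batch // physical_batch; both snippets below are illustrative sketches rather than our exact tuning code.

```python
import torch

# Search space for ASHA (Swin variant); any hyperparameter-tuning backend can consume it.
SWIN_SEARCH_SPACE = {
    "batch_size":   [32, 128, 512, 1024],
    "optimizer":    ["sgd", "adam", "adamw"],
    "lr":           [1e-2, 1e-3, 1e-4, 1e-5, 1e-6],
    "weight_decay": [1e-2, 1e-3, 1e-4, 1e-5, 1e-6],
}

def train_one_epoch(model, loader, optimizer, loss_fn, accum_steps):
    """Emulate a large effective batch with a fixed physical batch via gradient accumulation."""
    optimizer.zero_grad()
    for i, (x, y) in enumerate(loader):
        loss = loss_fn(model(x), y) / accum_steps   # scale so gradients match the large batch
        loss.backward()
        if (i + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```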
A.3.4 ORCA ONLY: EMBEDDING LEARNING WITH OTDD
After initializing the embedder architecture for each task, we train it to minimize the OTDD between the embedded target features and embedded source features.
For source datasets, we use CIFAR-10 for Swin and CoNLL-2003 for RoBERTa. We sample 5000 data points to compute OTDD. In practice, we can pass the source data through the pretrained embedder once and save all the embedded features, so we don't have to pay the cost of obtaining the source features each time we fine-tune a new model.
For classification tasks, we directly use the labels provided by the end task to compute OTDD. For dense tasks, we perform K-Means clustering on the target data to obtain pseudolabels for OTDD computation. The number of clusters is set to the number of classes of the source dataset, e.g., 10 for 2D tasks that use CIFAR-10 as the source dataset.
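A sketch of the pseudo-labeling step with scikit-learn is shown below; flattening each dense map before clustering and the fixed seed are our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def dense_pseudo_labels(dense_maps, n_clusters=10, seed=0):
    """Cluster flattened dense label maps into pseudo-classes for OTDD computation.
    dense_maps: array of shape (N, ...) with one dense target map per sample."""
    flat = np.asarray(dense_maps).reshape(len(dense_maps), -1)
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(flat)
```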
To compute the embedding learning objective, we use the OTDD implementation of the original paper provided here: https://github.com/microsoft/otdd. As for the hyperparameters, we use the batch size, learning rate, optimizer, and weight decay obtained from A.3.3. The others are fixed across different tasks:
• Embedding learning epochs: 60
• Learning rate scheduler: decay by 0.2 every 20 epochs
A.3.5 FINE-TUNING
Besides the searched hyperparameters, we also fix the following hyperparameters for fine-tuning.
• Fine-tuning epochs: 100 for Swin tasks, 60 for RoBERTa tasks
• Learning rate scheduler: we use linear decay with min lr = 0 and 5 warmup epochs
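A per-epoch implementation of this schedule might look as follows; stepping once per epoch rather than per iteration is an assumption.

```python
import torch

def linear_warmup_then_decay(optimizer, warmup_epochs=5, total_epochs=100):
    """Linear warmup for `warmup_epochs`, then linear decay of the learning rate to 0."""
    def lr_lambda(epoch):
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs
        return max(0.0, (total_epochs - epoch) / (total_epochs - warmup_epochs))
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```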
A.3.6 TRAIN-FROM-SCRATCH
This baseline is trained using the same hyperparameter configuration (number of epochs, batch size, learning rate, etc) as the fine-tuning baseline.
A.3.7 EVALUATION
When training/fine-tuning is finished, we evaluate the performance of all models following the NAS-Bench-360 protocol. We first report results of the target metric for each task by running the model of the last epoch on the test data. Then, we report aggregate results via performance profiles (Dolan & Moré, 2002), a technique that considers both outliers and small performance differences to compare methods across multiple tasks robustly. In such plots, each curve represents one method. The τ on the x-axis denotes the fraction of tasks on which a method is no worse than a τ-factor from the best. The performance profile for our experiments is shown in Figure 2.
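The performance-profile curves can be computed with a few lines of NumPy; this sketch assumes lower-is-better error metrics for every task.

```python
import numpy as np

def performance_profile(errors, taus):
    """errors: (n_methods, n_tasks) matrix of per-task errors (lower is better).
    Returns, for each tau, the fraction of tasks on which each method is within a
    tau-factor of the best method for that task."""
    errors = np.asarray(errors, dtype=float)
    ratios = errors / errors.min(axis=0)          # per-task ratio to the best method
    return {tau: (ratios <= tau).mean(axis=1) for tau in taus}
```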
A.4 ABLATION STUDY ON EMBEDDING LEARNING METRICS
As motivated in Section 4.2.1, we present here an ablation study on the embedding learning metrics that we have considered for minimizing distribution dissimilarity. The results show that (1) performing feature alignment generally helps downstream adaptation, regardless of which metric we minimize; (2) OTDD leads to the best overall performance, so we chose it for our workflow. Our findings confirm that it is the general idea of data alignment, rather than a specific metric, that makes cross-modal transfer work.
Specifically, we experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and (pairwise) Euclidean distance. We learn the embedders to minimize these metrics and then fine-tune the pretrained models. The test errors are as follows.
A.5 RUNTIME OF ORCA VS. FPT
In Table 3, we compare with the FPT setting, which only fine-tunes the layer norms of the pretrained transformer models. As we have shown already, the downstream performance of fine-tuning only a subset of the parameters is less competitive than fine-tuning all parameters. Below, we show that the time saved for updating only layer norms is also not that significant. Therefore, we suggest performing full fine-tuning when time and computational resources allow.
A.6 PROMPTING
Apart from fine-tuning, a new paradigm of working with large-scale pretrained models is prompting, i.e., we do not update the pretrained weights but only modify the input and query the model for the desired output. Existing language prompting methods (e.g., Liu et al., 2022) are generally not suitable for cross-modal learning due to the difficulty of designing natural prompts for diverse data types. For the 1D tasks we study, there is not even a notion of "discrete tokens." Another line of work studies visual prompting by modifying 2D inputs for querying vision transformers. We test two such algorithms, VP (Bahng et al., 2022) and VPT (Jia et al., 2022), on three classification tasks in our task suite. They are not applicable to the remaining tasks because either the inputs cannot be reshaped to look like images or the outputs are not classification logits.
We test VPT with the pretrained Swin-Base Transformer (the same model we used for ORCA) and VP with the pretrained ResNet-50 (as the official implementation does not support vision transformers). The results are shown in Table 9. In general, prompt tuning is less effective than fine-tuning, and the two baselines perform significantly worse than ORCA. This is not surprising given that prompting methods are more intuitively suited to in-modality transfer, where the target and the source data have similar structure or semantic meaning. However, when the target data (e.g., electromyography signals, as in the NinaPro dataset) is drastically different from image data, it is difficult to design prompts or expect good performance by only modifying the inputs without fine-tuning the pretrained models.
A.7 COMPATIBILITY WITH IN-MODALITY TRANSFER
A natural question to ask is whether ORCA can also tackle in-modality tasks. While we design ORCA to enable cross-modal transfer, we hypothesize that it should facilitate same-modality transfer if two domains have large dataset distance. To validate this, we test ORCA on DomainNet
datasets, which are commonly used to evaluate homogeneous DA methods (Peng et al., 2019). From Table 10, we can see that ORCA achieves significantly better performance than the fine-tuning baseline, which shows that the feature matching of ORCA can also help in-domain generalization. | 1. What is the main contribution of the paper regarding cross-modal transfer learning?
2. What are the strengths of the proposed approach, particularly in its universality and effectiveness?
3. What are the weaknesses of the paper, especially regarding the empirical results and the choice of using convolutional networks?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper shows a universal approach to handling transfer learning for cross-modal tasks, by pretraining the embedding layers using the OTDD loss metric. The paper also conducts an empirical study on 13 tasks, demonstrating the effectiveness of embedding-layer pretraining in cross-modality learning. The approach is universal, effective, and beats counterparts like NAS solutions, hand-crafted models, and other cross-modal transfer methods by quite some margin.
Strengths And Weaknesses
Strengths
• The paper proposes a general/universal approach to cross-modal transfer learning, by adding a dimension-transformation embedding layer and pretraining it using OTDD.
• The paper describes the motivation, challenges, and solution clearly, and there are enough empirical results supporting the claim.
• The comparisons against other cross-modal transfer solutions are convincing, e.g., against the NAS-based class and the hand-crafted expert class.
• The paper did some deep dives into the effectiveness of the transformation-embedding pretraining.
Weaknesses
• Some of the empirical results and statements require better reasoning. For example: why does pretraining only the embedding transformation layers make the final result even better than hand-crafted expert models by a big margin on some downstream tasks, like Spherical and JSB Chorales? And why use OTDD when pretraining the transformation embedding layers? The metric and the semantic goal are not connected explicitly.
• Some places require deeper analysis. For example, a convolutional network is used to do the dimension transformation from the target X to the source X; using it for ORCA-Swin makes sense because both belong to vision modeling, but why is it still used for ORCA-RoBERTa? Why is a convolutional network selected, and how does it compare to a simple feed-forward network for the dimension transformation? A comparison on downstream tasks, etc., would demonstrate the effectiveness of this choice.
• It would be good to show both ORCA-Swin and ORCA-RoBERTa results in Table 2, to let readers understand the impact of the pretrained body as well.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written (3/4); clarity/ease of understanding is 2/4, as some ideas need better reasoning; novelty is 3/4.
ICLR | Title
Tackling Diverse Tasks via Cross-Modal Transfer Learning
Abstract
Fine-tuning large-scale pretrained models has led to remarkable progress in well-studied modalities such as vision and NLP. However, similar gains have not been observed in many other tasks due to an assumed lack of relevant pretrained models for these diverse modalities. In this work, we revisit this assumption by studying the cross-modal transfer ability of large-scale pretrained models. We introduce ORCA, a general cross-modal fine-tuning workflow that enables fast and automatic exploitation of existing pretrained models for diverse tasks. ORCA achieves task-specific adaptation by performing data alignment before fine-tuning: it learns an embedding network that minimizes the optimal transport dataset distance between the end-task data and the pretraining data to close the modality gap. Through extensive experiments, we show that ORCA is the first viable approach that allows practitioners to use pretrained models to outperform hand-designed, AutoML-searched, and general-purpose architectures—ORCA obtains state-of-the-art results on 10 of 13 diverse tasks we evaluate and ranks among the top three on the others. We shed light on why cross-modal transfer works by quantifying the importance of data alignment and highlight ORCA's utility for data-limited domains.
1 INTRODUCTION
The success of machine learning (ML) in vision and natural language processing (NLP) has spurred its application beyond these traditional ML domains to diverse tasks such as solving partial differential equations (Li et al., 2021b), music modeling (Lewandowski et al., 2012), detecting cardiac disease (Hong et al., 2020), and many others. However, progress in these less-explored areas can be challenging due to (1) limited amounts of labeled data, (2) high computational cost and human effort for developing models from scratch, and (3) a lack of relevant large-scale pretrained models, which have in many cases obviated the first two issues in vision and NLP (e.g., Devlin et al., 2019; Carion et al., 2020; Dosovitskiy et al., 2021; Liu et al., 2021b; Radford et al., 2021).
There are two common approaches for practitioners to handle these issues: automated machine learning (AutoML) techniques (e.g., Roberts et al., 2021; Shen et al., 2022) that focus on designing task-specific networks in a data-efficient manner; and multimodal general-purpose methods that either propose flexible architectures applicable to various tasks (Jaegle et al., 2022a) or expand the set of modalities for which pretrained models exist (e.g., Reed et al., 2022; Lu et al., 2022a). However, both classes of approaches require training from scratch when applied to a new modality and proceed under the assumption of a lack of relevant pretrained models for these diverse problems.
In this work, we re-examine this assumption by considering the general problem of cross-modal transfer. Our goal is to exploit existing large-scale pretrained models in data-rich modalities for solving diverse downstream tasks. A few recent works have demonstrated the potential promise of cross-modal transfer by applying language transformers to vision (Kiela et al., 2019; Dinh et al., 2022; Lu et al., 2022b), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). However, many of these approaches are ad-hoc (e.g., rely on manual prompt engineering or hand-craft new architecture components to solve specific tasks), and none of them yield models competitive with those trained from scratch. We tackle both shortcomings in our work.
We introduce a general-purpose, cross-modal transfer workflow called ORCA (Optimal tRansport Cross-modal Adaptation) that yields state-of-the-art results on a wide range of non-text and non-vision problems using pretrained transformers (Figure 1). Our key insight is to align the feature
distribution of an unfamiliar, out-of-modality dataset with that of a familiar, in-modal dataset before fine-tuning. This data alignment process not only prevents distortion of pretrained weights but also enables cross-modal knowledge transfer, as we will show via extensive experiments in Section 4. Concretely, for any downstream task, we first generate an embedding network that maps the (potentially high-dimensional) inputs to sequence features. Then, we train it to minimize the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) between the feature-label distribution of the target data and data from the pretraining domain1. Finally, we fine-tune the pretrained model and the embedding network. Using OTDD allows us to relax many distributional assumptions required by traditional domain adaptation and perform data alignment using both the feature and label information of the target data. However, we show in an ablation study in Section 4.2.1 that substituting OTDD with other distance metrics, such as maximum mean discrepancy (MMD) (Gretton et al., 2012), can also aid cross-modal transfer, albeit to a lesser extent. This implies that it is the general idea of first-align-then-fine-tune that enables ORCA to obtain significantly better results than previous cross-modal learning methods that rely on vanilla fine-tuning (Lu et al., 2022b).
We evaluate ORCA on a diverse set of 13 tasks with different input dimensions (1D and 2D), prediction types (point and dense), and modalities (vision, audio, electrocardiogram, physics, protein, genomics, cosmic-ray, and music). ORCA outperforms various competitors, including task-specific hand-designed architectures, leading AutoML methods, and general-purpose models, ranking first on 10 tasks and in the top three on all tasks. We compare ORCA with existing fine-tuning techniques and confirm that effective cross-modal transfer is only enabled by ORCA’s feature alignment process. We further reveal an empirical correlation between the alignment quality and the downstream performance. Finally, we demonstrate ORCA’s efficacy for limited-data tasks. Overall, our work not only explores the cross-modal transfer ability of pretrained models, but also establishes a practical workflow for solving diverse prediction problems efficiently and automatically.
2 RELATED WORK
In this section, we review several groups of related work in the areas of AutoML, in-modal transfer learning (unimodal domain adaptation, unimodal/multimodal fine-tuning, and general purpose methods), and cross-modal transfer learning (heterogeneous domain adaptation, task-specific finetuning, and FPT). Table 1 summarizes these groups along relevant axes, and contrasts them to ORCA.
AutoML for diverse tasks is a growing research area, as evidenced by the NAS-Bench-360 benchmark (Tu et al., 2022), along with several recent neural architecture search (NAS) methods that target this problem, e.g., AutoML-Zero (Real et al., 2020), XD (Roberts et al., 2021), and DASH (Shen et al., 2022). In contrast to these NAS methods, ORCA takes a transfer learning approach in order to leverage existing pretrained models from data-rich modalities for more esoteric tasks, rather than repeatedly incurring the overhead of designing new architectures and training them from scratch. That said, given the shared underlying motivation, our experimental evaluation makes use of the diverse
1We do not assume access to the pretraining data due to practical concerns about data access and computational efficiency. We instead work with publicly available proxy data from the pretraining modality, e.g., CIFAR-10 for models pretrained on ImageNet and CoNLL-2003 for models pretrained on larger text corpora.
tasks comprising NAS-Bench-360, and compares ORCA with its expert and AutoML baselines. We also compare against DASH, the state-of-the-art method on this benchmark.
Unimodal domain adaptation (DA) is a form of transductive transfer learning where the source and target tasks are the same but the domains differ (Pan & Yang, 2009; Wang & Deng, 2018). Many DA methods assume that the target data has the same input space and support as the source data, and are concerned with problems where the output spaces and the joint/marginal distributions differ, such as covariate and label shifts. Recent work considers more general settings such as different feature spaces (heterogeneous DA) or label spaces (universal DA). Our focus on cross-modal transfer goes one step further to the case where neither the input-space nor the output-space support overlaps.
Unimodal fine-tuning is a more flexible transfer approach that can be applied to downstream tasks with different label spaces or input spaces. Pretrained models are used for in-modality fine-tuning in NLP (e.g., Aghajanyan et al., 2021; Jiang et al., 2020), vision (e.g., Wei et al., 2022; Li et al., 2022), speech (e.g., Chen et al., 2022; Jiang et al., 2021), protein sequences (Jumper et al., 2021), and robotics (Ahn et al., 2022). Adapter networks (He et al., 2022) have been developed to improve the downstream performance of in-modality transfer. Multimodal fine-tuning expands the applicable modalities of a single pretrained model by learning embeddings of several data-rich modalities together (e.g., Lu et al., 2019; Radford et al., 2021; Hu & Singh, 2021; Kim et al., 2021; Alayrac et al., 2022). However, these approaches still focus on solving in-modality downstream tasks.
General-purpose models propose flexible architectures applicable to various tasks such as optical flow, point clouds, and reinforcement learning (Jaegle et al., 2021; 2022a; Reed et al., 2022). These approaches train multitask transformers from scratch using a large body of data from different tasks. Though more versatile than unimodal models, they still focus on transferring to problems within the pretraining modalities considered. Nonetheless, the success of transformers for in-modality finetuning motivates us to focus on adapting transformer-type architectures for cross-modal transfer.
Heterogeneous DA (HDA) considers nonequivalent feature spaces between the source and target domains. While most HDA methods are developed for same-modality-different-dimension transfer, e.g., between images of different resolutions, there are indeed a few works studying cross-modal tasks such as text-to-image (Yao et al., 2019; Li et al., 2020b). However, a crucial assumption that HDA makes is that the target and source tasks are the same. Thus, we operate in a much more flexible setting and consider knowledge transfer between drastically different domains with distinct tasks and label sets, such as applying Swin Transformers (Liu et al., 2021c) to solving partial differential equations or RoBERTa to classifying satellite images and electrocardiograms.
Cross-modal, task-specific fine-tuning is a recent line of research, with most work focusing on transferring NLP models to other modalities like vision (Kiela et al., 2019), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). These works provide initial evidence of the cross-modal transfer capacity of pretrained models. However, they focus on hand-tailoring to a single modality, e.g., by adding ad-hoc encoders that transform agent messages (Li et al., 2020c) or decision trajectories (Reid et al., 2022) into tokens. Even when not relying on fine-tuning, work like LIFT (Dinh et al., 2022) that attempts cross-modal learning via prompting (Liu et al., 2021a) still require ad-hoc conversion of tasks to natural text.
Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a general cross-modal fine-tuning workflow that transforms input features to be compatible with the pretrained models. Although FPT and ORCA are both general-purpose workflows, FPT does not account for differences between the target and pretraining modalities, which we show is necessary to achieve accurate predictive models and outperform existing baselines.
3 ORCA WORKFLOW
In this section, we first formalize the problem setup and then introduce the ORCA workflow for adapting pretrained transformers to diverse end tasks.
Problem Setup. A domain D consists of a feature space X, a label space Y, and a joint probability distribution P(X, Y). In the cross-modal setting we study, the target (end-task) domain Dt and source (pretraining) domain Ds differ not only in the feature space but also the label space and by extension have differing probability distributions, i.e., X t ≠ X s, Yt ≠ Ys, and P t(X t, Yt) ≠ P s(X s, Ys). This is in contrast to the transductive transfer learning setting addressed by domain adaptation, where source and target domains share the label space and end task (Pan & Yang, 2009).
Given target data {xti, yti}i∈[nt] sampled from a joint distribution P t in domain Dt, our goal is to learn a model mt that correctly maps each input xt to its label yt. We are interested in achieving this using pretrained transformers. Thus, we assume access to a model ms that has been trained with data {xsi , ysi }i∈[ns] in the source domain Ds, where (xsi , ysi ) ∼ P s. Then, given a predefined loss function l, we aim to develop mt based on ms such that L(mt) = E(xt,yt)∼P t [l(mt(xt), yt)] is minimized. This problem formulation does not define modality explicitly and includes both in-modal and cross-modal transfer. Given the generality of the tasks we wish to explore, it is hard to provide a precise mathematical definition, so we rely on semantics to differentiate the two settings: intuitively, cross-modal domains (e.g., natural images vs. protein sequences) are more distinct from each other than in-modal domains (e.g., photos taken in two different geographical locations).
Having defined the learning problem, we now present our three-stage cross-modal transfer workflow: (1) architecture design to support diverse input-output dimensions, (2) embedder pretraining to align the source and target feature distributions, and (3) fine-tuning to minimize the target task loss.
3.1 TASK-SPECIFIC ARCHITECTURE DESIGN
Applying pretrained models to another downstream problem usually requires addressing the problem of mismatched dimensions. To make ORCA work for input and output tensors of different dimensions, we decompose a transformer-based learner m into three parts (Figure 1 stage 1): an embedder f that transforms input x into a sequence of features, a model body g that applies a pretrained transformer (i.e., series of attention layers) to the embedded features, and a predictor h that generates predictions with the desired output shape. ORCA uses pretrained architecture and weights to initialize the model body g but replaces f and h with layers designed to match the target data with the pretrained model’s embedding dimension. Next, we describe each module in detail.
Custom Embedding Network. Denote the feature space compatible with the pretrained model body as Ẋ . For a transformer with maximum sequence length S and embedding dimension D, Ẋ = RS×D. The embedding network f : X → Ẋ is designed to take in a tensor of arbitrary dimension from X and transform it to the feature space Ẋ . In ORCA, f is composed of a convolutional layer with input channel cin, output channel cout, kernel size k, and stride k, generalizing the patching operations used in vision transformers to 1D and higher-dimensional cases. We set cin to the input channel of x and cout to the embedding dimension D. To take full advantage of the representation power of the pretrained model, we choose the smallest k for which the product of output shape excluding the channel dimension ≤ S. That is, when we flatten the non-channel dimensions of the output tensors after the convolution, pad and then transpose it, we can obtain sequence features with shape S ×D. Finally, we add a layer norm and a positional embedding to obtain ẋ.2
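To make this construction concrete, below is a minimal PyTorch-style sketch of such an embedder for 2D inputs. The class and argument names (e.g., max_seq_len, embed_dim, input_hw) are ours for illustration and are not taken from ORCA's released code; the sketch only mirrors the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEmbedder2D(nn.Module):
    """Maps a (B, C_in, H, W) tensor to a (B, S, D) token sequence for the model body."""
    def __init__(self, c_in, embed_dim, max_seq_len, input_hw):
        super().__init__()
        h, w = input_hw
        # Pick the smallest kernel/stride k whose patch count fits within S.
        k = 1
        while ((h - k) // k + 1) * ((w - k) // k + 1) > max_seq_len:
            k += 1
        self.proj = nn.Conv2d(c_in, embed_dim, kernel_size=k, stride=k)
        self.norm = nn.LayerNorm(embed_dim)
        self.pos = nn.Parameter(torch.zeros(1, max_seq_len, embed_dim))
        self.max_seq_len = max_seq_len

    def forward(self, x):
        z = self.proj(x)                    # (B, D, H_out, W_out)
        z = z.flatten(2).transpose(1, 2)    # (B, H_out * W_out, D)
        pad = self.max_seq_len - z.shape[1]
        if pad > 0:                         # pad the sequence dimension up to S
            z = F.pad(z, (0, 0, 0, pad))
        return self.norm(z) + self.pos      # (B, S, D)
```

The same pattern generalizes to 1D (or higher-dimensional) inputs by swapping the convolution for the matching nn.ConvNd layer.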
Pretrained Transformer Body. The model body g takes the embedding ẋ ∈ Ẋ as input and outputs features ẏ ∈ Ẏ; the dot is used to differentiate these intermediate representations from the raw inputs and labels. For transformer-based g, both the input and output feature spaces Ẋ , Ẏ are RS×D.
Custom Prediction Head. Finally, the prediction head h must take ẏ ∈ Ẏ as input and return a task-dependent output tensor. Different tasks often specify different types of outputs, e.g., classification tasks require logits in RK, where K is the number of classes, and dense prediction tasks require dense maps with the same spatial dimension as the input and per-index logits corresponding to K classes.
2As a concrete example, consider an image tensor with shape (Cin, Hin,Win). We first choose stride k for the convolution such that Hout × Wout ≈ S to get an output tensor with shape (D,Hout,Wout). Then, we flatten it to shape (D,Hout ×Wout), pad along the last dimension to shape (D,S), and transpose.
Thus, it is crucial to define task-specific output modules and fine-tune them when transferring to new tasks. In our workflow, we use the simplest possible instantiation of the predictor modules. For classification, we apply average pooling along the sequence length dimension (or take the classification token of language models) to obtain 1D tensors with length D and then use a linear layer that maps D to K. For dense prediction, we apply a linear layer to the sequence outputs so the resulting tensor has shape (S, k^{dim(Y)−1}K), where k^{dim(Y)−1} is the downsampling factor of the embedder convolution kernel with stride k. This upsamples by the same factor that the embedder convolution downsampled. Then, we can mold the tensor to the desired output dimension.3
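For the dense-prediction case, a minimal sketch of such a head in the 2D setting is shown below. It assumes the embedder used kernel/stride k and produced an (H_out, W_out) grid before padding; all names are illustrative rather than taken from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DensePredictionHead2D(nn.Module):
    """Maps (B, S, D) transformer outputs back to a (B, K, H_in, W_in) dense map."""
    def __init__(self, embed_dim, num_classes, k, conv_hw, out_hw):
        super().__init__()
        self.k, self.num_classes = k, num_classes
        self.conv_hw, self.out_hw = conv_hw, out_hw      # (H_out, W_out), (H_in, W_in)
        self.linear = nn.Linear(embed_dim, k * k * num_classes)

    def forward(self, y_dot):                            # y_dot: (B, S, D)
        h_out, w_out = self.conv_hw
        z = self.linear(y_dot)                           # (B, S, k*k*K)
        z = z[:, : h_out * w_out]                        # drop padded sequence positions
        z = z.transpose(1, 2).reshape(-1, self.k * self.k * self.num_classes, h_out, w_out)
        z = F.pixel_shuffle(z, self.k)                   # (B, K, k*H_out, k*W_out)
        out_h, out_w = self.out_hw
        pad_h, pad_w = max(0, out_h - z.shape[-2]), max(0, out_w - z.shape[-1])
        if pad_h or pad_w:                               # pad back up if the grid undershoots
            z = F.pad(z, (0, pad_w, 0, pad_h))
        return z[..., :out_h, :out_w]                    # crop to the input resolution
```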
With an architecture that is based on the pretrained model but also compatible with our target task, we can now turn our attention to pretraining the embedder via matching source and target distributions.
3.2 EMBEDDING LEARNING FOR DATA ALIGNMENT
Intuitively, transferring knowledge across similar modalities should be easier than across distant ones. Hence, given a target task in a new modality, we aim to manipulate the task data so that they become closer to the pretraining modality. We use the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) to measure the closeness between datasets in different domains. Unlike the classic OT distance which operates only on the feature distributions, OTDD considers both the feature and the label information and can work even if the label sets are unrelated or disjoint. Thus, while OT is mostly used for unsupervised or semi-supervised domain adaptation (Courty et al., 2017; Yan et al., 2018), OTDD is particularly suitable for distance estimation between cross-modal labeled datasets. In the following, we will briefly explain how ORCA uses OTDD. We refer the readers to Alvarez-Melis & Fusi (2020) for a detailed exposition on the metric itself.
Formally, let fs : X s → Ẋ denote the pretrained embedder (the part of ms that transforms the source data to sequence features) and f t : X t → Ẋ be a randomly initialized target embedder with architecture discussed in the previous section. We train f t to minimize the expected OTDD between the embedding-label distributions (f t(xt), yt) and (fs(xs), ys). That is, for both datasets,
we first represent each class label as a distribution over the in-class features: y ↦ P(Ẋ | Y = y). This transforms the source and target label sets into the shared space of distributions over Ẋ. Then, we can define the distance dY(yt, ys) between different labels using the p-Wasserstein distance associated with a metric dẊ over the feature space, e.g., the l2 distance ∥ẋt − ẋs∥22. This allows us to measure the difference between distributions in Ẋ × Y using the following p-Wasserstein metric:
$$d_{\dot{\mathcal{X}}\times\mathcal{Y}}\big((\dot{x}^t, y^t), (\dot{x}^s, y^s)\big) = \Big(d_{\dot{\mathcal{X}}}(\dot{x}^t, \dot{x}^s)^p + d_{\mathcal{Y}}(y^t, y^s)^p\Big)^{1/p}. \qquad (1)$$
Plugging this into the OT formulation leads to the OTDD over Ẋ ×Y , which we optimize to learn f t. Leveraging the clustering structure of the datasets, OTDD provides a better distance estimation between the target and source data, demonstrating better alignment ability in practice. In Section 4.2.1, we examine several distance metrics for learning the embedder. While data alignment generally improves downstream performance for all metrics, OTDD leads to the best empirical results.
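A minimal sketch of this embedder-pretraining stage is shown below. The otdd_loss argument stands in for a differentiable OTDD implementation (e.g., the released code referenced in Appendix A.3.4); the optimizer choice and loop structure are illustrative rather than the exact training script.

```python
import torch

def pretrain_target_embedder(f_t, f_s, target_loader, source_loader, otdd_loss,
                             epochs=60, lr=1e-4):
    """Stage 2 of ORCA: train the target embedder f_t so that (f_t(x_t), y_t)
    is close to the frozen source features (f_s(x_s), y_s) under OTDD."""
    opt = torch.optim.AdamW(f_t.parameters(), lr=lr)
    f_s.eval()
    for _ in range(epochs):
        for (x_t, y_t), (x_s, y_s) in zip(target_loader, source_loader):
            with torch.no_grad():
                z_s = f_s(x_s)            # frozen pretrained embedder on proxy source data
            z_t = f_t(x_t)                # trainable target embedder
            loss = otdd_loss(z_t, y_t, z_s, y_s)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return f_t
```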
As for the computational cost of embedding learning, we analyze the complexity of OTDD in Appendix A.1 and show that this stage takes much less time than the later fine-tuning stage in Appendix A.5. Thus, ORCA achieves a significant performance gain at a low cost for feature alignment.
3.3 FINE-TUNING FOR DOWNSTREAM ADAPTATION
After training the embedder, we perform full fine-tuning by updating all model parameters to minimize the target loss. This step further aligns the embedder and predictor with the pretrained model to improve downstream performance. We perform an ablation study comparing ORCA to standard fine-tuning without feature matching in Section 4.2.1 and show that our approach improves prediction accuracy and reduces performance variance. There are orthogonal lines of work that study how to best fine-tune a pretrained model (e.g., Liu et al., 2022; He et al., 2022). We compare with one strategy used in FPT (Lu et al., 2022b) in Section 4.2.2 but leave further exploration for future work.
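The final stage is ordinary full fine-tuning. A schematic version is given below; the function, names, and hyperparameters are illustrative, not the exact training code.

```python
import torch

def finetune_full_model(embedder, body, head, target_loader, task_loss,
                        epochs=100, lr=1e-5):
    """Stage 3 of ORCA: jointly update the embedder, pretrained body, and predictor
    on the target task loss after the embedder has been aligned."""
    model = torch.nn.Sequential(embedder, body, head)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in target_loader:
            loss = task_loss(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```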
3As a concrete example, for an image tensor with embedding convolution kernel size k, the linear layer will yield an output of shape (S, k2K), which we transpose, pad, and reshape to (k2K,Hout,Wout). Finally, we apply pixelshuffle (Shi et al., 2016) to get an output of shape (K,Hin,Win).
4 EXPERIMENTS
Having introduced how ORCA addresses the dimension mismatch between the target and source datasets via architecture design and tackles the distribution mismatch via embedder learning, we now proceed with showing its empirical effectiveness. In the following, we will demonstrate that ORCA is the first approach that allows practitioners to obtain models better than hand-designed, AutoML-searched, and general-purpose architectures on a variety of diverse tasks. Then, we will analyze key components of ORCA to better understand the mechanism underlying cross-modal transfer.
Experiment Protocol. While our workflow accepts a wide range of pretrained transformers as model bodies, we use RoBERTa (Liu et al., 2019b) and Swin Transformers (Liu et al., 2021c), which are representatives of the most studied language and vision modalities, to exemplify ORCA’s efficacy. We implement the base models, which have around 100 million parameters, and use the pretrained weights available in the Hugging Face transformers library (Wolf et al., 2019). As stated in the introduction, we do not use the exact pretraining data to represent the source modalities because they are often not publicly available and can be too large to compute OTDD efficiently. We use the proxy datasets: CoNLL-2003 for RoBERTa and CIFAR-10 for Swin.
For each task, we first apply the hyperparameter tuning algorithm ASHA (Li et al., 2020a) to the standard fine-tuning baseline (“Fine-tuning” in Table 3) to identify suitable batch size, optimizer, learning rate, and weight decay. These hyperparameters are then applied to all fine-tuning baselines as well as ORCA. During embedder learning, while classification tasks naturally come with discrete labels required for computing OTDD, for dense prediction tasks where labels are high-dimensional maps, we perform clustering on the dense maps to generate pseudo labels, which not only preserves the intrinsic distribution of the target data but also speeds up OTDD computation. We manage our experiments using the Determined AI platform. All experiments are performed on NVIDIA V100 GPUs and results are averaged over 5 random seeds. For other experiment details, see Appendix A.3.
4.1 CAN PRETRAINED MODELS TRANSFER ACROSS MODALITY TO SOLVE DIVERSE TASKS?
In this section, we highlight the most important observation of this work: cross-modal fine-tuning with ORCA can solve a variety of tasks effectively and efficiently. To demonstrate this, we evaluate ORCA on 13 tasks detailed below. We first include 10 tasks from NAS-Bench-360, which covers problems such as PDE solving, protein folding, and cardiac disease detection. This benchmark contains tasks for 1D and 2D classification and 2D dense prediction, but not 1D dense prediction, so we added JSB Chorales, a music modeling dataset widely used for evaluating recurrent networks (Chung et al., 2017; Bai et al., 2018). We also added ListOps (parsing math expressions) (Tay et al., 2021) and Homology (classifying protein structure) (Rao et al., 2019) for comparison with FPT. Together, these 13 tasks represent a wide collection of modalities for comprehensive evaluation.
Following the taxonomy in Table 1, we consider three classes of baselines: (1) hand-designed expert architectures for each task, as identified by Tu et al. (2022), Rao et al. (2019), and Tay et al. (2021); (2) general-purpose models, as represented by Perceiver IO (Jaegle et al., 2022b); and (3) AutoML baselines, as represented by those evaluated in NAS-Bench-360 and DASH (Shen et al., 2022). We will compare with FPT later, the only remaining approach with a general workflow from Table 1.
In Table 2, we report the prediction error for each method on each task. ORCA achieves the lowest error rate on 10 of 13 tasks and is the most effective in terms of aggregated performance. This is also supported by the performance summary in Figure 2. More specifically, we outperform all hand-designed architectures on all tasks except ECG, where we rank second but do much better than the other methods.
4CoNLL-2003 is for named entity recognition. It is used to interpret language models (Jawahar et al., 2019). 5NAS-Bench-360 is designed for testing how well ML algorithms can generalize and is a core component
of the 2022 AutoML Decathlon competition. For a summary of included tasks, see Table 6 in the Appendix.
We also beat all AutoML baselines on all tasks except DeepSEA and NinaPro, where ORCA is second and third, respectively. The improvements from ORCA come at a small computational overhead associated with pretraining the embedder to match the source and target modalities. Table 5 in the Appendix shows the time needed for embedder learning with OTDD, which is a small portion (10.2% on average) of the fine-tuning time. ORCA’s efficiency and its state-of-the-art results on 10 tasks make it a practical tool for model development in diverse areas.
Our experiments further validate the findings in Lu et al. (2021) that pretrained transformers can learn knowledge transferable to seemingly unrelated tasks. In the following, we delve into the mechanism of ORCA to provide intuition for necessary components of successful cross-modal learning.
4.2 KEY FACTORS FOR SUCCESSFUL CROSS-MODAL TRANSFER
Here, we dissect the success of cross-modal transfer with ORCA through a series of ablation studies. As a preview, we identify three aspects to be key to its success: data alignment through embedding learning, full fine-tuning of all model weights, and suitable pretrained model selection.
4.2.1 MATCHING FEATURE DISTRIBUTIONS HELPS ADAPTATION
Table 2 shows that ORCA leads to effective transfer using the proposed three-stage workflow. However, as learning the embedder via OTDD is an instantiation of the general first-align-then-fine-tune paradigm, we ask the question: does cross-modal transfer work because of the specific application of OTDD or the core approach of data alignment across modalities?
To answer this question, we perform an ablation study on the embedding learning metrics and compare their performance to fine-tuning without embedder pretraining. We experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and Euclidean distance. The performance profile is shown in Figure 3, and the detailed results are shown in Appendix A.4. We highlight the following observations. First, bringing the target modality closer to the pretraining modality generally aids cross-modal transfer, regardless of which metric we minimize. This is evident from the fact that pretraining the embedder with any of the metrics can outperform vanilla fine-tuning without embedder learning on many
tasks. Second, among the evaluated metrics, OTDD leads to the best overall performance. This is why we use it in our workflow. The middle rows of Table 3 demonstrate that ORCA with OTDD consistently outperforms naive fine-tuning. This supports our argument that closing the gap between a new modality and the pretraining modality can facilitate a model’s adaptation to a new task.
To further isolate the impact of data alignment, we compare ORCA with a train-from-scratch baseline, which trains RoBERTa and Swin using only the target data for the same number of epochs. Table 3 shows that train-from-scratch is better than fine-tuning but worse than ORCA on many tasks like Satellite and DeepSEA, indicating that when the target modality differs significantly from the pretraining modality, naive fine-tuning may harm transfer, but aligning the feature distribution using ORCA can resolve this issue and benefit transfer. Indeed, recent work has shown that optimizing directly for the task loss may distort the pretrained weights and lead to suboptimal solutions (Kumar et al., 2022; Lee et al., 2022). By manipulating the target data distribution to look like the source distribution, we can lower the risk of catastrophic forgetting, which may explain the success of ORCA.
Lastly, we perform experiments to quantify the effect of data alignment from a task-wise perspective. We train the embedder for different numbers of epochs before fine-tuning to see how optimizing OTDD to various levels of convergence affects downstream performance. Figure 4 plots the fine-tuning accuracy along with the final OTDD objective for different levels of embedder pretraining. Evidently, as the dataset distance decreases, the final fine-tuning accuracy increases. This correlation supports the effectiveness of embedder learning for cross-modal transfer. In addition, we observe that learning the embedder prior to fine-tuning can stabilize training, as the performance variance of ORCA is consistently lower than that of standard fine-tuning.
4.2.2 FINE-TUNING ALL MODEL PARAMETERS FOR CROSS-MODAL TASKS
As discussed in Section 2, Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a related work that showed pretrained language models contain knowledge relevant for out-of-modality tasks. While FPT presented a general pipeline that transfers GPT-2 to tasks like CIFAR-10, Homology, and ListOps, the resulting models were not as good as those directly trained on the target data. FPT differs from ORCA in that (1) it does not pretrain an embedder for task-specific adaptation and (2) it only fine-tunes the layer norms. We have already verified the importance of (1). Now, to isolate the impact of (2), we evaluate ORCA with fine-tuning the layer norms vs. FPT on our task set.
The bottom rows of Table 3 show ORCA with fine-tuning just the layer norms outperforms FPT, indicating pretraining the embedding layers boosts the cross-modal performance of FPT. However, this performance gain is smaller than that seen in the full fine-tuning setting, which implies that full fine-tuning can take better advantage of the learned embeddings. Also, partial fine-tuning is less effective than full fine-tuning on all tasks except for DeepSEA. This exception might be due to the fact that full fine-tuning without learned embeddings is more prone to overfitting. In terms of runtime, FPT only results in less than 2x speedups compared with full fine-tuning (see Appendix A.5), despite the fact that we are updating significantly fewer parameters. This is unsurprising since gradients are still back-propagated through the entire network. Therefore, when computation allows, we recommend using ORCA with full fine-tuning for better downstream performance.
4.2.3 PRETRAINING MODALITY CAN AFFECT TRANSFER PERFORMANCE
Finally, we study how the pretraining modality affects fine-tuning performance. For experiments in Table 2, we chose pretrained models for each task based on the input dimension, i.e., we use RoBERTa for all 1D tasks and Swin for all 2D tasks. Now, we can switch the model bodies and apply ORCA. This is easy to implement because ORCA is model-agnostic and the embedder architecture
handles all necessary input transformation to obtain sequence features. As shown in Table 4, fine-tuned RoBERTa outperforms fine-tuned Swin on the 1D task, and the final OTDD objective for RoBERTa is also smaller than that of Swin. We hypothesize that this is because the considered DeepSEA data (genomics sequences) are structured more like language than images with discrete units of information and general grammatical rules. The FPT paper observes a similar trend for Homology. As for the 2D tasks, we again notice that models with better fine-tuning accuracy have smaller OTDDs. This suggests a way of selecting pretrained models from a predefined model hub for each task, e.g., by comparing the optimized OTDDs and picking the one with the smallest value.
Case Study: Low-Data Regime. Now that we have a better understanding of ORCA, recall that one of our motivations for transferring pretrained models to various modalities is to help task-solving in data-limited regimes, where training models from scratch can be challenging. To this end, we investigate whether ORCA can facilitate fine-tuning large-scale models on small target datasets.
Indeed, for vanilla fine-tuning, a small amount of data may not give enough signal to update the pretrained weights. However, it is possible to obtain a good feature embedder with the same amount of data using ORCA, which can then make fine-tuning easier. In Figure 5, we vary the amount of target data and plot the performance of ORCA and vanilla fine-tuning. The performance gain of ORCA increases as the amount of data used decreases. This shows that fine-tuning does suffer from limited data, but ORCA can considerably alleviate the problem and improve downstream performance. Moreover, ORCA allows us to use a third of the data to match the performance of standard fine-tuning. Thus, it can benefit model development in domains where data collection is costly.
Discussion and Future Work. We identify several future directions based on our experiment results. First, it is worth studying the effect of pretraining modality further and developing a systematic way of selecting pretrained models. Then, we can incorporate model selection into ORCA for a more automated transfer pipeline. Second, while ORCA leverages the simplest fine-tuning paradigm, we believe it is possible to combine it with more sophisticated transfer techniques such as adapters (He et al., 2022). We briefly study how prompting (Bahng et al., 2022; Jia et al., 2022) can be applied to diverse tasks in Appendix A.6 and find that it is in general less effective for out-of-modality problems, so we can possibly boost its performance using ORCA. Lastly, we currently evaluate ORCA on diverse 1D/2D tasks and in-modality vision tasks (Appendix A.7). It is also important to validate it on more settings, such as high-dimensional problems and reinforcement learning (Reid et al., 2022).
5 CONCLUSION
In this paper, we argue that an important step towards developing more general ML methods is to study how we can reuse existing models effectively for new and less-explored tasks. To this end, we propose a novel framework that allows transferring pretrained transformers to distinct downstream modalities. Our method, ORCA, can map target data from an arbitrary end task’s modality to a model’s pretraining modality to improve fine-tuning performance. We believe that this work not only signals the potential of large-scale pretraining for diverse tasks but also lays out a path for a largely uncharted data-centric paradigm in machine learning.
A APPENDIX
A.1 EMBEDDING LEARNING WITH OPTIMAL TRANSPORT DATASET DISTANCE
A.1.1 LITERATURE REVIEW
Due to the limited space, we do not give a full review of the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) in the main text. Here, we briefly recall the optimal transport (OT) distance and explain OTDD in detail.
Consider a complete and separable metric space X and let P(X ) be the set of probability measures on X . For α, β ∈ P(X ), let Π(α, β) be the set of joint probability distributions on X × X with marginals α and β in the first and second dimensions respectively. Then given a cost function c(·, ·) : X × X → R+, the classic OT distance with cost c is defined by:
$$\mathrm{OT}_c(\alpha, \beta) := \min_{\pi \in \Pi(\alpha, \beta)} \int_{\mathcal{X} \times \mathcal{X}} c(x, y)\, d\pi(x, y). \qquad (2)$$
When X is equipped with a metric dX, we can use c(x, y) = dX(x, y)^p for some p ≥ 1 and obtain the p-Wasserstein distance $W_p(\alpha, \beta) := \big(\mathrm{OT}_{d_{\mathcal{X}}^p}(\alpha, \beta)\big)^{1/p}$.
Now consider the case of finite datasets with features in X and labels in a finite set Y . Each dataset can be considered a discrete distribution in P(X × Y). To define a distance between datasets, a natural approach is to define an appropriate cost function on Z := X × Y and consider the optimal transport distance. Indeed, for any metric dY on Y and any p ≥ 1, Z can be made a complete and separable metric space with metric
$$d_{\mathcal{Z}}\big((x, y), (x', y')\big) = \big(d_{\mathcal{X}}(x, x')^p + d_{\mathcal{Y}}(y, y')^p\big)^{1/p} \qquad (3)$$
It is usually not clear how to define a natural distance metric in Y, so instead we proceed by representing each class y ∈ Y by P(X|Y = y), the conditional distribution of features X given Y = y. More specifically, for a dataset D ∈ P(X × Y), denote this map from classes to conditional distributions by F(D, ·) : Y → P(X). Then we can transform any dataset over X × Y into one over X × P(X) via G(D) := (projX, F(D, projY)). As discussed above, Wp is a natural notion of distance in P(X), so by substituting Y ↦ P(X) and dY ↦ Wp in Equation 3, we can define the (p-)optimal transport dataset distance between datasets DA and DB by
$$\mathrm{OTDD}(\mathcal{D}_A, \mathcal{D}_B) := \mathrm{OT}_{(d_{\mathcal{X}}^p + W_p^p)^{1/p}}\big(G(\mathcal{D}_A), G(\mathcal{D}_B)\big) \qquad (4)$$
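For illustration, the definition above can be computed directly with the POT library on small labeled datasets. This brute-force sketch is quadratic in dataset size and is not the implementation used in the paper (which relies on the OTDD authors' released code), but it mirrors Equations 3 and 4.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def label_to_label_distances(feats_a, labels_a, feats_b, labels_b, p=2):
    """W_p^p between the per-class feature distributions of two labeled datasets."""
    classes_a, classes_b = np.unique(labels_a), np.unique(labels_b)
    W = np.zeros((len(classes_a), len(classes_b)))
    for i, ca in enumerate(classes_a):
        for j, cb in enumerate(classes_b):
            Xa, Xb = feats_a[labels_a == ca], feats_b[labels_b == cb]
            M = ot.dist(Xa, Xb, metric="euclidean") ** p
            W[i, j] = ot.emd2(ot.unif(len(Xa)), ot.unif(len(Xb)), M)
    return W

def otdd(feats_a, labels_a, feats_b, labels_b, p=2):
    """Exact OTDD of Equation 4: combine feature and label distances as in Eq. 3."""
    W = label_to_label_distances(feats_a, labels_a, feats_b, labels_b, p)
    idx_a = np.searchsorted(np.unique(labels_a), labels_a)
    idx_b = np.searchsorted(np.unique(labels_b), labels_b)
    M_feat = ot.dist(feats_a, feats_b, metric="euclidean") ** p
    M = (M_feat + W[np.ix_(idx_a, idx_b)]) ** (1.0 / p)   # ground cost d_Z with d_Y -> W_p
    return ot.emd2(ot.unif(len(feats_a)), ot.unif(len(feats_b)), M)
```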
A.1.2 COMPUTATIONAL CONSIDERATIONS
As we aim for a practical fine-tuning workflow, computational cost is a crucial concern. While Alvarez-Melis & Fusi (2020) proposed two variants of OTDD (the exact one and a Gaussian approximation), we observe from our experiments that optimizing the exact OTDD leads to better performance. In the following, we will focus on analyzing the computational cost of the exact OTDD.
Given datasets with D-dimensional feature vectors, estimating vanilla OT distances can be computationally expensive and has a worst-case complexity of O(D^3 log D) (Pele & Werman, 2009). However, the problem obtained by adding an entropy regularization term ϵH(π|α⊗β) to Equation 2, where H is the relative entropy and ϵ controls the time-accuracy trade-off, can be solved efficiently with the Sinkhorn algorithm (Cuturi, 2013). This reduces OT’s empirical complexity to O(D^2) and makes the time cost for computing OTDD manageable for ORCA’s workflow.
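In POT, for example, swapping the exact solver for the entropy-regularized one amounts to a call such as the following; the regularization strength eps is a tuning knob, and this snippet is only illustrative rather than the exact routine used by the OTDD code.

```python
import ot

def regularized_ot_cost(cost_matrix, n_a, n_b, eps=0.1):
    """Entropy-regularized OT (Cuturi, 2013) solved with Sinkhorn iterations,
    trading a small bias for a much cheaper per-iteration solver."""
    a, b = ot.unif(n_a), ot.unif(n_b)
    return ot.sinkhorn2(a, b, cost_matrix, reg=eps)
```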
During implementation of ORCA, we also observed memory issues for computing OTDD using the entire target and source datasets on GPUs. To alleviate this, we propose a class-wise subsampling strategy for approximating OTDD on GPUs (Algorithm 1). In short, we split the K-class target dataset into K datasets based on the labels and compute the class-wise OTDD between each single-class target dataset and the entire source dataset. Each class-wise OTDD can be approximated with the average of batch samples similar to how stochastic gradient descent approximates gradient descent. After that, we approximate the OTDD between the target and source datasets using the
Algorithm 1 Efficient approximation of OTDD using class-wise subsampling.
Input: target dataset {xt, yt}, number of target classes Kt, source dataset S = {xs, ys}, subsample size b, subsample rounds R
for each class i ∈ [Kt] in the target dataset do
weighted sum of the K class-wise OTDDs. To verify that the approximation works empirically, we track the approximated OTDD (computed on GPUs) and the actual OTDD (computed on CPUs) and visualize the loss curves during ORCA’s embedder learning process (Figure 6). We can see that the estimated value adheres to the actual value.
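A sketch of this class-wise approximation is given below. The otdd_fn callback stands for any routine that returns the OTDD between two labeled batches, and the frequency-based class weights are our reading of the "weighted sum" above, so treat these details as assumptions rather than the exact Algorithm 1.

```python
import numpy as np

def classwise_otdd(target_feats, target_labels, source_feats, source_labels,
                   otdd_fn, batch_size=512, rounds=5, seed=0):
    """Approximate OTDD(target, source) as a weighted sum of per-class OTDDs,
    each estimated from a few random subsampled batches (cf. Algorithm 1)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(target_labels, return_counts=True)
    weights = counts / counts.sum()
    total = 0.0
    for cls, w in zip(classes, weights):
        idx = np.where(target_labels == cls)[0]
        estimates = []
        for _ in range(rounds):
            t = rng.choice(idx, size=min(batch_size, len(idx)), replace=False)
            s = rng.choice(len(source_feats), size=min(batch_size, len(source_feats)), replace=False)
            estimates.append(otdd_fn(target_feats[t], target_labels[t],
                                     source_feats[s], source_labels[s]))
        total += w * float(np.mean(estimates))   # average over rounds, weight by class size
    return total
```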
Leveraging both the Sinkhorn algorithm and class-wise approximation, the embedder learning process only takes up a small fraction of the total fine-tuning time in practice, as shown in Table 5. Hence, we invest a reasonable time budget but achieve significantly improved cross-domain transfer performance using ORCA.
A.2 INFORMATION ABOUT EVALUATION TASKS
A.3 EXPERIMENT DETAILS
Below, we summarize details for implementing ORCA and evaluating it on the selected 13 tasks. The code and configuration file for reproducing each experiment can be found in the supplementary material. We will also release ORCA’s best checkpoint for each task later.
A.3.1 PRETRAINED MODELS
We evaluated ORCA with two pretrained models in our experiments. In Table 2, for all 2D tasks including CIFAR-100, Spherical, Darcy Flow, PSICOV, Cosmic, NinaPro, and FSD50K, we use the following model. As Swin has a pretrained resolution, we reshape the inputs for our tasks to the resolution before feeding them into the model.
Swin-base (Liu et al., 2021c): pretrained on ImageNet-22K at resolution 224×224; 88M parameters; 15.4G FLOPS; 278 FPS.
For all 1D tasks including ECG, Satellite, DeepSEA, JSB Chorales, ListOps, and Homology, we use the following model:
RoBERTa-base (Liu et al., 2019b): pretrained on five English-language corpora; 125M parameters; 1.64E20 FLOPS.
We use the Hugging Face transformers library (Wolf et al., 2019) to implement the pretrained models.
A.3.2 TASK DATA PREPARATION
For all the NAS-Bench-360 tasks, each dataset is preprocessed and split using the script available on https://github.com/rtu715/NAS-Bench-360, with the training set being used for hyperparameter tuning, embedding learning, and fine-tuning. We obtain the data processing script for JSB data from https://github.com/locuslab/TCN, for ListOps from https://github.com/kzl/universal-computation, and for Homology from https://github.com/songlab-cal/tape.
A.3.3 HYPERPARAMETER TUNING
As ORCA is both task-agnostic and model-agnostic, it can be applied to fine-tuning a variety of pretrained transformers on drastically different end tasks with distinct datasets. Hence, it is hard to define one set of fine-tuning hyperparameters for all (model, task) pairs. At the same time, optimizing large-scale pretrained transformers can be challenging due to their large model sizes, as the downstream performance depends largely on the hyperparameters used. For instance, using a large learning rate can distort pretrained weights and lead to catastrophic forgetting. Therefore, in our experiments, given a (model, task) pair, we first apply hyperparameter tuning using the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a) to the standard fine-tuning setting (i.e., after initializing the embedder and predictor architectures, directly updating all model weights to minimize the task loss) to identify a proper training configuration. Then, we use the same set of hyperparameters found for all our experiments for the particular (model, task) combination. Note that even though we did not explicitly state this in the main text, the hyperparameter tuning stage can be directly integrated into the ORCA workflow between stage 1 and stage 2. In this sense, ORCA is still an automated cross-modal transfer workflow that works for diverse tasks and different pretrained models.
The configuration space for ASHA is as follows:
• Batch size: 32, 128, 512, 1024 for Swin; 16, 56, 256, 512 for RoBERTa
• Optimizer: SGD, Adam, AdamW
• Learning rate: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
• Weight decay: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
Note that to fit each experiment on a single GPU, we set a fixed batch size (32 for Swin and 16 for RoBERTa) and vary the gradient accumulation step instead of actually varying the batch size, but the effect is the same.
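A minimal sketch of this gradient-accumulation trick is shown below; the function name and structure are illustrative.

```python
def train_with_accumulation(model, loader, loss_fn, opt, accum_steps):
    """Emulate an effective batch size of loader_batch * accum_steps on one GPU."""
    model.train()
    opt.zero_grad()
    for step, (x, y) in enumerate(loader):
        loss = loss_fn(model(x), y) / accum_steps   # rescale so gradients match the large batch
        loss.backward()
        if (step + 1) % accum_steps == 0:
            opt.step()
            opt.zero_grad()
```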
A.3.4 ORCA ONLY: EMBEDDING LEARNING WITH OTDD
After initializing the embedder architecture for each task, we train it to minimize the OTDD between the embedded target features and embedded source features.
For source datasets, we use CIFAR-10 for Swin and CONLL-2003 for RoBERTa. We sample 5000 data points to compute OTDD. In practice, we can pass the source data through the pretrained embedder once and save all the embedded features, so we don’t have to pay the cost of obtaining the source features each time we fine-tune a new model.
For classification tasks, we directly use the labels provided by the end task to compute OTDD. For dense tasks, we perform K-Means clustering on the target data to obtain pseudolabels for OTDD computation. The number of clusters is set to the number of classes of the source dataset, e.g., 10 for 2D tasks that use CIFAR-10 as the source dataset.
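A hedged sketch of this pseudo-labeling step is given below, assuming the dense label maps are flattened per example before clustering (the exact preprocessing is not spelled out here).

```python
import numpy as np
from sklearn.cluster import KMeans

def dense_pseudo_labels(dense_maps, num_clusters=10, seed=0):
    """Cluster flattened dense maps so each example gets a discrete pseudo label
    for OTDD; num_clusters follows the source label set (e.g., 10 for CIFAR-10)."""
    flat = np.asarray(dense_maps).reshape(len(dense_maps), -1)
    km = KMeans(n_clusters=num_clusters, random_state=seed, n_init=10)
    return km.fit_predict(flat)
```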
To compute the embedding learning objective, we use the OTDD implementation of the original paper provided here: https://github.com/microsoft/otdd. As for the hyperparameters, we use the batch size, learning rate, optimizer, and weight decay obtained from A.3.3. The others are fixed across different tasks:
• Embedding learning epochs: 60
• Learning rate scheduler: decay by 0.2 every 20 epochs
A.3.5 FINE-TUNING
Besides the searched hyperparameters, we also fix the following hyperparameters for fine-tuning.
• Fine-tuning epochs: 100 for Swin tasks, 60 for RoBERTa tasks
• Learning rate scheduler: we use the linear decay with min lr = 0 and 5 warmup epochs
A.3.6 TRAIN-FROM-SCRATCH
This baseline is trained using the same hyperparameter configuration (number of epochs, batch size, learning rate, etc) as the fine-tuning baseline.
A.3.7 EVALUATION
When training/fine-tuning is finished, we evaluate the performance of all models following the NAS-Bench-360 protocol. We first report results of the target metric for each task by running the model of the last epoch on the test data. Then, we report aggregate results via performance profiles (Dolan & Moré, 2002), a technique that considers both outliers and small performance differences to compare methods across multiple tasks robustly. In such plots, each curve represents one method. The τ on the x-axis denotes the fraction of tasks on which a method is no worse than a τ-factor from the best. The performance profile for our experiments is shown in Figure 2.
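For reference, a performance profile of this kind can be computed with a few lines of numpy; this generic sketch is not the NAS-Bench-360 evaluation script itself.

```python
import numpy as np

def performance_profile(errors, taus):
    """errors: (num_methods, num_tasks) array of test errors (lower is better).
    Returns rho[m, i], the fraction of tasks on which method m is within a
    factor taus[i] of the best method on that task (Dolan & More, 2002)."""
    errors = np.asarray(errors, dtype=float)
    ratios = errors / errors.min(axis=0, keepdims=True)   # per-task performance ratios >= 1
    return np.stack([(ratios <= t).mean(axis=1) for t in taus], axis=1)
```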
A.4 ABLATION STUDY ON EMBEDDING LEARNING METRICS
As motivated in Section 4.2.1, we present here an ablation study on the embedding learning metrics that we have considered for minimizing distribution dissimilarity. The results show that (1) performing feature alignment generally helps downstream adaptation, regardless of which metric we minimize; (2) OTDD leads to the best overall performance, so we chose it for our workflow. Our findings confirm that it is the general idea of data alignment, rather than a specific metric, that makes cross-modal transfer work.
Specifically, we experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and (pairwise) Euclidean distance. We learn the embedders to minimize these metrics and then fine-tune the pretrained models. The test errors are as follows.
A.5 RUNTIME OF ORCA VS. FPT
In Table 3, we compare with the FPT setting, which only fine-tunes the layer norms of the pretrained transformer models. As we have shown already, the downstream performance of fine-tuning only a subset of the parameters is less competitive than fine-tuning all parameters. Below, we show that the time saved for updating only layer norms is also not that significant. Therefore, we suggest performing full fine-tuning when time and computational resources allow.
A.6 PROMPTING
Apart from fine-tuning, a new paradigm of working with large-scale pretrained models is prompting, i.e., we do not update the pretrained weights but only modify the input and query the model for the desired output. Existing language prompting methods (e.g., Liu et al., 2022) are generally not suitable for cross-modal learning due to the difficulty of designing natural prompts for diverse data types. For the 1D tasks we study, there is not even a notion of “discrete tokens.” Another line of work studies visual prompting by modifying 2D inputs for querying vision transformers. We test two such algorithms, VP (Bahng et al., 2022) and VPT (Jia et al., 2022), on three classification tasks in our task suite. They are not applicable to the remaining tasks because either the inputs cannot be reshaped to look like images or the outputs are not classification logits.
We test VPT with the pretrained Swin-Base Transformer (the same model we used for ORCA) and VP with the pretrained ResNet-50 (as the official implementation does not support vision transformers). The results are shown in Table 9. In general, prompt tuning is less effective than fine-tuning, and the two baselines perform significantly worse than ORCA. This is not surprising given that prompting methods are more intuitively suited to in-modality transfer, where the target and the source data have similar structure or semantic meaning. However, when the target data (e.g., electromyography signals, as in the NinaPro dataset) is drastically different from image data, it is difficult to design prompts or expect good performance by only modifying the inputs without fine-tuning the pretrained models.
A.7 COMPATIBILITY WITH IN-MODALITY TRANSFER
A natural question to ask is whether ORCA can also tackle in-modality tasks. While we design ORCA to enable cross-modal transfer, we hypothesize that it should facilitate same-modality transfer if two domains have large dataset distance. To validate this, we test ORCA on DomainNet
datasets, which are commonly used to evaluate homogeneous DA methods (Peng et al., 2019). From Table 10, we can see that ORCA achieves significantly better performance than the fine-tuning baseline, which shows that the feature matching of ORCA can also help in-domain generalization. | 1. What is the main contribution of the paper, and how does it address the cross-modal transfer problem?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding the use of pre-trained models?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's experimental results and comparisons with other works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper introduces a new approach, ORCA, to tackle the cross-modal transfer problem. ORCA equips a pre-trained transformer with a task-specific embedder and a task-specific predictor. When facing a downstream task, the embedder is trained to map data of any dimension to a standard sequence of tokens that is then fed into the pre-trained transformer. The predictor is trained to map the output to the specific label space. The learning objective for the embedder is an optimal transport dataset distance that is used to match the source and target data distributions. After matching, the whole network is fine-tuned using the standard task loss.
Strengths And Weaknesses
Strength: As pre-trained models become increasingly powerful, there exists a desire to apply these models to aid the learning of downstream tasks. But the mismatch of data distributions across modalities hinders this application. This paper tackles this challenging and important problem and gives a simple but effective solution. The solution leverages the property of transformers that they have a standard input structure without any data-specific inductive bias, and proposes to learn an embedder that maps any input to match that structure. To better align different modalities, the authors further use optimal transport to match the source and target datasets. Although simple, this adaptation scheme is reasonable, and the authors give enough empirical evidence that this approach is effective. The experiments show that ORCA performs generally better than SOTA methods.
Weakness: One concern is the use of pre-trained models, which may make for an unfair comparison with other methods. In particular, the pretrained models the authors choose are RoBERTa and Swin Transformer, trained on five English-language corpora and ImageNet-22K, respectively. This may differ from other methods, which potentially use different datasets for pre-training. So can the authors provide some comparisons with state-of-the-art methods trained on the same dataset using identical architectures? I understand that such large-scale experiments are difficult to reimplement, but such comparisons would be greatly appreciated and would bolster the work.
Clarity, Quality, Novelty And Reproducibility
Clarity, quality: The paper is well organized and well written. Novelty: As pre-trained models become increasingly powerful, there exists a desire to apply these models to aid the learning of downstream tasks. But the mismatch of data distribution from different modalities hinders the application. This paper tackles this challenging and important problem and gives a simple but effective solution. Reproducibility: The code and configuration file for reproducing each experiment can be found in the supplementary material. |
ICLR | Title
Tackling Diverse Tasks via Cross-Modal Transfer Learning
Abstract
Fine-tuning large-scale pretrained models has led to remarkable progress in well-studied modalities such as vision and NLP. However, similar gains have not been observed in many other tasks due to an assumed lack of relevant pretrained models for these diverse modalities. In this work, we revisit this assumption by studying the cross-modal transfer ability of large-scale pretrained models. We introduce ORCA, a general cross-modal fine-tuning workflow that enables fast and automatic exploitation of existing pretrained models for diverse tasks. ORCA achieves task-specific adaptation by performing data alignment before fine-tuning: it learns an embedding network that minimizes the optimal transport dataset distance between the end-task data and the pretraining data to close the modality gap. Through extensive experiments, we show that ORCA is the first viable approach that allows practitioners to use pretrained models to outperform hand-designed, AutoML-searched, and general-purpose architectures—ORCA obtains state-of-the-art results on 10 of 13 diverse tasks we evaluate and ranks among the top three on the others. We shed light on why cross-modal transfer works by quantifying the importance of data alignment and highlight ORCA’s utility for data-limited domains.
1 INTRODUCTION
The success of machine learning (ML) in vision and natural language processing (NLP) has spurred its application beyond these traditional ML domains to diverse tasks such as solving partial differential equations (Li et al., 2021b), music modeling (Lewandowski et al., 2012), detecting cardiac disease (Hong et al., 2020), and many others. However, progress in these less-explored areas can be challenging due to (1) limited amounts of labeled data, (2) high computational cost and human effort for developing models from scratch, and (3) a lack of relevant large-scale pretrained models, which have in many cases obviated the first two issues in vision and NLP (e.g., Devlin et al., 2019; Carion et al., 2020; Dosovitskiy et al., 2021; Liu et al., 2021b; Radford et al., 2021).
There are two common approaches for practitioners to handle these issues: automated machine learning (AutoML) techniques (e.g., Roberts et al., 2021; Shen et al., 2022) that focus on designing task-specific networks in a data-efficient manner; and multimodal general-purpose methods that either propose flexible architectures applicable to various tasks (Jaegle et al., 2022a) or expand the set of modalities for which pretrained models exist (e.g., Reed et al., 2022; Lu et al., 2022a). However, both classes of approaches require training from scratch when applied to a new modality and proceed under the assumption of a lack of relevant pretrained models for these diverse problems.
In this work, we re-examine this assumption by considering the general problem of cross-modal transfer. Our goal is to exploit existing large-scale pretrained models in data-rich modalities for solving diverse downstream tasks. A few recent works have demonstrated the potential promise of cross-modal transfer by applying language transformers to vision (Kiela et al., 2019; Dinh et al., 2022; Lu et al., 2022b), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). However, many of these approaches are ad-hoc (e.g., rely on manual prompt engineering or hand-craft new architecture components to solve specific tasks), and none of them yield models competitive with those trained from scratch. We tackle both shortcomings in our work.
We introduce a general-purpose, cross-modal transfer workflow called ORCA (Optimal tRansport Cross-modal Adaptation) that yields state-of-the-art results on a wide range of non-text and nonvision problems using pretrained transformers (Figure 1). Our key insight is to align the feature
distribution of an unfamiliar, out-of-modality dataset with that of a familiar, in-modal dataset before fine-tuning. This data alignment process not only prevents distortion of pretrained weights but also enables cross-modal knowledge transfer, as we will show via extensive experiments in Section 4. Concretely, for any downstream task, we first generate an embedding network that maps the (potentially high-dimensional) inputs to sequence features. Then, we train it to minimize the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) between the feature-label distribution of the target data and data from the pretraining domain1. Finally, we fine-tune the pretrained model and the embedding network. Using OTDD allows us to relax many distributional assumptions required by traditional domain adaptation and perform data alignment using both the feature and label information of the target data. However, we show in an ablation study in Section 4.2.1 that substituting OTDD with other distance metrics, such as maximum mean discrepancy (MMD) (Gretton et al., 2012), can also aid cross-modal transfer, albeit to a lesser extent. This implies that it is the general idea of first-align-then-fine-tune that enables ORCA to obtain significantly better results than previous cross-modal learning methods that rely on vanilla fine-tuning (Lu et al., 2022b).
We evaluate ORCA on a diverse set of 13 tasks with different input dimensions (1D and 2D), prediction types (point and dense), and modalities (vision, audio, electrocardiogram, physics, protein, genomics, cosmic-ray, and music). ORCA outperforms various competitors, including task-specific hand-designed architectures, leading AutoML methods, and general-purpose models, ranking first on 10 tasks and in the top three on all tasks. We compare ORCA with existing fine-tuning techniques and confirm that effective cross-modal transfer is only enabled by ORCA’s feature alignment process. We further reveal an empirical correlation between the alignment quality and the downstream performance. Finally, we demonstrate ORCA’s efficacy for limited-data tasks. Overall, our work not only explores the cross-modal transfer ability of pretrained models, but also establishes a practical workflow for solving diverse prediction problems efficiently and automatically.
2 RELATED WORK
In this section, we review several groups of related work in the areas of AutoML, in-modal transfer learning (unimodal domain adaptation, unimodal/multimodal fine-tuning, and general purpose methods), and cross-modal transfer learning (heterogeneous domain adaptation, task-specific finetuning, and FPT). Table 1 summarizes these groups along relevant axes, and contrasts them to ORCA.
AutoML for diverse tasks is a growing research area, as evidenced by the NAS-Bench-360 benchmark (Tu et al., 2022), along with several recent neural architecture search (NAS) methods that target this problem, e.g., AutoML-Zero (Real et al., 2020), XD (Roberts et al., 2021), and DASH (Shen et al., 2022). In contrast to these NAS methods, ORCA takes a transfer learning approach in order to leverage existing pretrained models from data-rich modalities for more esoteric tasks, rather than repeatedly incurring the overhead of designing new architectures and training them from scratch. That said, given the shared underlying motivation, our experimental evaluation makes use of the diverse
1We do not assume access to the pretraining data due to practical concerns about data access and computational efficiency. We instead work with publicly available proxy data from the pretraining modality, e.g., CIFAR-10 for models pretrained on ImageNet and CoNLL-2003 for models pretrained on larger text corpora.
tasks comprising NAS-Bench-360, and compares ORCA with its expert and AutoML baselines. We also compare against DASH, the state-of-the-art method on this benchmark.
Unimodal domain adaptation (DA) is a form of transductive transfer learning where the source and target tasks are the same but the domains differ (Pan & Yang, 2009; Wang & Deng, 2018). Many DA methods assume that the target data has the same input space and support as the source data, and are concerned with problems where the output spaces and the joint/marginal distributions differ, such as covariate and label shifts. Recent work considers more general settings such as different feature spaces (heterogeneous DA) or label spaces (universal DA). Our focus on cross-modal transfer goes one step further to the case where neither the input-space nor the output-space support overlaps.
Unimodal fine-tuning is a more flexible transfer approach that can be applied to downstream tasks with different label spaces or input spaces. Pretrained models are used for in-modality fine-tuning in NLP (e.g., Aghajanyan et al., 2021; Jiang et al., 2020), vision (e.g., Wei et al., 2022; Li et al., 2022), speech (e.g., Chen et al., 2022; Jiang et al., 2021), protein sequences (Jumper et al., 2021), and robotics (Ahn et al., 2022). Adapter networks (He et al., 2022) have been developed to improve the downstream performance of in-modality transfer. Multimodal fine-tuning expands the applicable modalities of a single pretrained model by learning embeddings of several data-rich modalities together (e.g., Lu et al., 2019; Radford et al., 2021; Hu & Singh, 2021; Kim et al., 2021; Alayrac et al., 2022). However, these approaches still focus on solving in-modality downstream tasks.
General-purpose models propose flexible architectures applicable to various tasks such as optical flow, point clouds, and reinforcement learning (Jaegle et al., 2021; 2022a; Reed et al., 2022). These approaches train multitask transformers from scratch using a large body of data from different tasks. Though more versatile than unimodal models, they still focus on transferring to problems within the pretraining modalities considered. Nonetheless, the success of transformers for in-modality finetuning motivates us to focus on adapting transformer-type architectures for cross-modal transfer.
Heterogeneous DA (HDA) considers nonequivalent feature spaces between the source and target domains. While most HDA methods are developed for same-modality-different-dimension transfer, e.g., between images of different resolutions, there are indeed a few works studying cross-modal tasks such as text-to-image (Yao et al., 2019; Li et al., 2020b). However, a crucial assumption that HDA makes is that the target and source tasks are the same. Thus, we operate in a much more flexible setting and consider knowledge transfer between drastically different domains with distinct tasks and label sets, such as applying Swin Transformers (Liu et al., 2021c) to solving partial differential equations or RoBERTa to classifying satellite images and electrocardiograms.
Cross-modal, task-specific fine-tuning is a recent line of research, with most work focusing on transferring NLP models to other modalities like vision (Kiela et al., 2019), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). These works provide initial evidence of the cross-modal transfer capacity of pretrained models. However, they focus on hand-tailoring to a single modality, e.g., by adding ad-hoc encoders that transform agent messages (Li et al., 2020c) or decision trajectories (Reid et al., 2022) into tokens. Even when not relying on fine-tuning, work like LIFT (Dinh et al., 2022) that attempts cross-modal learning via prompting (Liu et al., 2021a) still require ad-hoc conversion of tasks to natural text.
Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a general cross-modal fine-tuning workflow that transforms input features to be compatible with the pretrained models. Although FPT and ORCA are both general-purpose workflows, FPT does not account for differences between the target and pretraining modalities, which we show is necessary to achieve accurate predictive models and outperform existing baselines.
3 ORCA WORKFLOW
In this section, we first formalize the problem setup and then introduce the ORCA workflow for adapting pretrained transformers to diverse end tasks.
Problem Setup. A domain D consists of a feature space X , a label space Y , and a joint probability distribution P (X ,Y). In the cross-modal setting we study, the target (end-task) domain Dt and source (pretraining) domain Ds differ not only in the feature space but also the label space and by extension have differing probability distributions, i.e., X t ̸= X s, Yt ̸= Ys, and P t(X t,Yt) ̸= P s(X s,Ys). This is in contrast to the transductive transfer learning setting addressed by domain adaptation, where source and target domains share the label space and end task (Pan & Yang, 2009).
Given target data {xti, yti}i∈[nt] sampled from a joint distribution P t in domain Dt, our goal is to learn a model mt that correctly maps each input xt to its label yt. We are interested in achieving this using pretrained transformers. Thus, we assume access to a model ms that has been trained with data {xsi , ysi }i∈[ns] in the source domain Ds, where (xsi , ysi ) ∼ P s. Then, given a predefined loss function l, we aim to develop mt based on ms such that L(mt) = E(xt,yt)∼P t [l(mt(xt), yt)] is minimized. This problem formulation does not define modality explicitly and includes both inmodal and cross-modal transfer. Given the generality of the tasks we wish to explore, it is hard to provide a precise mathematical definition, so we rely on semantics to differentiate the two settings: intuitively, cross-modal domains (e.g., natural images vs. protein sequences) are more distinct to each other than in-modal domains (e.g., photos taken in two different geographical locations).
Having defined the learning problem, we now present our three-stage cross-modal transfer workflow: (1) architecture design to support diverse input-output dimensions, (2) embedder pretraining to align the source and target feature distributions, and (3) fine-tuning to minimize the target task loss.
3.1 TASK-SPECIFIC ARCHITECTURE DESIGN
Applying pretrained models to another downstream problem usually requires addressing the problem of mismatched dimensions. To make ORCA work for input and output tensors of different dimensions, we decompose a transformer-based learner m into three parts (Figure 1 stage 1): an embedder f that transforms input x into a sequence of features, a model body g that applies a pretrained transformer (i.e., series of attention layers) to the embedded features, and a predictor h that generates predictions with the desired output shape. ORCA uses pretrained architecture and weights to initialize the model body g but replaces f and h with layers designed to match the target data with the pretrained model’s embedding dimension. Next, we describe each module in detail.
Custom Embedding Network. Denote the feature space compatible with the pretrained model body as Ẋ . For a transformer with maximum sequence length S and embedding dimension D, Ẋ = RS×D. The embedding network f : X → Ẋ is designed to take in a tensor of arbitrary dimension from X and transform it to the feature space Ẋ . In ORCA, f is composed of a convolutional layer with input channel cin, output channel cout, kernel size k, and stride k, generalizing the patching operations used in vision transformers to 1D and higher-dimensional cases. We set cin to the input channel of x and cout to the embedding dimension D. To take full advantage of the representation power of the pretrained model, we choose the smallest k for which the product of the output shape's non-channel dimensions is at most S. That is, when we flatten the non-channel dimensions of the output tensors after the convolution, pad and then transpose it, we obtain sequence features with shape S ×D. Finally, we add a layer norm and a positional embedding to obtain ẋ.2
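To make this construction concrete, the PyTorch sketch below instantiates such an embedder for 2D inputs. The class name, the simple kernel-size search, and the example values S = 196 and D = 768 are our own illustrative choices (the actual S and D are dictated by the pretrained body), so treat it as a sketch rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder2D(nn.Module):
    """Sketch of the custom embedder f for 2D inputs: (B, C_in, H, W) -> (B, S, D)."""

    def __init__(self, c_in: int, h: int, w: int, seq_len: int, embed_dim: int):
        super().__init__()
        # Smallest kernel/stride k whose flattened (non-channel) output length fits in S.
        k = 1
        while (h // k) * (w // k) > seq_len:
            k += 1
        self.seq_len = seq_len
        self.proj = nn.Conv2d(c_in, embed_dim, kernel_size=k, stride=k)  # ViT-style patching
        self.norm = nn.LayerNorm(embed_dim)
        self.pos = nn.Parameter(torch.zeros(1, seq_len, embed_dim))      # learned positional embedding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.proj(x)                                # (B, D, H_out, W_out)
        z = z.flatten(2)                                # (B, D, H_out * W_out)
        z = F.pad(z, (0, self.seq_len - z.shape[-1]))   # pad the sequence dimension up to S
        z = z.transpose(1, 2)                           # (B, S, D)
        return self.norm(z) + self.pos

# Example: 32x32 RGB inputs mapped to a sequence of 196 tokens of dimension 768.
f = Embedder2D(c_in=3, h=32, w=32, seq_len=196, embed_dim=768)
print(f(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 196, 768])
```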
Pretrained Transformer Body. The model body g takes the embedding ẋ ∈ Ẋ as input and outputs features ẏ ∈ Ẏ; the dot is used to differentiate these intermediate representations from the raw inputs and labels. For transformer-based g, both the input and output feature spaces Ẋ , Ẏ are RS×D.
Custom Prediction Head. Finally, the prediction head h must take ẏ ∈ Ẏ as input and return a task-dependent output tensor. Different tasks often specify different types of outputs, e.g., classification tasks require logits in RK where K is the number of classes, and dense prediction tasks require dense maps with the same spatial dimension as the input and per-index logits corresponding to K classes. Thus, it is crucial to define task-specific output modules and fine-tune them when transferring to new tasks. In our workflow, we use the simplest possible instantiation of the predictor modules. For classification, we apply average pooling along the sequence length dimension (or take the classification token of language models) to obtain 1D tensors with length D and then use a linear layer that maps D to K. For dense prediction, we apply a linear layer to the sequence outputs so the resulting tensor has shape (S, kdim(Y)−1K), where kdim(Y)−1 is the downsampling factor of the embedder convolution kernel with stride k. This upsamples by the same factor that the embedder convolution downsampled. Then, we can mold the tensor to the desired output dimension.3
2As a concrete example, consider an image tensor with shape (Cin, Hin,Win). We first choose stride k for the convolution such that Hout × Wout ≈ S to get an output tensor with shape (D,Hout,Wout). Then, we flatten it to shape (D,Hout ×Wout), pad along the last dimension to shape (D,S), and transpose.
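The two predictor variants admit very small implementations. The sketch below is our illustrative PyTorch version for a body with sequence length S and dimension D; the dense head follows the pixelshuffle recipe of footnote 3, and all class and argument names are ours.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Average-pool sequence features (B, S, D) and map to K class logits."""
    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, y_dot: torch.Tensor) -> torch.Tensor:
        return self.fc(y_dot.mean(dim=1))  # (B, K)

class Dense2DHead(nn.Module):
    """Map (B, S, D) to per-pixel logits (B, K, k*H_out, k*W_out) for dense prediction."""
    def __init__(self, embed_dim: int, num_classes: int, k: int, h_out: int, w_out: int):
        super().__init__()
        self.h_out, self.w_out = h_out, w_out
        self.fc = nn.Linear(embed_dim, k * k * num_classes)  # upsample by the embedder's factor k
        self.shuffle = nn.PixelShuffle(k)  # (B, k^2*K, H_out, W_out) -> (B, K, k*H_out, k*W_out)

    def forward(self, y_dot: torch.Tensor) -> torch.Tensor:
        z = self.fc(y_dot)                           # (B, S, k^2 * K)
        z = z[:, : self.h_out * self.w_out]          # drop padded sequence positions
        z = z.transpose(1, 2).reshape(z.shape[0], -1, self.h_out, self.w_out)
        return self.shuffle(z)  # a final pad/interpolate can restore the exact input resolution

# Example continuing the embedder above: k=3 and 10x10 patches give 30x30 maps with K=5 classes.
head = Dense2DHead(embed_dim=768, num_classes=5, k=3, h_out=10, w_out=10)
print(head(torch.randn(2, 196, 768)).shape)  # torch.Size([2, 5, 30, 30])
```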
With an architecture that is based on the pretrained model but also compatible with our target task, we can now turn our attention to pretraining the embedder via matching source and target distributions.
3.2 EMBEDDING LEARNING FOR DATA ALIGNMENT
Intuitively, transferring knowledge across similar modalities should be easier than across distant ones. Hence, given a target task in a new modality, we aim to manipulate the task data so that they become closer to the pretraining modality. We use the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) to measure the closeness between datasets in different domains. Unlike the classic OT distance which operates only on the feature distributions, OTDD considers both the feature and the label information and can work even if the label sets are unrelated or disjoint. Thus, while OT is mostly used for unsupervised or semi-supervised domain adaptation (Courty et al., 2017; Yan et al., 2018), OTDD is particularly suitable for distance estimation between cross-modal labeled datasets. In the following, we will briefly explain how ORCA uses OTDD. We refer the readers to Alvarez-Melis & Fusi (2020) for a detailed exposition on the metric itself.
Formally, let fs : X s → Ẋ denote the pretrained embedder (the part of ms that transforms the source data to sequence features) and f t : X t → Ẋ be a randomly initialized target embedder with architecture discussed in the previous section. We train f t to minimize the expected OTDD between the embedding-label distributions ( f t(xt), yt ) and ( fs(xs), ys ) . That is, for both datasets,
we first represent each class label as a distribution over the in-class features: $y \mapsto P(\dot{\mathcal{X}} \mid Y = y)$. This transforms the source and target label sets into the shared space of distributions over Ẋ . Then, we can define the distance dY(yt, ys) between different labels using the p-Wasserstein distance associated with a metric dẊ over the feature space, e.g., the l2 distance $\|\dot{x}^t - \dot{x}^s\|_2^2$. This allows us to measure the difference between distributions in Ẋ ×Y using the following p-Wasserstein metric:
$$d_{\dot{\mathcal{X}} \times \mathcal{Y}}\big((\dot{x}^t, y^t), (\dot{x}^s, y^s)\big) = \Big(d_{\dot{\mathcal{X}}}(\dot{x}^t, \dot{x}^s)^p + d_{\mathcal{Y}}(y^t, y^s)^p\Big)^{1/p}. \qquad (1)$$
Plugging this into the OT formulation leads to the OTDD over Ẋ ×Y , which we optimize to learn f t. Leveraging the clustering structure of the datasets, OTDD provides a better distance estimation between the target and source data, demonstrating better alignment ability in practice. In Section 4.2.1, we examine several distance metrics for learning the embedder. While data alignment generally improves downstream performance for all metrics, OTDD leads to the best empirical results.
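To make stage 2 concrete, the sketch below trains a target embedder by minimizing a distribution distance between its token features and cached source features. For brevity, the objective shown is an RBF-kernel MMD, one of the simpler alternatives evaluated in Section 4.2.1; it ignores label information, whereas ORCA's OTDD objective additionally couples the class-conditional distributions. All names and hyperparameters here are illustrative.

```python
import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between feature batches (n, d) and (m, d) under an RBF kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def pretrain_embedder(f_t, target_loader, src_feats, epochs=60, lr=1e-4, device="cuda"):
    """Stage 2 (simplified): align embedded target features with cached source features.
    ORCA uses the exact OTDD in place of the MMD stand-in below."""
    opt = torch.optim.AdamW(f_t.parameters(), lr=lr)
    f_t.to(device).train()
    for _ in range(epochs):
        for x_t, _y_t in target_loader:                 # labels are unused by this stand-in
            z_t = f_t(x_t.to(device)).flatten(0, 1)     # (B*S, D) token features
            idx = torch.randint(len(src_feats), (len(z_t),))
            loss = rbf_mmd2(z_t, src_feats[idx].to(device))
            opt.zero_grad(); loss.backward(); opt.step()
    return f_t
```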
As for the computational cost of embedding learning, we analyze the complexity of OTDD in Appendix A.1 and show that this stage takes much less time than the later fine-tuning stage in Appendix A.5. Thus, ORCA achieves a significant performance gain at a low cost for feature alignment.
3.3 FINE-TUNING FOR DOWNSTREAM ADAPTATION
After training the embedder, we perform full fine-tuning by updating all model parameters to minimize the target loss. This step further aligns the embedder and predictor with the pretrained model to improve downstream performance. We perform an ablation study comparing ORCA to standard fine-tuning without feature matching in Section 4.2.1 and show that our approach improves prediction accuracy and reduces performance variance. There are orthogonal lines of work that study how to best fine-tune a pretrained model (e.g., Liu et al., 2022; He et al., 2022). We compare with one strategy used in FPT (Lu et al., 2022b) in Section 4.2.2 but leave further exploration for future work.
3As a concrete example, for an image tensor with embedding convolution kernel size k, the linear layer will yield an output of shape (S, k2K), which we transpose, pad, and reshape to (k2K,Hout,Wout). Finally, we apply pixelshuffle (Shi et al., 2016) to get an output of shape (K,Hin,Win).
4 EXPERIMENTS
Having introduced how ORCA addresses the dimension mismatch between the target and source datasets via architecture design and tackles the distribution mismatch via embedder learning, we now proceed with showing its empirical effectiveness. In the following, we will demonstrate that ORCA is the first approach that allows practitioners to obtain models better than hand-designed, AutoML-searched, and general-purpose architectures on a variety of diverse tasks. Then, we will analyze key components of ORCA to better understand the mechanism underlying cross-modal transfer.
Experiment Protocol. While our workflow accepts a wide range of pretrained transformers as model bodies, we use RoBERTa (Liu et al., 2019b) and Swin Transformers (Liu et al., 2021c), which are representatives of the most studied language and vision modalities, to exemplify ORCA’s efficacy. We implement the base models, which have around 100 million parameters, and use the pretrained weights available in the Hugging Face transformers library (Wolf et al., 2019). As stated in the introduction, we do not use the exact pretraining data to represent the source modalities because they are often not publicly available and can be too large to compute OTDD efficiently. We use the proxy datasets: CoNLL-2003 for RoBERTa and CIFAR-10 for Swin.
For each task, we first apply the hyperparameter tuning algorithm ASHA (Li et al., 2020a) to the standard fine-tuning baseline (“Fine-tuning” in Table 3) to identify suitable batch size, optimizer, learning rate, and weight decay. These hyperparameters are then applied to all fine-tuning baselines as well as ORCA. During embedder learning, while classification tasks naturally come with discrete labels required for computing OTDD, for dense prediction tasks where labels are high-dimensional maps, we perform clustering on the dense maps to generate pseudo labels, which not only preserves the intrinsic distribution of the target data but also speeds up OTDD computation. We manage our experiments using the Determined AI platform. All experiments are performed on NVIDIA V100 GPUs and results are averaged over 5 random seeds. For other experiment details, see Appendix A.3.
4.1 CAN PRETRAINED MODELS TRANSFER ACROSS MODALITY TO SOLVE DIVERSE TASKS?
In this section, we highlight the most important observation of this work: cross-modal fine-tuning with ORCA can solve a variety of tasks effectively and efficiently. To demonstrate this, we evaluate ORCA on 13 tasks detailed below. We first include 10 tasks from NAS-Bench-360, which covers problems such as PDE solving, protein folding, and cardiac disease detection. This benchmark contains tasks for 1D and 2D classification, 2D dense prediction, but not 1D dense prediction, so we added JSB Chorales, a music modeling dataset widely used for evaluating recurrent networks (Chung et al., 2017; Bai et al., 2018). We also added ListOps (parsing math expressions) (Tay et al., 2021) and Homology (classifying protein structure) (Rao et al., 2019) for comparison with FPT. Together, these 13 tasks represent a wide collection of modalities for comprehensive evaluation.
Following the taxonomy in Table 1, we consider three classes of baselines: (1) hand-designed expert architectures for each task, as identified by Tu et al. (2022), Rao et al. (2019), and Tay et al. (2021); (2) general-purpose models, as represented by Perceiver IO (Jaegle et al., 2022b); and (3) AutoML baselines, as represented by those evaluated in NAS-Bench-360 and DASH (Shen et al., 2022). We will compare with FPT later, the only remaining approach with a general workflow from Table 1.
In Table 2, we report the prediction error for each method on each task. ORCA achieves the lowest error rate on 10 of 13 tasks and is the most effective in terms of aggregated performance. This is also supported by the performance summary in Figure 2. More specifically, we outperform all hand-designed architectures on all tasks except ECG, where we rank second but do much better than the other methods. We also beat all AutoML baselines on all tasks except DeepSEA and NinaPro, where ORCA is second and third, respectively. The improvements from ORCA come at a small computational overhead associated with pretraining the embedder to match the source and target modalities. Table 5 in the Appendix shows the time needed for embedder learning with OTDD, which is a small portion (10.2% on average) of the fine-tuning time. ORCA’s efficiency and its state-of-the-art results on 10 tasks make it a practical tool for model development in diverse areas.
4 CoNLL-2003 is for named entity recognition. It is used to interpret language models (Jawahar et al., 2019).
5 NAS-Bench-360 is designed for testing how well ML algorithms can generalize and is a core component of the 2022 AutoML Decathlon competition. For a summary of included tasks, see Table 6 in the Appendix.
Our experiments further validate the findings in Lu et al. (2021) that pretrained transformers can learn knowledge transferable to seemingly unrelated tasks. In the following, we delve into the mechanism of ORCA to provide intuition for necessary components of successful cross-modal learning.
4.2 KEY FACTORS FOR SUCCESSFUL CROSS-MODAL TRANSFER
Here, we dissect the success of cross-modal transfer with ORCA through a series of ablation studies. As a preview, we identify three aspects to be key to its success: data alignment through embedding learning, full fine-tuning of all model weights, and suitable pretrained model selection.
4.2.1 MATCHING FEATURE DISTRIBUTIONS HELPS ADAPTATION
Table 2 shows that ORCA leads to effective transfer using the proposed three-stage workflow. However, as learning the embedder via OTDD is an instantiation of the general first-align-then-fine-tune paradigm, we ask the question: does cross-modal transfer work because of the specific application of OTDD or the core approach of data alignment across modalities?
To answer this question, we perform an ablation study on the embedding learning metrics and compare their performance to fine-tuning without embedder pretraining. We experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and Euclidean distance. The performance profile is shown in Figure 3, and the detailed results are shown in Appendix A.4. We highlight the following observations. First, bringing the target modality closer to the pretraining modality generally aids cross-modal transfer, regardless of which metric we minimize. This is evident from the fact that pretraining the embedder with any of the metrics can outperform vanilla fine-tuning without embedder learning on many
tasks. Second, among the evaluated metrics, OTDD leads to the best overall performance. This is why we use it in our workflow. The middle rows of Table 3 demonstrate that ORCA with OTDD consistently outperforms naive fine-tuning. This supports our argument that closing the gap between a new modality and the pretraining modality can facilitate a model’s adaptation to a new task.
To further isolate the impact of data alignment, we compare ORCA with a train-from-scratch baseline, which trains RoBERTa and Swin using only the target data for the same number of epochs. Table 3 shows that train-from-scratch is better than fine-tuning but worse than ORCA on many tasks like Satellite and DeepSEA, indicating that when the target modality differs significantly from the pretraining modality, naive fine-tuning may harm transfer, but aligning the feature distribution using ORCA can resolve this issue and benefit transfer. Indeed, recent work has shown that optimizing directly for the task loss may distort the pretrained weights and lead to suboptimal solutions (Kumar et al., 2022; Lee et al., 2022). By manipulating the target data distribution to look like the source distribution, we can lower the risk of catastrophic forgetting, which may explain the success of ORCA.
Lastly, we perform experiments to quantify the effect of data alignment from a task-wise perspective. We train the embedder for different numbers of epochs before fine-tuning to see how optimizing OTDD to various levels of convergence affects downstream performance. Figure 4 plots the fine-tuning accuracy along with the final OTDD objective for different levels of embedder pretraining. Evidently, as the dataset distance decreases, the final fine-tuning accuracy increases. This correlation supports the effectiveness of embedder learning for cross-modal transfer. In addition, we observe that learning the embedder prior to fine-tuning can stabilize training, as the performance variance of ORCA is consistently lower than that of standard fine-tuning.
4.2.2 FINE-TUNING ALL MODEL PARAMETERS FOR CROSS-MODAL TASKS
As discussed in Section 2, Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a related work that showed pretrained language models contain knowledge relevant for out-of-modality tasks. While FPT presented a general pipeline that transfers GPT-2 to tasks like CIFAR-10, Homology, and ListOps, the resulting models were not as good as those directly trained on the target data. FPT differs from ORCA in that (1) it does not pretrain an embedder for task-specific adaptation and (2) it only fine-tunes the layer norms. We have already verified the importance of (1). Now, to isolate the impact of (2), we evaluate ORCA with fine-tuning the layer norms vs. FPT on our task set.
The bottom rows of Table 3 show that ORCA with fine-tuning just the layer norms outperforms FPT, indicating that pretraining the embedding layers boosts the cross-modal performance of FPT. However, this performance gain is smaller than that seen in the full fine-tuning setting, which implies that full fine-tuning can take better advantage of the learned embeddings. Also, partial fine-tuning is less effective than full fine-tuning on all tasks except for DeepSEA. This exception might be due to the fact that full fine-tuning without learned embeddings is more prone to overfitting. In terms of runtime, FPT only results in less than 2x speedups compared with full fine-tuning (see Appendix A.5), despite the fact that we are updating significantly fewer parameters. This is unsurprising since gradients are still back-propagated through the entire network. Therefore, when computation allows, we recommend using ORCA with full fine-tuning for better downstream performance.
4.2.3 PRETRAINING MODALITY CAN AFFECT TRANSFER PERFORMANCE
Finally, we study how the pretraining modality affects fine-tuning performance. For experiments in Table 2, we chose pretrained models for each task based on the input dimension, i.e., we use RoBERTa for all 1D tasks and Swin for all 2D tasks. Now, we can switch the model bodies and apply ORCA. This is easy to implement because ORCA is model-agnostic and the embedder architecture handles all necessary input transformation to obtain sequence features. As shown in Table 4, fine-tuned RoBERTa outperforms fine-tuned Swin on the 1D task, and the final OTDD objective for RoBERTa is also smaller than that of Swin. We hypothesize that this is because the considered DeepSEA data (genomics sequences) are structured more like language than images, with discrete units of information and general grammatical rules. The FPT paper observes a similar trend for Homology. As for the 2D tasks, we again notice that models with better fine-tuning accuracy have smaller OTDDs. This suggests a way of selecting pretrained models from a predefined model hub for each task, e.g., by comparing the optimized OTDDs and picking the one with the smallest value.
Case Study: Low-Data Regime. Now that we have a better understanding of ORCA, recall that one of our motivations for transferring pretrained models to various modalities is to help task-solving in data-limited regimes, where training models from scratch can be challenging. To this end, we investigate whether ORCA can facilitate fine-tuning large-scale models on small target datasets.
Indeed, for vanilla fine-tuning, a small amount of data may not give enough signal to update the pretrained weights. However, it is possible to obtain a good feature embedder with the same amount of data using ORCA, which can then make fine-tuning easier. In Figure 5, we vary the amount of target data and plot the performance of ORCA and vanilla fine-tuning. The performance gain of ORCA increases as the amount of data used decreases. This shows that fine-tuning does suffer from limited data, but ORCA can considerably alleviate the problem and improve downstream performance. Moreover, ORCA allows us to use a third of the data to match the performance of standard fine-tuning. Thus, it can benefit model development in domains where data collection is costly.
Discussion and Future Work. We identify several future directions based on our experiment results. First, it is worth studying the effect of pretraining modality further and developing a systematic way of selecting pretrained models. Then, we can incorporate model selection into ORCA for a more automated transfer pipeline. Second, while ORCA leverages the simplest fine-tuning paradigm, we believe it is possible to combine it with more sophisticated transfer techniques such as adapters (He et al., 2022). We briefly study how prompting (Bahng et al., 2022; Jia et al., 2022) can be applied to diverse tasks in Appendix A.6 and find that it is in general less effective for out-of-modality problems, so we can possibly boost its performance using ORCA. Lastly, we currently evaluate ORCA on diverse 1D/2D tasks and in-modality vision tasks (Appendix A.7). It is also important to validate it on more settings, such as high-dimensional problems and reinforcement learning (Reid et al., 2022).
5 CONCLUSION
In this paper, we argue that an important step towards developing more general ML methods is to study how we can reuse existing models effectively for new and less-explored tasks. To this end, we propose a novel framework that allows transferring pretrained transformers to distinct downstream modalities. Our method, ORCA, can map target data from an arbitrary end task’s modality to a model’s pretraining modality to improve fine-tuning performance. We believe that this work not only signals the potential of large-scale pretraining for diverse tasks but also lays out a path for a largely uncharted data-centric paradigm in machine learning.
A APPENDIX
A.1 EMBEDDING LEARNING WITH OPTIMAL TRANSPORT DATASET DISTANCE
A.1.1 LITERATURE REVIEW
Due to the limited space, we do not give a full review of the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) in the main text. Here, we briefly recall the optimal transport (OT) distance and explain OTDD in detail.
Consider a complete and separable metric space X and let P(X ) be the set of probability measures on X . For α, β ∈ P(X ), let Π(α, β) be the set of joint probability distributions on X × X with marginals α and β in the first and second dimensions respectively. Then given a cost function c(·, ·) : X × X → R+, the classic OT distance with cost c is defined by:
$$\mathrm{OT}_c(\alpha, \beta) := \min_{\pi \in \Pi(\alpha, \beta)} \int_{\mathcal{X} \times \mathcal{X}} c(x, y)\, d\pi(x, y). \qquad (2)$$
When $\mathcal{X}$ is equipped with a metric $d_{\mathcal{X}}$, we can use $c(x, y) = d_{\mathcal{X}}(x, y)^p$ for some $p \geq 1$ and obtain the p-Wasserstein distance, $W_p(\alpha, \beta) := \big(\mathrm{OT}_{d_{\mathcal{X}}^p}(\alpha, \beta)\big)^{1/p}$.
Now consider the case of finite datasets with features in X and labels in a finite set Y . Each dataset can be considered a discrete distribution in P(X × Y). To define a distance between datasets, a natural approach is to define an appropriate cost function on Z := X × Y and consider the optimal transport distance. Indeed, for any metric dY on Y and any p ≥ 1, Z can be made a complete and separable metric space with metric
$$d_{\mathcal{Z}}\big((x, y), (x', y')\big) = \big(d_{\mathcal{X}}(x, x')^p + d_{\mathcal{Y}}(y, y')^p\big)^{1/p} \qquad (3)$$
It is usually not clear how to define a natural distance metric in Y , so instead we proceed by representing each class y ∈ Y by P (X|Y = y), the conditional distribution of features X given Y = y. More specifically, for a dataset D ∈ P(X × Y), denote this map from classes to conditional distributions by F (D, ·) : Y → P(X ). Then we can transform any dataset over X × Y into one over X × P(X ) via G(D) := (proj_X, F(D, proj_Y)). As discussed above, Wp is a natural notion of distance in P(X ), so by substituting Y ↦ P(X ) and dY ↦ Wp in Equation 3, we can define the (p-)optimal transport dataset distance between datasets DA and DB by
$$\mathrm{OTDD}(\mathcal{D}_A, \mathcal{D}_B) := \mathrm{OT}_{(d_{\mathcal{X}}^p \times W_p^p)^{1/p}}\big(G(\mathcal{D}_A), G(\mathcal{D}_B)\big) \qquad (4)$$
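For intuition, the exact distance in Equation 4 can be assembled directly from these definitions. The sketch below does so for two small labeled feature sets with p = 2, using the POT library's exact OT solver; it is a didactic reimplementation for illustration, not the microsoft/otdd package used in our experiments.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def exact_otdd(xa, ya, xb, yb, p=2):
    """OTDD between datasets (xa, ya) and (xb, yb); x* are (n, d) arrays, y* integer labels."""
    def w_pp(u, v):
        # Inner p-Wasserstein^p between two empirical class-conditional feature distributions.
        m = ot.dist(u, v, metric="euclidean") ** p
        return ot.emd2(np.ones(len(u)) / len(u), np.ones(len(v)) / len(v), m)

    # Label-to-label costs W_p^p between the class-conditional distributions (Equation 3 with d_Y = W_p).
    ca, cb = np.unique(ya), np.unique(yb)
    label_cost = {(i, j): w_pp(xa[ya == i], xb[yb == j]) for i in ca for j in cb}

    # Outer ground cost: d_X(x, x')^p plus the label cost of the corresponding class pair.
    c = ot.dist(xa, xb, metric="euclidean") ** p
    for i in ca:
        for j in cb:
            c[np.ix_(ya == i, yb == j)] += label_cost[(i, j)]

    cost = ot.emd2(np.ones(len(xa)) / len(xa), np.ones(len(xb)) / len(xb), c)
    return cost ** (1 / p)

rng = np.random.default_rng(0)
xa, ya = rng.normal(size=(60, 16)), rng.integers(0, 3, 60)
xb, yb = rng.normal(loc=0.5, size=(80, 16)), rng.integers(0, 5, 80)
print(exact_otdd(xa, ya, xb, yb))
```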
A.1.2 COMPUTATIONAL CONSIDERATIONS
As we aim for a practical fine-tuning workflow, computational cost is a crucial concern. While Alvarez-Melis & Fusi (2020) proposed two variants of OTDD (the exact computation and a Gaussian approximation), we observe from our experiments that optimizing the exact OTDD leads to better performance. In the following, we will focus on analyzing the computational cost of the exact OTDD.
Given datasets with D-dimensional feature vectors, estimating vanilla OT distances can be computationally expensive, with a worst-case complexity of O(D³ log D) (Pele & Werman, 2009). However, after adding an entropy regularization term ϵH(π | α⊗β) to Equation 2, where H is the relative entropy and ϵ controls the time-accuracy trade-off, the problem can be solved efficiently with the Sinkhorn algorithm (Cuturi, 2013). This reduces OT’s empirical complexity to O(D²) and makes the time cost for computing OTDD manageable for ORCA’s workflow.
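A minimal numerical illustration of this trade-off with the POT library is below; the regularization strength and data are arbitrary illustrative choices.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
xs, xt = rng.normal(size=(500, 32)), rng.normal(loc=0.3, size=(500, 32))
a = b = np.ones(500) / 500

m = ot.dist(xs, xt)   # squared-Euclidean cost matrix
m /= m.max()          # normalize so the regularization strength is easy to set

exact = ot.emd2(a, b, m)                    # exact OT cost (linear program)
entropic = ot.sinkhorn2(a, b, m, reg=5e-2)  # entropy-regularized (Sinkhorn) approximation
print(exact, entropic)
```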
During implementation of ORCA, we also observed memory issues for computing OTDD using the entire target and source datasets on GPUs. To alleviate this, we propose a class-wise subsampling strategy for approximating OTDD on GPUs (Algorithm 1). In short, we split the K-class target dataset into K datasets based on the labels and compute the class-wise OTDD between each single-class target dataset and the entire source dataset. Each class-wise OTDD can be approximated with the average of batch samples, similar to how stochastic gradient descent approximates gradient descent. After that, we approximate the OTDD between the target and source datasets using the weighted sum of the K class-wise OTDDs; a sketch of this procedure is given below. To verify that the approximation works empirically, we track the approximated OTDD (computed on GPUs) and the actual OTDD (computed on CPUs) and visualize the loss curves during ORCA’s embedder learning process (Figure 6). We can see that the estimated value adheres to the actual value.
Algorithm 1 Efficient approximation of OTDD using class-wise subsampling. Input: target dataset {x^t, y^t}, number of target classes K^t, source dataset S = {x^s, y^s}, subsample size b, subsample rounds R; for each class i ∈ [K^t] in the target dataset, subsample and average as described above.
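The following Python sketch is our reconstruction of Algorithm 1 from the description above. It is written against any callable `otdd_fn` that estimates the OTDD between two labeled samples (for example, the exact_otdd helper sketched earlier in this appendix); the class-frequency weighting is an assumption on our part.

```python
import numpy as np

def classwise_otdd(x_t, y_t, x_s, y_s, otdd_fn, b=256, rounds=4, seed=0):
    """Approximate OTDD(target, source): average R subsampled per-class estimates,
    then combine them with class-frequency weights (cf. Algorithm 1)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y_t, return_counts=True)
    total = 0.0
    for cls, cnt in zip(classes, counts):
        xi, yi = x_t[y_t == cls], y_t[y_t == cls]
        est = 0.0
        for _ in range(rounds):
            ti = rng.choice(len(xi), size=min(b, len(xi)), replace=False)
            si = rng.choice(len(x_s), size=min(b, len(x_s)), replace=False)
            est += otdd_fn(xi[ti], yi[ti], x_s[si], y_s[si]) / rounds
        total += (cnt / len(y_t)) * est  # weight each class-wise estimate by its class frequency
    return total
```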
Leveraging both the Sinkhorn algorithm and class-wise approximation, the embedder learning process only takes up a small fraction of the total fine-tuning time in practice, as shown in Table 5. Hence, we invest a reasonable time budget but achieve significantly improved cross-domain transfer performance using ORCA.
A.2 INFORMATION ABOUT EVALUATION TASKS
A.3 EXPERIMENT DETAILS
Below, we summarize details for implementing ORCA and evaluating it on the selected 13 tasks. The code and configuration file for reproducing each experiment can be found in the supplementary material. We will also release ORCA’s best checkpoint for each task later.
A.3.1 PRETRAINED MODELS
We evaluated ORCA with two pretrained models in our experiments. In Table 2, for all 2D tasks including CIFAR-100, Spherical, Darcy Flow, PSICOV, Cosmic, NinaPro, and FSD50K, we use the following model. As Swin has a fixed pretrained resolution, we reshape the inputs for our tasks to that resolution before feeding them into the model.
Name: Swin-base (Liu et al., 2021c) | Pretrain: ImageNet-22K | Resolution: 224×224 | Num Params: 88M | FLOPS: 15.4G | FPS: 278
For all 1D tasks including ECG, Satellite, DeepSEA, JSB Chorales, ListOps, and Homology, we use the following model:
Name: RoBERTa-base (Liu et al., 2019b) | Pretrain: Five English-language corpora | Num Params: 125M | FLOPS: 1.64E20
We use the Hugging Face transformers library (Wolf et al., 2019) to implement the pretrained models.
A.3.2 TASK DATA PREPARATION
For all the NAS-Bench-360 tasks, each dataset is preprocessed and split using the script available on https://github.com/rtu715/NAS-Bench-360, with the training set being used for hyperparameter tuning, embedding learning, and fine-tuning. We obtain the data processing script for JSB data from https://github.com/locuslab/TCN, for ListOps from https://github.com/kzl/universal-computation, and for Homology from https://github.com/songlab-cal/tape.
A.3.3 HYPERPARAMETER TUNING
As ORCA is both task-agnostic and model-agnostic, it can be applied to fine-tuning a variety of pretrained transformers on drastically different end tasks with distinct datasets. Hence, it is hard to define one set of fine-tuning hyperparameters for all (model, task) pairs. At the same time, optimizing large-scale pretrained transformers can be challenging due to their large model sizes, as the downstream performance depends largely on the hyperparameters used. For instance, using a large learning rate can distort pretrained weights and lead to catastrophic forgetting. Therefore, in our experiments, given a (model, task) pair, we first apply hyperparameter tuning using the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a) to the standard fine-tuning setting (i.e., after initializing the embedder and predictor architectures, directly updating all model weights to minimize the task loss) to identify a proper training configuration. Then, we use the same set of hyperparameters found for all our experiments for the particular (model, task) combination. Note that even though we did not explicitly state this in the main text, the hyperparameter tuning stage can be directly integrated into the ORCA workflow between stage 1 and stage 2. In this sense, ORCA is still an automated cross-modal transfer workflow that works for diverse tasks and different pretrained models.
The configuration space for ASHA is as follows:
• Batch size: 32, 128, 512, 1024 for Swin; 16, 56, 256, 512 for RoBERTa
• Optimizer: SGD, Adam, AdamW
• Learning rate: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
• Weight decay: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
Note that to fit each experiment on a single GPU, we set a fixed batch size (32 for Swin and 16 for RoBERTa) and vary the gradient accumulation steps instead of actually varying the batch size, but the effect is the same.
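The accumulation trick amounts to the standard loop below; this is a generic illustration rather than an excerpt from our code.

```python
import torch

def train_epoch(model, loader, optimizer, loss_fn, accum_steps=4, device="cuda"):
    """One epoch with gradient accumulation: the effective batch size is
    loader batch size * accum_steps, while only one micro-batch is in memory."""
    model.train()
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        loss = loss_fn(model(x.to(device)), y.to(device)) / accum_steps  # average over micro-batches
        loss.backward()                                                  # gradients accumulate in .grad
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```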
A.3.4 ORCA ONLY: EMBEDDING LEARNING WITH OTDD
After initializing the embedder architecture for each task, we train it to minimize the OTDD between the embedded target features and embedded source features.
For source datasets, we use CIFAR-10 for Swin and CONLL-2003 for RoBERTa. We sample 5000 data points to compute OTDD. In practice, we can pass the source data through the pretrained embedder once and save all the embedded features, so we don’t have to pay the cost of obtaining the source features each time we fine-tune a new model.
For classification tasks, we directly use the labels provided by the end task to compute OTDD. For dense tasks, we perform K-Means clustering on the target data to obtain pseudolabels for OTDD computation. The number of clusters is set to the number of classes of the source dataset, e.g., 10 for 2D tasks that use CIFAR-10 as the source dataset.
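A possible instantiation of this pseudo-labeling step, using scikit-learn's K-Means on flattened dense maps (the data and cluster count here are only illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def dense_pseudo_labels(dense_maps: np.ndarray, n_clusters: int = 10, seed: int = 0) -> np.ndarray:
    """Cluster flattened dense label maps (N, ...) into n_clusters pseudo-classes for OTDD."""
    flat = dense_maps.reshape(len(dense_maps), -1)
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(flat)

# e.g., 1000 dense maps of size 32x32 -> one pseudo-label per example
labels = dense_pseudo_labels(np.random.rand(1000, 32, 32), n_clusters=10)
```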
To compute the embedding learning objective, we use the OTDD implementation of the original paper provided here: https://github.com/microsoft/otdd. As for the hyperparameters, we use the batch size, learning rate, optimizer, and weight decay obtained from A.3.3. The others are fixed across different tasks:
• Embedding learning epochs: 60
• Learning rate scheduler: decay by 0.2 every 20 epochs
A.3.5 FINE-TUNING
Besides the searched hyperparameters, we also fix the following hyperparameters for fine-tuning.
• Fine-tuning epochs: 100 for Swin tasks, 60 for RoBERTa tasks
• Learning rate scheduler: we use linear decay with min lr = 0 and 5 warmup epochs
A.3.6 TRAIN-FROM-SCRATCH
This baseline is trained using the same hyperparameter configuration (number of epochs, batch size, learning rate, etc) as the fine-tuning baseline.
A.3.7 EVALUATION
When training/fine-tuning is finished, we evaluate the performance of all models following the NAS-Bench-360 protocol. We first report results of the target metric for each task by running the model of the last epoch on the test data. Then, we report aggregate results via performance profiles (Dolan & Moré, 2002), a technique that considers both outliers and small performance differences to compare methods across multiple tasks robustly. In such plots, each curve represents one method. For each value of τ on the x-axis, the curve shows the fraction of tasks on which the method is no worse than a τ-factor from the best. The performance profile for our experiments is shown in Figure 2.
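Concretely, a performance profile can be computed with a few lines of NumPy (the error values below are made up for illustration):

```python
import numpy as np

def performance_profile(errors: np.ndarray, taus: np.ndarray) -> np.ndarray:
    """errors: (n_methods, n_tasks) test errors. Returns, for each method and tau,
    the fraction of tasks on which the method is within a tau-factor of the best."""
    ratios = errors / errors.min(axis=0, keepdims=True)          # per-task ratio to the best method
    return (ratios[:, None, :] <= taus[None, :, None]).mean(-1)  # (n_methods, n_taus)

errors = np.array([[0.10, 0.30, 0.22],    # e.g., method A
                   [0.12, 0.28, 0.35]])   # e.g., method B
print(performance_profile(errors, np.linspace(1.0, 2.0, 5)))
```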
A.4 ABLATION STUDY ON EMBEDDING LEARNING METRICS
As motivated in Section 4.2.1, we present here an ablation study on the embedding learning metrics that we have considered for minimizing distribution dissimilarity. The results show that (1) performing feature alignment generally helps downstream adaptation, regardless of which metric we minimize; (2) OTDD leads to the best overall performance, so we chose it for our workflow. Our findings confirm that it is the general idea of data alignment, rather than a specific metric, that makes cross-modal transfer work.
Specifically, we experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and (pairwise) Euclidean distance. We learn the embedders to minimize these metrics and then fine-tune the pretrained models. The test errors are as follows.
A.5 RUNTIME OF ORCA VS. FPT
In Table 3, we compare with the FPT setting, which only fine-tunes the layer norms of the pretrained transformer models. As we have shown already, the downstream performance of fine-tuning only a subset of the parameters is less competitive than fine-tuning all parameters. Below, we show that the time saved for updating only layer norms is also not that significant. Therefore, we suggest performing full fine-tuning when time and computational resources allow.
A.6 PROMPTING
Apart from fine-tuning, a new paradigm of working with large-scale pretrained models is prompting, i.e., we do not update the pretrained weights but only modify the input and query the model for the desired output. Existing language prompting methods (e.g., Liu et al., 2022) are generally not suitable for cross-modal learning due to the difficulty of designing natural prompts for diverse data types. For the 1D tasks we study, there is even no notion of “discrete tokens.” Another line of work studies visual prompting by modifying 2D inputs for querying vision transformers. We test two such algorithms, VP (Bahng et al., 2022) and VPT (Jia et al., 2022), on three classification tasks in our task suite. They are not applicable to the remaining tasks because either the inputs cannot be reshaped to look like images or the outputs are not classification logits.
We test VPT with the pretrained Swin-Base Transformer (the same model we used for ORCA) and VP with the pretrained ResNet-50 (as the official implementation does not support vision transformers). The results are shown in Table 9. In general, prompt tuning is less effective than fine-tuning, and the two baselines perform significantly worse than ORCA. This is not surprising given that prompting methods are more intuitively suited to in-modality transfer, where the target and the source data have similar structure or semantic meaning. However, when the target data (e.g., electromyography signals, as in the NinaPro dataset) is drastically different from image data, it is difficult to design prompts or expect good performance by only modifying the inputs without fine-tuning the pretrained models.
A.7 COMPATIBILITY WITH IN-MODALITY TRANSFER
A natural question to ask is whether ORCA can also tackle in-modality tasks. While we design ORCA to enable cross-modal transfer, we hypothesize that it should facilitate same-modality transfer if two domains have large dataset distance. To validate this, we test ORCA on DomainNet
datasets, which are commonly used to evaluate homogeneous DA methods (Peng et al., 2019). From Table 10, we can see that ORCA achieves significantly better performance than the fine-tuning baseline, which shows that the feature matching of ORCA can also help in-domain generalization.
Summary Of The Paper
This paper focuses on cross-modal transfer learning that aims to utilize pre-trained models for downstream tasks with diverse modalities. It devises a task-specific embedding network to map the target data distribution to the pretraining modality using the optimal transport dataset distance metric. Extensive experiments verify the efficacy of the proposed ORCA.
Strengths And Weaknesses
Strengths
ORCA achieves good performance compared to other approaches on 13 tasks with different modalities, input dimensions, and prediction types.
The importance of matching feature distributions is well supported, and the method is compatible with in-modality transfer.
Weaknesses
The technical contribution is not significant. Though the performance is good and the effect of OTDD is verified, the main idea of using OTDD as a target for data distribution mapping is somewhat trivial.
The custom embedding network uses a convolution layer for patch embedding like ViT. Does it only process image-like data? If so, it is inconsistent with the generality claimed earlier.
What is the effect of other metrics in Stage I?
Clarity, Quality, Novelty And Reproducibility
The key idea and the pipeline are described clearly. According to the detailed instructions, I believe that this idea is easy to reproduce. However, it lacks enough novelty since the whole pipeline is built on previous techniques.
Cross-modal, task-specific fine-tuning is a recent line of research, with most work focusing on transferring NLP models to other modalities like vision (Kiela et al., 2019), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). These works provide initial evidence of the cross-modal transfer capacity of pretrained models. However, they focus on hand-tailoring to a single modality, e.g., by adding ad-hoc encoders that transform agent messages (Li et al., 2020c) or decision trajectories (Reid et al., 2022) into tokens. Even when not relying on fine-tuning, work like LIFT (Dinh et al., 2022) that attempts cross-modal learning via prompting (Liu et al., 2021a) still require ad-hoc conversion of tasks to natural text.
Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a general cross-modal fine-tuning workflow that transforms input features to be compatible with the pretrained models. Although FPT and ORCA are both general-purpose workflows, FPT does not account for differences between the target and pretraining modalities, which we show is necessary to achieve accurate predictive models and outperform existing baselines.
3 ORCA WORKFLOW
In this section, we first formalize the problem setup and then introduce the ORCA workflow for adapting pretrained transformers to diverse end tasks.
Problem Setup. A domain D consists of a feature space X , a label space Y , and a joint probability distribution P (X ,Y). In the cross-modal setting we study, the target (end-task) domain Dt and source (pretraining) domain Ds differ not only in the feature space but also the label space and by extension have differing probability distributions, i.e., X t ̸= X s, Yt ̸= Ys, and P t(X t,Yt) ̸= P s(X s,Ys). This is in contrast to the transductive transfer learning setting addressed by domain adaptation, where source and target domains share the label space and end task (Pan & Yang, 2009).
Given target data {xti, yti}i∈[nt] sampled from a joint distribution P t in domain Dt, our goal is to learn a model mt that correctly maps each input xt to its label yt. We are interested in achieving this using pretrained transformers. Thus, we assume access to a model ms that has been trained with data {xsi , ysi }i∈[ns] in the source domain Ds, where (xsi , ysi ) ∼ P s. Then, given a predefined loss function l, we aim to develop mt based on ms such that L(mt) = E(xt,yt)∼P t [l(mt(xt), yt)] is minimized. This problem formulation does not define modality explicitly and includes both inmodal and cross-modal transfer. Given the generality of the tasks we wish to explore, it is hard to provide a precise mathematical definition, so we rely on semantics to differentiate the two settings: intuitively, cross-modal domains (e.g., natural images vs. protein sequences) are more distinct to each other than in-modal domains (e.g., photos taken in two different geographical locations).
Having defined the learning problem, we now present our three-stage cross-modal transfer workflow: (1) architecture design to support diverse input-output dimensions, (2) embedder pretraining to align the source and target feature distributions, and (3) fine-tuning to minimize the target task loss.
3.1 TASK-SPECIFIC ARCHITECTURE DESIGN
Applying pretrained models to another downstream problem usually requires addressing the problem of mismatched dimensions. To make ORCA work for input and output tensors of different dimensions, we decompose a transformer-based learner m into three parts (Figure 1 stage 1): an embedder f that transforms input x into a sequence of features, a model body g that applies a pretrained transformer (i.e., series of attention layers) to the embedded features, and a predictor h that generates predictions with the desired output shape. ORCA uses pretrained architecture and weights to initialize the model body g but replaces f and h with layers designed to match the target data with the pretrained model’s embedding dimension. Next, we describe each module in detail.
Custom Embedding Network. Denote the feature space compatible with the pretrained model body as Ẋ . For a transformer with maximum sequence length S and embedding dimension D, Ẋ = RS×D. The embedding network f : X → Ẋ is designed to take in a tensor of arbitrary dimension from X and transform it to the feature space Ẋ . In ORCA, f is composed of a convolutional layer with input channel cin, output channel cout, kernel size k, and stride k, generalizing the patching operations used in vision transformers to 1D and higher-dimensional cases. We set cin to the input channel of x and cout to the embedding dimension D. To take full advantage of the representation power of the pretrained model, we choose the smallest k for which the product of output shape excluding the channel dimension ≤ S. That is, when we flatten the non-channel dimensions of the output tensors after the convolution, pad and then transpose it, we can obtain sequence features with shape S ×D. Finally, we add a layer norm and a positional embedding to obtain ẋ.2
Pretrained Transformer Body. The model body g takes the embedding ẋ ∈ Ẋ as input and outputs features ẏ ∈ Ẏ; the dot is used to differentiate these intermediate representations from the raw inputs and labels. For transformer-based g, both the input and output feature spaces Ẋ , Ẏ are RS×D.
Custom Prediction Head. Finally, the prediction head h must take ẏ ∈ Ẏ as input and return a task-dependent output tensor. Different tasks often specify different types of outputs, e.g., classification tasks require logits in RK where K is the number of classes, and dense prediction tasks require dense maps with the same spatial dimension as the input and per index logits correspond-
2As a concrete example, consider an image tensor with shape (Cin, Hin,Win). We first choose stride k for the convolution such that Hout × Wout ≈ S to get an output tensor with shape (D,Hout,Wout). Then, we flatten it to shape (D,Hout ×Wout), pad along the last dimension to shape (D,S), and transpose.
ing to K classes. Thus, it is crucial to define task-specific output modules and fine-tune them when transferring to new tasks. In our workflow, we use the simplest possible instantiation of the predictor modules. For classification, we apply average pooling along the sequence length dimension (or take the classification token of language models) to obtain 1D tensors with length D and then use a linear layer that maps D to K. For dense prediction, we apply a linear layer to the sequence outputs so the resulting tensor has shape (S, kdim(Y)−1K), where kdim(Y)−1 is the downsampling factor of the embedder convolution kernel with stride k. This upsamples by the same factor that the embedder convolution downsampled. Then, we can mold the tensor to the desired output dimension.3
With an architecture based on the pretrained model but is also compatible with our target task, we can now turn our attention to pretraining the embedder via matching source and target distributions.
3.2 EMBEDDING LEARNING DOR DATA ALIGNMENT
Intuitively, transferring knowledge across similar modalities should be easier than across distant ones. Hence, given a target task in a new modality, we aim to manipulate the task data so that they become closer to the pretraining modality. We use the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) to measure the closeness between datasets in different domains. Unlike the classic OT distance which operates only on the feature distributions, OTDD considers both the feature and the label information and can work even if the label sets are unrelated or disjoint. Thus, while OT is mostly used for unsupervised or semi-supervised domain adaptation (Courty et al., 2017; Yan et al., 2018), OTDD is particularly suitable for distance estimation between cross-modal labeled datasets. In the following, we will briefly explain how ORCA uses OTDD. We refer the readers to Alvarez-Melis & Fusi (2020) for a detailed exposition on the metric itself.
Formally, let f^s : X^s → Ẋ denote the pretrained embedder (the part of m^s that transforms the source data to sequence features) and f^t : X^t → Ẋ be a randomly initialized target embedder with the architecture discussed in the previous section. We train f^t to minimize the expected OTDD between the embedding-label distributions (f^t(x^t), y^t) and (f^s(x^s), y^s). That is, for both datasets, we first represent each class label as a distribution over the in-class features: y ↦ P(Ẋ | Y = y). This transforms the source and target label sets into the shared space of distributions over Ẋ. Then, we can define the distance d_Y(y^t, y^s) between different labels using the p-Wasserstein distance associated with a metric d_Ẋ over the feature space, e.g., the ℓ2 distance ∥ẋ^t − ẋ^s∥_2^2. This allows us to measure the difference between distributions in Ẋ × Y using the following p-Wasserstein metric:

d_{Ẋ×Y}((ẋ^t, y^t), (ẋ^s, y^s)) = (d_Ẋ(ẋ^t, ẋ^s)^p + d_Y(y^t, y^s)^p)^{1/p}.  (1)
Plugging this into the OT formulation leads to the OTDD over Ẋ × Y, which we optimize to learn f^t. Leveraging the clustering structure of the datasets, OTDD provides a better distance estimate between the target and source data, demonstrating better alignment ability in practice. In Section 4.2.1, we examine several distance metrics for learning the embedder. While data alignment generally improves downstream performance for all metrics, OTDD leads to the best empirical results.
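As a concrete illustration, the embedder-learning stage can be written as a short training loop in which only f^t is updated. The sketch below assumes PyTorch; otdd_loss is a hypothetical callable standing in for a differentiable OTDD estimator (e.g., the implementation referenced in Appendix A.3.4), and the source features are assumed to be precomputed once with the frozen pretrained embedder.

```python
import torch

def train_embedder(f_t, target_loader, src_feats, src_labels, otdd_loss,
                   epochs=60, lr=1e-4):
    """Stage-2 sketch: update only the target embedder f_t to minimize OTDD.

    `otdd_loss` is a hypothetical differentiable estimator of the OTDD between
    the embedded target batch (with labels) and the precomputed source features.
    The pretrained transformer body and the predictor are untouched here.
    """
    opt = torch.optim.AdamW(f_t.parameters(), lr=lr)
    for _ in range(epochs):
        for x_t, y_t in target_loader:
            z_t = f_t(x_t)                                    # embed target batch into sequence features
            loss = otdd_loss(z_t, y_t, src_feats, src_labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return f_t
```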
As for the computational cost of embedding learning, we analyze the complexity of OTDD in Appendix A.1 and show that this stage takes much less time than the later fine-tuning stage in Appendix A.5. Thus, ORCA achieves a significant performance gain at a low cost for feature alignment.
3.3 FINE-TUNING FOR DOWNSTREAM ADAPTATION
After training the embedder, we perform full fine-tuning by updating all model parameters to minimize the target loss. This step further aligns the embedder and predictor with the pretrained model to improve downstream performance. We perform an ablation study comparing ORCA to standard fine-tuning without feature matching in Section 4.2.1 and show that our approach improves prediction accuracy and reduces performance variance. There are orthogonal lines of work that study how to best fine-tune a pretrained model (e.g., Liu et al., 2022; He et al., 2022). We compare with one strategy used in FPT (Lu et al., 2022b) in Section 4.2.2 but leave further exploration for future work.
³As a concrete example, for an image tensor with embedder convolution kernel size k, the linear layer will yield an output of shape (S, k^2 K), which we transpose, pad, and reshape to (k^2 K, H_out, W_out). Finally, we apply pixelshuffle (Shi et al., 2016) to get an output of shape (K, H_in, W_in).
4 EXPERIMENTS
Having introduced how ORCA addresses the dimension mismatch between the target and source datasets via architecture design and tackles the distribution mismatch via embedder learning, we now proceed to show its empirical effectiveness. In the following, we will demonstrate that ORCA is the first approach that allows practitioners to obtain models better than hand-designed, AutoML-searched, and general-purpose architectures on a variety of diverse tasks. Then, we will analyze key components of ORCA to better understand the mechanism underlying cross-modal transfer.
Experiment Protocol. While our workflow accepts a wide range of pretrained transformers as model bodies, we use RoBERTa (Liu et al., 2019b) and Swin Transformers (Liu et al., 2021c), which are representatives of the most studied language and vision modalities, to exemplify ORCA’s efficacy. We implement the base models, which have around 100 million parameters, and use the pretrained weights available in the Hugging Face transformers library (Wolf et al., 2019). As stated in the introduction, we do not use the exact pretraining data to represent the source modalities because they are often not publicly available and can be too large to compute OTDD efficiently. We instead use proxy datasets: CoNLL-2003⁴ for RoBERTa and CIFAR-10 for Swin.
For each task, we first apply the hyperparameter tuning algorithm ASHA (Li et al., 2020a) to the standard fine-tuning baseline (“Fine-tuning” in Table 3) to identify suitable batch size, optimizer, learning rate, and weight decay. These hyperparameters are then applied to all fine-tuning baselines as well as ORCA. During embedder learning, while classification tasks naturally come with discrete labels required for computing OTDD, for dense prediction tasks where labels are high-dimensional maps, we perform clustering on the dense maps to generate pseudo labels, which not only preserves the intrinsic distribution of the target data but also speeds up OTDD computation. We manage our experiments using the Determined AI platform. All experiments are performed on NVIDIA V100 GPUs and results are averaged over 5 random seeds. For other experiment details, see Appendix A.3.
4.1 CAN PRETRAINED MODELS TRANSFER ACROSS MODALITY TO SOLVE DIVERSE TASKS?
In this section, we highlight the most important observation of this work: cross-modal fine-tuning with ORCA can solve a variety of tasks effectively and efficiently. To demonstrate this, we evaluate ORCA on 13 tasks detailed below. We first include 10 tasks from NAS-Bench-360⁵, which covers problems such as PDE solving, protein folding, and cardiac disease detection. This benchmark contains tasks for 1D and 2D classification and 2D dense prediction, but not 1D dense prediction, so we added JSB Chorales, a music modeling dataset widely used for evaluating recurrent networks (Chung et al., 2017; Bai et al., 2018). We also added ListOps (parsing math expressions) (Tay et al., 2021) and Homology (classifying protein structure) (Rao et al., 2019) for comparison with FPT. Together, these 13 tasks represent a wide collection of modalities for comprehensive evaluation.
Following the taxonomy in Table 1, we consider three classes of baselines: (1) hand-designed expert architectures for each task, as identified by Tu et al. (2022), Rao et al. (2019), and Tay et al. (2021); (2) general-purpose models, as represented by Perceiver IO (Jaegle et al., 2022b); and (3) AutoML baselines, as represented by those evaluated in NAS-Bench-360 and DASH (Shen et al., 2022). We compare later with FPT, the only remaining approach with a general workflow from Table 1.
In Table 2, we report the prediction error for each method on each task. ORCA achieves the lowest error rate on 10 of 13 tasks and is the most effective in terms of aggregated performance. This is also supported by the performance summary in Figure 2. More specifically, we outperform all hand-designed architectures on all tasks except ECG, where we rank second but do much better than the other methods. We also beat all AutoML baselines on all tasks except DeepSEA and NinaPro, where ORCA is second and third, respectively. The improvements from ORCA come at a small computational overhead associated with pretraining the embedder to match the source and target modalities. Table 5 in the Appendix shows the time needed for embedder learning with OTDD, which is a small portion (10.2% on average) of the fine-tuning time. ORCA’s efficiency and its state-of-the-art results on 10 tasks make it a practical tool for model development in diverse areas.

⁴CoNLL-2003 is for named entity recognition. It is used to interpret language models (Jawahar et al., 2019).
⁵NAS-Bench-360 is designed for testing how well ML algorithms can generalize and is a core component of the 2022 AutoML Decathlon competition. For a summary of included tasks, see Table 6 in the Appendix.
Our experiments further validate the findings in Lu et al. (2021) that pretrained transformers can learn knowledge transferable to seemingly unrelated tasks. In the following, we delve into the mechanism of ORCA to provide intuition for necessary components of successful cross-modal learning.
4.2 KEY FACTORS FOR SUCCESSFUL CROSS-MODAL TRANSFER
Here, we dissect the success of cross-modal transfer with ORCA through a series of ablation studies. As a preview, we identify three aspects to be key to its success: data alignment through embedding learning, full fine-tuning of all model weights, and suitable pretrained model selection.
4.2.1 MATCHING FEATURE DISTRIBUTIONS HELPS ADAPTATION
Table 2 shows that ORCA leads to effective transfer using the proposed three-stage workflow. However, as learning the embedder via OTDD is an instantiation of the general first-align-then-fine-tune paradigm, we ask the question: does cross-modal transfer work because of the specific application of OTDD or the core approach of data alignment across modalities?
To answer this question, we perform an ablation study on the embedding learning metrics and compare their performance to fine-tuning without embedder pretraining. We experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and Euclidean distance. The performance profile is shown in Figure 3, and the detailed results are shown in Appendix A.4. We highlight the following observations. First, bringing the target modality closer to the pretraining modality generally aids cross-modal transfer, regardless of which metric we minimize. This is evident from the fact that pretraining the embedder with any of the metrics can outperform vanilla fine-tuning without embedder learning on many tasks. Second, among the evaluated metrics, OTDD leads to the best overall performance, which is why we use it in our workflow. The middle rows of Table 3 demonstrate that ORCA with OTDD consistently outperforms naive fine-tuning. This supports our argument that closing the gap between a new modality and the pretraining modality can facilitate a model’s adaptation to a new task.
To further isolate the impact of data alignment, we compare ORCA with a train-from-scratch baseline, which trains RoBERTa and Swin using only the target data for the same number of epochs. Table 3 shows that train-from-scratch is better than fine-tuning but worse than ORCA on many tasks like Satellite and DeepSEA, indicating that when the target modality differs significantly from the pretraining modality, naive fine-tuning may harm transfer, but aligning the feature distribution using ORCA can resolve this issue and benefit transfer. Indeed, recent work has shown that optimizing directly for the task loss may distort the pretrained weights and lead to suboptimal solutions (Kumar et al., 2022; Lee et al., 2022). By manipulating the target data distribution to look like the source distribution, we can lower the risk of catastrophic forgetting, which may explain the success of ORCA.
Lastly, we perform experiments to quantify the effect of data alignment from a task-wise perspective. We train the embedder for different numbers of epochs before fine-tuning to see how optimizing OTDD to various levels of convergence affects downstream performance. Figure 4 plots the fine-tuning accuracy along with the final OTDD objective for different levels of embedder pretraining. Evidently, as the dataset distance decreases, the final fine-tuning accuracy increases. This correlation supports the effectiveness of embedder learning for cross-modal transfer. In addition, we observe that learning the embedder prior to fine-tuning can stabilize training, as the performance variance of ORCA is consistently lower than that of standard fine-tuning.
4.2.2 FINE-TUNING ALL MODEL PARAMETERS FOR CROSS-MODAL TASKS
As discussed in Section 2, Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a related work that showed pretrained language models contain knowledge relevant for out-of-modality tasks. While FPT presented a general pipeline that transfers GPT-2 to tasks like CIFAR-10, Homology, and ListOps, the resulting models were not as good as those directly trained on the target data. FPT differs from ORCA in that (1) it does not pretrain an embedder for task-specific adaptation and (2) it only fine-tunes the layer norms. We have already verified the importance of (1). Now, to isolate the impact of (2), we evaluate ORCA with fine-tuning the layer norms vs. FPT on our task set.
The bottom rows of Table 3 show that ORCA with fine-tuning just the layer norms outperforms FPT, indicating that pretraining the embedding layers boosts the cross-modal performance of FPT. However, this performance gain is smaller than that seen in the full fine-tuning setting, which implies that full fine-tuning can take better advantage of the learned embeddings. Also, partial fine-tuning is less effective than full fine-tuning on all tasks except for DeepSEA. This exception might be due to the fact that full fine-tuning without learned embeddings is more prone to overfitting. In terms of runtime, FPT yields less than a 2x speedup compared with full fine-tuning (see Appendix A.5), despite the fact that we are updating significantly fewer parameters. This is unsurprising since gradients are still back-propagated through the entire network. Therefore, when computation allows, we recommend using ORCA with full fine-tuning for better downstream performance.
4.2.3 PRETRAINING MODALITY CAN AFFECT TRANSFER PERFORMANCE
Finally, we study how the pretraining modality affects fine-tuning performance. For the experiments in Table 2, we chose pretrained models for each task based on the input dimension, i.e., we use RoBERTa for all 1D tasks and Swin for all 2D tasks. Now, we can switch the model bodies and apply ORCA. This is easy to implement because ORCA is model-agnostic and the embedder architecture handles all necessary input transformation to obtain sequence features. As shown in Table 4, fine-tuned RoBERTa outperforms fine-tuned Swin on the 1D task, and the final OTDD objective for RoBERTa is also smaller than that of Swin. We hypothesize that this is because the considered DeepSEA data (genomics sequences) are structured more like language than images, with discrete units of information and general grammatical rules. The FPT paper observes a similar trend for Homology. As for the 2D tasks, we again notice that models with better fine-tuning accuracy have smaller OTDDs. This suggests a way of selecting pretrained models from a predefined model hub for each task, e.g., by comparing the optimized OTDDs and picking the one with the smallest value.
Case Study: Low-Data Regime. Now that we have a better understanding of ORCA, recall that one of our motivations for transferring pretrained models to various modalities is to help task-solving in data-limited regimes, where training models from scratch can be challenging. To this end, we investigate whether ORCA can facilitate fine-tuning large-scale models on small target datasets.
Indeed, for vanilla fine-tuning, a small amount of data may not give enough signal to update the pretrained weights. However, it is possible to obtain a good feature embedder with the same amount of data using ORCA, which can then make fine-tuning easier. In Figure 5, we vary the amount of target data and plot the performance of ORCA and vanilla fine-tuning. The performance gain of ORCA increases as the amount of data used decreases. This shows that fine-tuning does suffer from limited data, but ORCA can considerably alleviate the problem and improve downstream performance. Moreover, ORCA allows us to use a third of the data to match the performance of standard fine-tuning. Thus, it can benefit model development in domains where data collection is costly.
Discussion and Future Work. We identify several future directions based on our experimental results. First, it is worth studying the effect of the pretraining modality further and developing a systematic way of selecting pretrained models. Then, we can incorporate model selection into ORCA for a more automated transfer pipeline. Second, while ORCA leverages the simplest fine-tuning paradigm, we believe it is possible to combine it with more sophisticated transfer techniques such as adapters (He et al., 2022). We briefly study how prompting (Bahng et al., 2022; Jia et al., 2022) can be applied to diverse tasks in Appendix A.6 and find that it is in general less effective for out-of-modality problems, so we could possibly boost its performance using ORCA. Lastly, we currently evaluate ORCA on diverse 1D/2D tasks and in-modality vision tasks (Appendix A.7). It is also important to validate it in more settings, such as high-dimensional problems and reinforcement learning (Reid et al., 2022).
5 CONCLUSION
In this paper, we argue that an important step towards developing more general ML methods is to study how we can reuse existing models effectively for new and less-explored tasks. To this end, we propose a novel framework that allows transferring pretrained transformers to distinct downstream modalities. Our method, ORCA, can map target data from an arbitrary end task’s modality to a model’s pretraining modality to improve fine-tuning performance. We believe that this work not only signals the potential of large-scale pretraining for diverse tasks but also lays out a path for a largely uncharted data-centric paradigm in machine learning.
A APPENDIX
A.1 EMBEDDING LEARNING WITH OPTIMAL TRANSPORT DATASET DISTANCE
A.1.1 LITERATURE REVIEW
Due to the limited space, we do not give a full review of the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) in the main text. Here, we briefly recall the optimal transport (OT) distance and explain OTDD in detail.
Consider a complete and separable metric space X and let P(X) be the set of probability measures on X. For α, β ∈ P(X), let Π(α, β) be the set of joint probability distributions on X × X with marginals α and β in the first and second dimensions, respectively. Then, given a cost function c(·, ·) : X × X → R+, the classic OT distance with cost c is defined by

OT_c(α, β) := min_{π ∈ Π(α, β)} ∫_{X×X} c(x, y) dπ(x, y).  (2)

When X is equipped with a metric d_X, we can use c(x, y) = d_X(x, y)^p for some p ≥ 1 and obtain the p-Wasserstein distance, W_p(α, β) := (OT_{d_X^p}(α, β))^{1/p}.
Now consider the case of finite datasets with features in X and labels in a finite set Y. Each dataset can be considered a discrete distribution in P(X × Y). To define a distance between datasets, a natural approach is to define an appropriate cost function on Z := X × Y and consider the optimal transport distance. Indeed, for any metric d_Y on Y and any p ≥ 1, Z can be made a complete and separable metric space with metric

d_Z((x, y), (x′, y′)) = (d_X(x, x′)^p + d_Y(y, y′)^p)^{1/p}.  (3)

It is usually not clear how to define a natural distance metric on Y, so instead we proceed by representing each class y ∈ Y by P(X | Y = y), the conditional distribution of features X given Y = y. More specifically, for a dataset D ∈ P(X × Y), denote this map from classes to conditional distributions by F(D, ·) : Y → P(X). Then we can transform any dataset over X × Y into one over X × P(X) via G(D) := (proj_X, F(D, proj_Y)). As discussed above, W_p is a natural notion of distance on P(X), so by substituting Y ↦ P(X) and d_Y ↦ W_p in Equation 3, we can define the (p-)optimal transport dataset distance between datasets D_A and D_B by

OTDD(D_A, D_B) := (OT_{d_X^p + W_p^p}(G(D_A), G(D_B)))^{1/p}.  (4)
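The construction above can be made concrete with a small numerical sketch. The code below uses the POT library (Python Optimal Transport) to (i) represent each class by the empirical distribution of its features, (ii) compute pairwise label distances with the 2-Wasserstein distance, and (iii) solve the augmented OT problem of Equation 4 for p = 2. This brute-force version is only practical for small datasets and is not the implementation used in ORCA (which relies on the original OTDD package); function and variable names are ours.

```python
import numpy as np
import ot  # Python Optimal Transport (POT)

def label_distance_matrix(feats_a, labels_a, feats_b, labels_b):
    """W_2^2 distances between per-class empirical feature distributions (d_Y in Eq. 3)."""
    classes_a, classes_b = np.unique(labels_a), np.unique(labels_b)
    D = np.zeros((len(classes_a), len(classes_b)))
    for i, ca in enumerate(classes_a):
        Xa = feats_a[labels_a == ca]
        for j, cb in enumerate(classes_b):
            Xb = feats_b[labels_b == cb]
            M = ot.dist(Xa, Xb, metric='sqeuclidean')         # squared Euclidean ground cost
            D[i, j] = ot.emd2(ot.unif(len(Xa)), ot.unif(len(Xb)), M)
    return D, classes_a, classes_b

def otdd(feats_a, labels_a, feats_b, labels_b):
    """Exact OTDD (p = 2) between two small labeled datasets, following Eq. 4."""
    Dy, ca, cb = label_distance_matrix(feats_a, labels_a, feats_b, labels_b)
    ia = np.searchsorted(ca, labels_a)                        # map each label to its row in Dy
    ib = np.searchsorted(cb, labels_b)
    C = ot.dist(feats_a, feats_b, metric='sqeuclidean') + Dy[np.ix_(ia, ib)]
    cost = ot.emd2(ot.unif(len(feats_a)), ot.unif(len(feats_b)), C)
    return cost ** 0.5
```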
A.1.2 COMPUTATIONAL CONSIDERATIONS
As we aim for a practical fine-tuning workflow, computational cost is a crucial concern. While Alvarez-Melis & Fusi (2020) proposed two variants of OTDD, an exact one and a Gaussian approximation, we observe from our experiments that optimizing the exact OTDD leads to better performance. In the following, we focus on analyzing the computational cost of the exact OTDD.
Given datasets with D-dimensional feature vectors, estimating vanilla OT distances can be computationally expensive, with a worst-case complexity of O(D^3 log D) (Pele & Werman, 2009). However, after adding an entropy regularization term ϵH(π | α ⊗ β) to Equation 2, where H is the relative entropy and ϵ controls the time-accuracy trade-off, the problem can be solved efficiently with the Sinkhorn algorithm (Cuturi, 2013). This reduces OT’s empirical complexity to O(D^2) and makes the time cost of computing OTDD manageable for ORCA’s workflow.
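The following snippet, using the POT library, illustrates the trade-off: the exact solver and the Sinkhorn solver are called on the same randomly generated point clouds, with the regularization strength reg playing the role of ϵ above. The data and parameter values are arbitrary.

```python
import numpy as np
import ot  # Python Optimal Transport (POT)

rng = np.random.default_rng(0)
xs, xt = rng.normal(size=(500, 64)), rng.normal(size=(600, 64))
M = ot.dist(xs, xt, metric='sqeuclidean')
M /= M.max()                                       # normalize the cost matrix for numerical stability
a, b = ot.unif(len(xs)), ot.unif(len(xt))

exact_cost = ot.emd2(a, b, M)                      # exact linear-programming solution
sinkhorn_cost = ot.sinkhorn2(a, b, M, reg=0.05)    # entropic approximation; cheaper per iteration
```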
During the implementation of ORCA, we also observed memory issues when computing OTDD using the entire target and source datasets on GPUs. To alleviate this, we propose a class-wise subsampling strategy for approximating OTDD on GPUs (Algorithm 1). In short, we split the K-class target dataset into K datasets based on the labels and compute the class-wise OTDD between each single-class target dataset and the entire source dataset. Each class-wise OTDD can be approximated with the average over batch samples, similar to how stochastic gradient descent approximates gradient descent. After that, we approximate the OTDD between the target and source datasets using the weighted sum of the K class-wise OTDDs. To verify that the approximation works empirically, we track the approximated OTDD (computed on GPUs) and the actual OTDD (computed on CPUs) and visualize the loss curves during ORCA’s embedder learning process (Figure 6). We can see that the estimated value adheres to the actual value.

Algorithm 1 Efficient approximation of OTDD using class-wise subsampling.
Input: target dataset {x^t, y^t}, number of target classes K_t, source dataset S = {x^s, y^s}, subsample size b, subsample rounds R
for each class i ∈ [K_t] in the target dataset do
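For concreteness, the sketch below shows one way to implement the class-wise subsampling approximation described above. Because the body of Algorithm 1 is not reproduced here, the loop structure and the class-frequency weighting are our reading of the prose, not the exact algorithm; otdd_fn stands in for any OTDD estimator (e.g., the one sketched in A.1.1), and the batch size and number of rounds are illustrative.

```python
import numpy as np

def classwise_otdd(target_feats, target_labels, source_feats, source_labels,
                   otdd_fn, batch_size=512, rounds=4, seed=0):
    """Approximate OTDD as a weighted sum of per-class, subsampled estimates."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(target_labels, return_counts=True)
    weights = counts / counts.sum()                      # assumed class-frequency weighting
    total = 0.0
    for cls, w in zip(classes, weights):
        idx_c = np.where(target_labels == cls)[0]
        estimates = []
        for _ in range(rounds):
            tgt_idx = rng.choice(idx_c, size=min(batch_size, len(idx_c)), replace=False)
            src_idx = rng.choice(len(source_feats), size=batch_size, replace=False)
            estimates.append(otdd_fn(target_feats[tgt_idx], target_labels[tgt_idx],
                                     source_feats[src_idx], source_labels[src_idx]))
        total += w * np.mean(estimates)                  # average over rounds, then weight by class
    return total
```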
Leveraging both the Sinkhorn algorithm and class-wise approximation, the embedder learning process only takes up a small fraction of the total fine-tuning time in practice, as shown in Table 5. Hence, we invest a reasonable time budget but achieve significantly improved cross-domain transfer performance using ORCA.
A.2 INFORMATION ABOUT EVALUATION TASKS
A.3 EXPERIMENT DETAILS
Below, we summarize details for implementing ORCA and evaluating it on the selected 13 tasks. The code and configuration file for reproducing each experiment can be found in the supplementary material. We will also release ORCA’s best checkpoint for each task later.
A.3.1 PRETRAINED MODELS
We evaluated ORCA with two pretrained models in our experiments. In Table 2, for all 2D tasks, including CIFAR-100, Spherical, Darcy Flow, PSICOV, Cosmic, NinaPro, and FSD50K, we use the following model. As Swin has a fixed pretraining resolution, we reshape the inputs for our tasks to that resolution before feeding them into the model.
Swin-base (Liu et al., 2021c): pretrained on ImageNet-22K at resolution 224×224; 88M parameters; 15.4G FLOPS; 278 FPS.
For all 1D tasks, including ECG, Satellite, DeepSEA, JSB Chorales, ListOps, and Homology, we use the following model:
RoBERTa-base (Liu et al., 2019b): pretrained on five English-language corpora; 125M parameters; 1.64E20 FLOPS.
We use the Hugging Face transformers library (Wolf et al., 2019) to implement the pretrained models.
A.3.2 TASK DATA PREPARATION
For all the NAS-Bench-360 tasks, each dataset is preprocessed and split using the script available on https://github.com/rtu715/NAS-Bench-360, with the training set being used for hyperparameter tuning, embedding learning, and fine-tuning. We obtain the data processing script for JSB data from https://github.com/locuslab/TCN, for ListOps from https://github.com/kzl/universal-computation, and for Homology from https://github.com/songlab-cal/tape.
A.3.3 HYPERPARAMETER TUNING
As ORCA is both task-agnostic and model-agnostic, it can be applied to fine-tune a variety of pretrained transformers on drastically different end tasks with distinct datasets. Hence, it is hard to define one set of fine-tuning hyperparameters for all (model, task) pairs. At the same time, optimizing large-scale pretrained transformers can be challenging due to their large model sizes, as the downstream performance depends heavily on the hyperparameters used. For instance, using a large learning rate can distort pretrained weights and lead to catastrophic forgetting. Therefore, in our experiments, given a (model, task) pair, we first apply hyperparameter tuning using the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a) to the standard fine-tuning setting (i.e., after initializing the embedder and predictor architectures, directly updating all model weights to minimize the task loss) to identify a proper training configuration. Then, we use the identified hyperparameters for all our experiments with that particular (model, task) combination. Note that even though we did not explicitly state this in the main text, the hyperparameter tuning stage can be directly integrated into the ORCA workflow between stage 1 and stage 2. In this sense, ORCA is still an automated cross-modal transfer workflow that works for diverse tasks and different pretrained models.
The configuration space for ASHA is as follows:
• Batch size: 32, 128, 512, 1024 for Swin; 16, 56, 256, 512 for RoBERTa
• Optimizer: SGD, Adam, AdamW
• Learning rate: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
• Weight decay: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
Note that to fit each experiment on a single GPU, we set a fixed micro-batch size (32 for Swin and 16 for RoBERTa) and vary the gradient accumulation steps instead of actually varying the batch size, but the effect is the same.
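A minimal sketch of this gradient-accumulation trick is shown below; the effective batch size is the micro-batch size times the number of accumulation steps (e.g., 32 × 16 = 512 for Swin). The model, criterion, loader, and optimizer objects are assumed to be defined elsewhere.

```python
accum_steps = 16                                   # effective batch = micro-batch * accum_steps
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = criterion(model(x), y) / accum_steps    # scale so the summed gradient matches a large batch
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```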
A.3.4 ORCA ONLY: EMBEDDING LEARNING WITH OTDD
After initializing the embedder architecture for each task, we train it to minimize the OTDD between the embedded target features and embedded source features.
For source datasets, we use CIFAR-10 for Swin and CoNLL-2003 for RoBERTa. We sample 5000 data points to compute OTDD. In practice, we can pass the source data through the pretrained embedder once and save all the embedded features, so we do not have to pay the cost of obtaining the source features each time we fine-tune a new model.
For classification tasks, we directly use the labels provided by the end task to compute OTDD. For dense tasks, we perform K-Means clustering on the target data to obtain pseudolabels for OTDD computation. The number of clusters is set to the number of classes of the source dataset, e.g., 10 for 2D tasks that use CIFAR-10 as the source dataset.
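A small sketch of the pseudo-label generation for dense tasks is given below; it assumes the dense label maps are available as a NumPy array and that scikit-learn is used for clustering, with the number of clusters set to the source label-set size (e.g., 10 for CIFAR-10). The function name and defaults are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def dense_pseudo_labels(dense_targets: np.ndarray, num_clusters: int = 10, seed: int = 0):
    """Cluster flattened dense label maps into pseudo-classes for OTDD computation."""
    flat = dense_targets.reshape(len(dense_targets), -1)   # one flattened map per example
    return KMeans(n_clusters=num_clusters, random_state=seed, n_init=10).fit_predict(flat)
```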
To compute the embedding learning objective, we use the OTDD implementation of the original paper provided here: https://github.com/microsoft/otdd. As for the hyperparameters, we use the batch size, learning rate, optimizer, and weight decay obtained from A.3.3. The others are fixed across different tasks:
• Embedding learning epochs: 60
• Learning rate scheduler: decay by 0.2 every 20 epochs
A.3.5 FINE-TUNING
Besides the searched hyperparameters, we also fix the following hyperparameters for fine-tuning.

• Fine-tuning epochs: 100 for Swin tasks, 60 for RoBERTa tasks
• Learning rate scheduler: linear decay with min lr = 0 and 5 warmup epochs
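For reference, this schedule can be instantiated with the linear warmup-decay helper in the Hugging Face transformers library, as sketched below; the optimizer, steps_per_epoch, and num_epochs variables are assumed to be defined by the training script.

```python
from transformers import get_linear_schedule_with_warmup

scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=5 * steps_per_epoch,             # 5 warmup epochs
    num_training_steps=num_epochs * steps_per_epoch,  # linear decay to 0 afterwards
)
# scheduler.step() is called after each optimizer.step() during fine-tuning.
```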
A.3.6 TRAIN-FROM-SCRATCH
This baseline is trained using the same hyperparameter configuration (number of epochs, batch size, learning rate, etc) as the fine-tuning baseline.
A.3.7 EVALUATION
When training/fine-tuning is finished, we evaluate the performance of all models following the NAS-Bench-360 protocol. We first report results of the target metric for each task by running the model from the last epoch on the test data. Then, we report aggregate results via performance profiles (Dolan & Moré, 2002), a technique that considers both outliers and small performance differences to compare methods across multiple tasks robustly. In such plots, each curve represents one method; at each τ on the x-axis, the curve shows the fraction of tasks on which that method is no worse than a τ-factor from the best. The performance profile for our experiments is shown in Figure 2.
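Computing such a profile from a table of per-task errors is straightforward; the sketch below assumes a (num_methods, num_tasks) array of test errors where lower is better.

```python
import numpy as np

def performance_profile(errors: np.ndarray, taus):
    """Dolan-More profile: for each method and each tau, the fraction of tasks on
    which that method is within a tau-factor of the best method on that task."""
    ratios = errors / errors.min(axis=0, keepdims=True)          # per-task ratio to the best method
    return np.stack([(ratios <= tau).mean(axis=1) for tau in taus], axis=1)
```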
A.4 ABLATION STUDY ON EMBEDDING LEARNING METRICS
As motivated in Section 4.2.1, we present here an ablation study on the embedding learning metrics that we have considered for minimizing distribution dissimilarity. The results show that (1) performing feature alignment generally helps downstream adaptation, regardless of which metric we minimize; (2) OTDD leads to the best overall performance, so we chose it for our workflow. Our findings confirm that it is the general idea of data alignment, rather than a specific metric, that makes cross-modal transfer work.
Specifically, we experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and (pairwise) Euclidean distance. We learn the embedders to minimize these metrics and then fine-tune the pretrained models. The test errors are as follows.
A.5 RUNTIME OF ORCA VS. FPT
In Table 3, we compare with the FPT setting, which only fine-tunes the layer norms of the pretrained transformer models. As we have shown already, the downstream performance of fine-tuning only a subset of the parameters is less competitive than fine-tuning all parameters. Below, we show that the time saved for updating only layer norms is also not that significant. Therefore, we suggest performing full fine-tuning when time and computational resources allow.
A.6 PROMPTING
Apart from fine-tuning, a new paradigm for working with large-scale pretrained models is prompting, i.e., we do not update the pretrained weights but only modify the input and query the model for the desired output. Existing language prompting methods (e.g., Liu et al., 2022) are generally not suitable for cross-modal learning due to the difficulty of designing natural prompts for diverse data types. For the 1D tasks we study, there is not even a notion of “discrete tokens.” Another line of work studies visual prompting by modifying 2D inputs for querying vision transformers. We test two such algorithms, VP (Bahng et al., 2022) and VPT (Jia et al., 2022), on three classification tasks in our task suite. They are not applicable to the remaining tasks because either the inputs cannot be reshaped to look like images or the outputs are not classification logits.
We test VPT with the pretrained Swin-Base Transformer (the same model we used for ORCA) and VP with the pretrained ResNet-50 (as the official implementation does not support vision transformers). The results are shown in Table 9. In general, prompt tuning is less effective than fine-tuning, and the two baselines perform significantly worse than ORCA. This is not surprising given that prompting methods are more intuitively suited to in-modality transfer, where the target and the source data have similar structure or semantic meaning. However, when the target data (e.g., electromyography signals, as in the NinaPro dataset) is drastically different from image data, it is difficult to design prompts or expect good performance by only modifying the inputs without fine-tuning the pretrained models.
A.7 COMPATIBILITY WITH IN-MODALITY TRANSFER
A natural question to ask is whether ORCA can also tackle in-modality tasks. While we design ORCA to enable cross-modal transfer, we hypothesize that it should facilitate same-modality transfer if two domains have large dataset distance. To validate this, we test ORCA on DomainNet
datasets, which are commonly used to evaluate homogeneous DA methods (Peng et al., 2019). From Table 10, we can see that ORCA achieves significantly better performance than the fine-tuning baseline, which shows that the feature matching of ORCA can also help in-domain generalization.

1. What is the focus of the paper regarding practical problems in adapting pre-trained models?
2. What are the strengths of the proposed approach, particularly in its significance and idea?
3. What are the weaknesses of the paper, especially regarding the definition and usage of OTDD?
4. How does the reviewer assess the clarity, quality, originality, and reproducibility of the paper's content?
5. Are there any concerns regarding the comparison with other recently proposed methods?

Summary Of The Paper
This paper studies a practical problem: how to effectively transfer pretrained models to diverse tasks. To this end, the authors propose ORCA, which conducts task-specific adaptation by minimizing the OTDD between the proxy source embeddings and target embeddings. Extensive transfer experiments on different tasks have been conducted to verify the effectiveness of the proposed ORCA.
Strengths And Weaknesses
Strengths: In the big model era, how to effectively adapt pre-trained models to different downstream tasks is an interesting and practical problem. Therefore, studying the cross-modal transfer ability of large-scale pre-trained models is significant.
Weaknesses:
The key idea of the proposed method is to use the OTDD between the proxy source data and target data to fine-tune the given pre-trained model. However, the definition and usage of OTDD are not explained clearly enough. More importantly, from the current presentation, OTDD appears to be a modified version of OT that additionally incorporates label information. From this perspective, the reviewer wonders how OTDD works in scenarios where the source and target data share no labels.
The authors argue that the proposed ORCA is an effective transfer learning method for pre-trained models compared to vanilla fine-tuning and hand-crafted prompt tuning. Although the empirical results on two popular vision and language models (Swin and RoBERTa) show the superiority of the proposed method over the vanilla baselines, the effectiveness of ORCA should be further supported by comparing it to recently proposed vision and language prompt tuning methods such as [1].

[1] Visual Prompting: Modifying Pixel Space to Adapt Pre-trained Models
Clarity, Quality, Novelty And Reproducibility
Quality: The paper is likely to have a modest impact on the community.
Clarity: The paper is well organized but the presentation has minor details that could be improved.
Originality: The main ideas of the paper are not novel or have limited novelty. Please see the weaknesses.